Re: Odd incident with stgpool nextpool/migration

2022-11-18 Thread Deschner, Roger Douglas
Completely normal, working as designed. We do this on purpose. Depending on how 
you set the migration and reclamation thresholds, that first FILE storage pool 
will be internally reclaimed before any migration starts. That reclamation 
process will be labeled "Migration". (This has tripped me up on a Dark And 
Stormy Night when something broke and I had to figure it out, until I realized, 
"I did this".)

To stop this, set the first stgpool to have RECLAIM=100. An alternative is to 
set RECLAIMSTGPOOL to be your second storage pool (the Dell Powervault), though 
in that case these reclamation processes will still be called Migration.
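For reference, both suggestions map to ordinary UPDATE STGPOOL parameters. A minimal macro-style sketch, using the TSMSTG pool name from the process output quoted below and a hypothetical PVSTG name for the Powervault pool:

/* Option 1: stop internal reclamation of the first FILE pool altogether */
update stgpool TSMSTG reclaim=100

/* Option 2: have reclamation write into the second pool instead         */
/* (PVSTG is a hypothetical name for the Powervault pool)                */
update stgpool TSMSTG reclaimstgpool=PVSTG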

Roger Deschner
University of Illinois at Chicago
"I have not lost my mind. It is backed up on tape somewhere"


From: Zoltan Forray 
Sent: Tuesday, November 15, 2022 18:04
Subject: Odd incident with stgpool nextpool/migration

ISP RHEL Linux 8.1.14.200

We just had an odd incident occur and are questioning whether this is a "feature", a "new
restriction", or a "bug".

We have a primary disk stgpool (regular FILE DEVCLASS no DEDUPE) with a
NEXTPOOL to an NFS mount stgpool (also regular FILE DEVCLASS no DEDUPE).
The disk pool was filling faster than it could migrate.

We recently resurrected an old 200TB Dell Powervault attached to this
server (regular FILE DEVCLASS WITH DEDUPE and NEXTPOOL is tape) and as an
"emergency" measure, decided to change the disk stgpool NEXTPOOL to point
to the Powervault.

Instead of redirecting migration to the Powervault, the migration tasks are
migrating BACK INTO the disk stgpool itself, which is at 90%.

Process Number: 2,451
Process Description: Migration
Process Status: Volume */tsmpool01/00059802.BFS* (storage pool TSMSTG),
[snippage for brevity] Current input volume: */tsmpool01/00059802.BFS.*
Current output volume(s): */tsmpool01/0005A027.BFS.*

While I realize that technically this is called "Reclamation":
#1 - it is odd that the process is still being called MIGRATION.
#2 - it isn't doing what we asked - migrating data to another stgpool - but is
instead writing back into the same pool that has the issue/shortage.

Is this a new restriction that you can't NEXTPOOL from a NON-dedupe to a
DEDUPE stgpool?  Or have we found a bug?
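One way to confirm that this is threshold-driven reclamation rather than a NEXTPOOL problem is to compare the pool's reclamation settings with what the process is actually reading and writing. A minimal macro-style sketch, using the pool and process number from the output above:

/* If input and output volumes are both in TSMSTG and the pool's       */
/* Reclamation Threshold is below 100, this is internal reclamation    */
/* being reported under the "Migration" label.                         */
query stgpool TSMSTG format=detailed
query process 2451
query actlog begintime=-12 search=TSMSTG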
--
*Zoltan Forray*
Enterprise Backup Administrator
VMware Systems Administrator
Enterprise Compute & Storage Platforms Team
VCU Infrastructure Services
www.ucc.vcu.edu
zfor...@vcu.edu - 804-828-4807
Don't be a phishing victim - VCU and other reputable organizations will
never use email to request that you reply with your password, social
security number or confidential personal information. For more details
visit http://phishing.vcu.edu/


Odd incident with stgpool nextpool/migration

2022-11-15 Thread Zoltan Forray
ISP RHEL Linux 8.1.14.200

We just had an odd incident occur and are questioning whether this is a "feature", a "new
restriction", or a "bug".

We have a primary disk stgpool (regular FILE DEVCLASS no DEDUPE) with a
NEXTPOOL to an NFS mount stgpool (also regular FILE DEVCLASS no DEDUPE).
The disk pool was filling faster than it could migrate.

We recently resurrected an old 200TB Dell Powervault attached to this
server (regular FILE DEVCLASS WITH DEDUPE and NEXTPOOL is tape) and as an
"emergency" measure, decided to change the disk stgpool NEXTPOOL to point
to the Powervault.

Instead of redirecting migration to the Powervault, the migration tasks are
migrating BACK INTO the disk stgpool itself, which is at 90%.

Process Number: 2,451
Process Description: Migration
Process Status: Volume */tsmpool01/00059802.BFS* (storage pool TSMSTG),
[snippage for brevity] Current input volume: */tsmpool01/00059802.BFS.*
Current output volume(s): */tsmpool01/0005A027.BFS.*

While I realize that technically this is called "Reclamation":
#1 - it is odd that the process is still being called MIGRATION.
#2 - it isn't doing what we asked - migrating data to another stgpool - but is
instead writing back into the same pool that has the issue/shortage.

Is this a new restriction that you can't NEXTPOOL from a NON-dedupe to a
DEDUPE stgpool?  Or have we found a bug?
--
*Zoltan Forray*
Enterprise Backup Administrator
VMware Systems Administrator
Enterprise Compute & Storage Platforms Team
VCU Infrastructure Services
www.ucc.vcu.edu
zfor...@vcu.edu - 804-828-4807
Don't be a phishing victim - VCU and other reputable organizations will
never use email to request that you reply with your password, social
security number or confidential personal information. For more details
visit http://phishing.vcu.edu/


Re: TSM 6.2.3 data migration

2022-09-09 Thread Julien Sauvanet
Can you check whether you have the NOMIGRRECL option set in your dsmserv.opt?
You may have followed this procedure but forgot to remove the entries afterward?
https://www.ibm.com/support/pages/recommended-dsmserv-restore-db-point-time-procedure
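If so, the temporary server options that a point-in-time database restore procedure of this kind typically has you add (and later remove) look roughly like the sketch below. The exact list here is from memory, so verify it against the linked page before touching dsmserv.opt:

* Temporary dsmserv.opt entries often used during a point-in-time DB restore
* (remove them and restart the server once normal processing should resume)
NOMIGRRECL
DISABLESCHEDS YES
EXPINTERVAL 0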


Cordialement / Kind regards

Julien Sauvanet 
He/Him/His
Technical Advisor Spectrum Protect/Spectrum Protect Plus EMEA


Smarter-Essential-Trusted
Technical Advisor Information Website

Phone: +33 6 76 75 43 95
Email: sauva...@fr.ibm.com
Webex: https://ibm.webex.com/meet/sauvanet

Compagnie IBM France
Siège Social : 17, avenue de l'Europe, 92275 Bois-Colombes Cedex
RCS Nanterre 552 118 465
Forme Sociale : S.A.S.
Capital Social : 663.779.730,90 €
SIRET : 552 118 465 03644 - Code NAF 6203Z

-Original Message-
From: ADSM: Dist Stor Manager  On Behalf Of D'Antonio, 
Vincent E. III [US-US]
Sent: Friday, September 9, 2022 2:39 PM
To: ADSM-L@VM.MARIST.EDU
Subject: [EXTERNAL] [ADSM-L] TSM 6.2.3 data migration

Good Morning,

Had an issue with TSM a few weeks ago and did a DB restore.  I had to drop the
TSM DB before I could do the restore, and now that the restore is done, migration
and reclamation do not run.

I have had several people looking at this, but we cannot figure out what is going
on.  Does anyone know what I can check or how to correct it?

Vincent D'Antonio



Re: [E] [ADSM-L] Migration Plans

2019-04-18 Thread Michael Stahlberg
Hi all,

Many thanks to all who sent an answer. It was very helpful for me.

thanks

regards

Michael Stahlberg



On 4/17/19 3:42 PM, Mondore, E Andrew wrote:
> We are also planning to migrate from AIX to Linux.  One thing to note is that 
> if you are using physical tape drives like we are, you must migrate to a 
> physical machine.  You cannot use a virtual machine such as VMware.
>
> +
> E. Andrew Mondore  Email:  mond...@rpi.edu
> Senior Systems Programmer  URL:  http://www.rpi.edu/~mondore
> Rensselaer Polytechnic Institute
>
>


Migration Plans

2019-04-17 Thread Mondore, E Andrew
We are also planning to migrate from AIX to Linux.  One thing to note is that 
if you are using physical tape drives like we are, you must migrate to a 
physical machine.  You cannot use a virtual machine such as VMware.

+
E. Andrew Mondore  Email:  mond...@rpi.edu
Senior Systems Programmer  URL:  http://www.rpi.edu/~mondore
Rensselaer Polytechnic Institute




Re: Migration plans

2019-04-16 Thread Stefan Folkerts
I think it depends on the size of and load on the server.
For smaller environments we even use the midrange read-intensive SSDs in
RAID 1 (2 of them), or sometimes even RAID 5 with a spare, and they run for
years; this works just fine for S and M blueprints.
For very intensive environments we tend to use the write-intensive SSDs
in RAID 1 (L blueprints).

We run on SUSE Linux, and this has been a very stable platform for many of
our customers and for our own cloud platform.
With internal SSDs, make sure you buy the best internal RAID controller you
can get, with the correct SSD upgrades; don't skimp on that part.

Size the active log correctly and place it on SSD as well. Make everything
a bit bigger than it needs to be; you want to have at least 10GB of
free space on the active log filesystem.
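For readers new to the server options involved, the active/archive log placement discussed here is controlled in dsmserv.opt. A minimal sketch; the paths and size below are illustrative assumptions, not values recommended in this thread:

* Illustrative dsmserv.opt entries for log placement (example values only)
ACTIVELOGDIRECTORY /tsm/activelog
ACTIVELOGSIZE 131072
ARCHLOGDIRECTORY /tsm/archlog
MIRRORLOGDIRECTORY /tsm/mirrorlog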




On Tue, Apr 16, 2019 at 11:16 AM Michael Stahlberg <
michael.stahlb...@de.verizon.com> wrote:

> HI all,
>
> after long working with ADSM/TSM and migrating from Solaris to AIX about
> 9 years ago, we are now asked to refresh our hardware again. It came out
> that our old backup server are the only AIX server we still have and we
> should thinking about migrating to linux. I'm not real happy about it,
> but I think that it will work. I have some more doubts about the
> hardware we should use. I know that we should have a lot of disks for
> the database, but our databases aren't very big (the biggest is about
> 400 GByte) and to get small disks is difficult. At the moment we have an
> AIX server with an external storage (DS3500) with a lot of disks. The
> plans are to migrate to a HP DL380 system with internal disks. The
> database should be on SSDs so we are not sure if we still should use a
> lot of disks, or if it makes sense to use only 2 or 4 disks mirrored
> with a spare disk, or if it might be useful to use Raid5. Has anybody
> experiences with it, please. The documentation I found about this is in
> my eyes not up to date and so I'm asking this group now, for their
> experiences about this.
> It will be also fine, if somebody did such a migration already. I read
> about export and import the database from AIX to Linux. Has somebody do
> this, please. How did it work?
>
> many thanks in advanced
>
> regards
>
> Michael Stahlberg
>


Migration plans

2019-04-16 Thread Michael Stahlberg
Hi all,

After working with ADSM/TSM for a long time and migrating from Solaris to AIX about
9 years ago, we are now being asked to refresh our hardware again. It turned out
that our old backup server is the only AIX server we still have, and we
should be thinking about migrating to Linux. I'm not really happy about it,
but I think that it will work. I have some more doubts about the
hardware we should use. I know that we should have a lot of disks for
the database, but our databases aren't very big (the biggest is about
400 GByte) and small disks are difficult to get. At the moment we have an
AIX server with external storage (a DS3500) with a lot of disks. The
plan is to migrate to an HP DL380 system with internal disks. The
database should be on SSDs, so we are not sure whether we should still use a
lot of disks, whether it makes sense to use only 2 or 4 disks mirrored
with a spare disk, or whether it might be useful to use RAID 5. Does anybody
have experience with this? The documentation I found about this is, in
my eyes, not up to date, so I'm now asking this group for their
experiences.
It would also be good to hear from anybody who has already done such a migration. I read
about exporting and importing the database from AIX to Linux. Has somebody done
this, and how did it work?

Many thanks in advance

regards

Michael Stahlberg


Re: TSM Migration Question

2016-09-21 Thread Plair, Ricky
The problem was corrected, but I have no idea how.

I stopped the migration, and replication kicked off. I let the replication run
for an hour or so while I was at lunch. When I got back I stopped replication
and restarted the migration. I have no idea how, but it used the correct volumes
this time.

If this makes any sense, by all means please explain it to me.

I appreciate all the help.

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Ron 
Delaware
Sent: Wednesday, September 21, 2016 12:55 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] TSM Migration Question

Ricky,

could you please send the output of the following commands:
1.   Q MOUNT
2. q act begint=-12 se=Migration

Also, the only way that the stgpool would migrate back to itself would be if 
there was a loop, meaning your disk pool points to the tape pool as the next 
stgpool, and your tape pool points to the disk pool as the next stgpool. If 
your tape pool was to hit the high migration mark, it would start a migration 
to the disk pool



_
Ronald C. Delaware
IBM IT Plus Certified Specialist - Expert IBM Corporation | System Lab Services 
IBM Certified Solutions Advisor - Spectrum Storage IBM Certified Spectrum Scale 
4.1 & Spectrum Protect 7.1.3 IBM Certified Cloud Object Storage (Cleversafe) 
Open Group IT Certified - Masters
916-458-5726 (Office)
925-457-9221 (cell phone)
email: ron.delaw...@us.ibm.com
Storage Services Offerings








From:   "Plair, Ricky" 
To: ADSM-L@VM.MARIST.EDU
Date:   09/21/2016 08:57 AM
Subject:Re: [ADSM-L] TSM Migration Question
Sent by:"ADSM: Dist Stor Manager" 



Nothing out of the normal, the below is kind of odd but I'm not sure it 
has anything to do with my problem. 

Now here is something, I have another disk pool that is supposed to 
migrate to the tapepool and now it's doing the same thing. Migrating to 
itself.

What the heck.


09/21/2016 10:58:21   ANR1341I Scratch volume /ddstgpool/A7F7.BFS has 
been
   deleted from storage pool DDSTGPOOL. (SESSION: 
427155)
09/21/2016 10:58:24   ANR1341I Scratch volume /ddstgpool2/A7FA.BFS has 
been
   deleted from storage pool DDSTGPOOL. (SESSION: 
427155)
09/21/2016 10:59:24   ANR1341I Scratch volume /ddstgpool/A7BC.BFS has 
been
   deleted from storage pool DDSTGPOOL. (SESSION: 
427155)

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of 
Skylar Thompson
Sent: Wednesday, September 21, 2016 11:35 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] TSM Migration Question

The only oddity I see is that DDSTGPOOL4500 has a NEXTSTGPOOL of TAPEPOOL.
Shouldn't cause any problems now since utilization is 0% but would get 
triggered once you hit the HIGHMIG threshold.

Is there anything in the activity log for the errant migration processes?

On Wed, Sep 21, 2016 at 03:28:53PM +, Plair, Ricky wrote:
> OLD STORAGE POOL
>
> tsm: PROD-TSM01-VM>q stg ddstgpool f=d
>
> Storage Pool Name: DDSTGPOOL
> Storage Pool Type: Primary
> Device Class Name: DDFILE
>Estimated Capacity: 402,224 G
>Space Trigger Util: 69.4
>  Pct Util: 70.4
>  Pct Migr: 70.4
>   Pct Logical: 95.9
>  High Mig Pct: 100
>   Low Mig Pct: 95
>   Migration Delay: 0
>Migration Continue: Yes
>   Migration Processes: 26
> Reclamation Processes: 10
> Next Storage Pool: DDSTGPOOL4500
>  Reclaim Storage Pool:
>Maximum Size Threshold: No Limit
>Access: Read/Write
>   Description:
> Overflow Location:
> Cache Migrated Files?:
>Collocate?: No
> Reclamation Threshold: 70
> Offsite Reclamation Limit:
>   Maximum Scratch Volumes Allowed: 3,000
>Number of Scratch Volumes Used: 2,947
> Delay Period for Volume Reuse: 0 Day(s)
>Migration in Progress?: No
>  Amount Migrated (MB): 0.00
>  Elapsed Migration Time (seconds): 4,560
>  Reclamation in Progress?: Yes
>Last Update by (administrator): RPLAIR
> Last Update Date/Time: 09/21/2016 09:05:51
>  Storage Pool Data Format: Native
>  Copy Storage Pool(s):
>   Active Data Pool(s):
>   Continue Copy on Error?: Yes
>  

Re: TSM Migration Question

2016-09-21 Thread Ron Delaware
Ricky,

could you please send the output of the following commands:
1.   Q MOUNT
2. q act begint=-12 se=Migration

Also, the only way that the stgpool would migrate back to itself would be
if there were a loop, meaning your disk pool points to the tape pool as the
next stgpool, and your tape pool points to the disk pool as the next
stgpool. If your tape pool were to hit the high migration mark, it would
start a migration to the disk pool.



_
Ronald C. Delaware
IBM IT Plus Certified Specialist - Expert
IBM Corporation | System Lab Services
IBM Certified Solutions Advisor - Spectrum Storage
IBM Certified Spectrum Scale 4.1 & Spectrum Protect 7.1.3
IBM Certified Cloud Object Storage (Cleversafe) 
Open Group IT Certified - Masters
916-458-5726 (Office)
925-457-9221 (cell phone)
email: ron.delaw...@us.ibm.com
Storage Services Offerings








From:   "Plair, Ricky" 
To: ADSM-L@VM.MARIST.EDU
Date:   09/21/2016 08:57 AM
Subject:Re: [ADSM-L] TSM Migration Question
Sent by:"ADSM: Dist Stor Manager" 



Nothing out of the normal, the below is kind of odd but I'm not sure it 
has anything to do with my problem. 

Now here is something, I have another disk pool that is supposed to 
migrate to the tapepool and now it's doing the same thing. Migrating to 
itself.

What the heck.


09/21/2016 10:58:21   ANR1341I Scratch volume /ddstgpool/A7F7.BFS has 
been
   deleted from storage pool DDSTGPOOL. (SESSION: 
427155)
09/21/2016 10:58:24   ANR1341I Scratch volume /ddstgpool2/A7FA.BFS has 
been
   deleted from storage pool DDSTGPOOL. (SESSION: 
427155)
09/21/2016 10:59:24   ANR1341I Scratch volume /ddstgpool/A7BC.BFS has 
been
   deleted from storage pool DDSTGPOOL. (SESSION: 
427155)

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of 
Skylar Thompson
Sent: Wednesday, September 21, 2016 11:35 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] TSM Migration Question

The only oddity I see is that DDSTGPOOL4500 has a NEXTSTGPOOL of TAPEPOOL.
Shouldn't cause any problems now since utilization is 0% but would get 
triggered once you hit the HIGHMIG threshold.

Is there anything in the activity log for the errant migration processes?

On Wed, Sep 21, 2016 at 03:28:53PM +, Plair, Ricky wrote:
> OLD STORAGE POOL
>
> tsm: PROD-TSM01-VM>q stg ddstgpool f=d
>
> Storage Pool Name: DDSTGPOOL
> Storage Pool Type: Primary
> Device Class Name: DDFILE
>Estimated Capacity: 402,224 G
>Space Trigger Util: 69.4
>  Pct Util: 70.4
>  Pct Migr: 70.4
>   Pct Logical: 95.9
>  High Mig Pct: 100
>   Low Mig Pct: 95
>   Migration Delay: 0
>Migration Continue: Yes
>   Migration Processes: 26
> Reclamation Processes: 10
> Next Storage Pool: DDSTGPOOL4500
>  Reclaim Storage Pool:
>Maximum Size Threshold: No Limit
>Access: Read/Write
>   Description:
> Overflow Location:
> Cache Migrated Files?:
>Collocate?: No
> Reclamation Threshold: 70
> Offsite Reclamation Limit:
>   Maximum Scratch Volumes Allowed: 3,000
>Number of Scratch Volumes Used: 2,947
> Delay Period for Volume Reuse: 0 Day(s)
>Migration in Progress?: No
>  Amount Migrated (MB): 0.00
>  Elapsed Migration Time (seconds): 4,560
>  Reclamation in Progress?: Yes
>Last Update by (administrator): RPLAIR
> Last Update Date/Time: 09/21/2016 09:05:51
>  Storage Pool Data Format: Native
>  Copy Storage Pool(s):
>   Active Data Pool(s):
>   Continue Copy on Error?: Yes
>  CRC Data: No
>  Reclamation Type: Threshold
>   Overwrite Data when Deleted:
> Deduplicate Data?: No  Processes For Identifying 
> Duplicates:
> Duplicate Data Not Stored:
>Auto-copy Mode: Client Contains Data 
> Deduplicated by Client?: No
>
>
>
> NEW STORAGE POOL
>
> tsm: PROD-TSM01-VM>q stg ddstgpool4500 f=d
>
> Storage Pool Name: DDSTGPOOL4500
> Storage Pool Type: Primary
> Device Class Name: DDFILE1
>

Re: TSM Migration Question

2016-09-21 Thread Gee, Norman
Is it migrating or reclaiming at 70% as defined.

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Plair, 
Ricky
Sent: Wednesday, September 21, 2016 8:50 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: TSM Migration Question

Nothing out of the normal, the below is kind of odd but I'm not sure it has 
anything to do with my problem. 

Now here is something, I have another disk pool that is supposed to migrate to 
the tapepool and now it's doing the same thing. Migrating to itself.

What the heck.


09/21/2016 10:58:21   ANR1341I Scratch volume /ddstgpool/A7F7.BFS has been
   deleted from storage pool DDSTGPOOL. (SESSION: 427155)
09/21/2016 10:58:24   ANR1341I Scratch volume /ddstgpool2/A7FA.BFS has been
   deleted from storage pool DDSTGPOOL. (SESSION: 427155)
09/21/2016 10:59:24   ANR1341I Scratch volume /ddstgpool/A7BC.BFS has been
   deleted from storage pool DDSTGPOOL. (SESSION: 427155)

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Skylar 
Thompson
Sent: Wednesday, September 21, 2016 11:35 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] TSM Migration Question

The only oddity I see is that DDSTGPOOL4500 has a NEXTSTGPOOL of TAPEPOOL.
Shouldn't cause any problems now since utilization is 0% but would get 
triggered once you hit the HIGHMIG threshold.

Is there anything in the activity log for the errant migration processes?

On Wed, Sep 21, 2016 at 03:28:53PM +, Plair, Ricky wrote:
> OLD STORAGE POOL
>
> tsm: PROD-TSM01-VM>q stg ddstgpool f=d
>
> Storage Pool Name: DDSTGPOOL
> Storage Pool Type: Primary
> Device Class Name: DDFILE
>Estimated Capacity: 402,224 G
>Space Trigger Util: 69.4
>  Pct Util: 70.4
>  Pct Migr: 70.4
>   Pct Logical: 95.9
>  High Mig Pct: 100
>       Low Mig Pct: 95
>   Migration Delay: 0
>    Migration Continue: Yes
>   Migration Processes: 26
> Reclamation Processes: 10
> Next Storage Pool: DDSTGPOOL4500
>  Reclaim Storage Pool:
>Maximum Size Threshold: No Limit
>Access: Read/Write
>   Description:
> Overflow Location:
> Cache Migrated Files?:
>Collocate?: No
> Reclamation Threshold: 70
> Offsite Reclamation Limit:
>   Maximum Scratch Volumes Allowed: 3,000
>    Number of Scratch Volumes Used: 2,947
> Delay Period for Volume Reuse: 0 Day(s)
>Migration in Progress?: No
>  Amount Migrated (MB): 0.00
>  Elapsed Migration Time (seconds): 4,560
>  Reclamation in Progress?: Yes
>Last Update by (administrator): RPLAIR
> Last Update Date/Time: 09/21/2016 09:05:51
>  Storage Pool Data Format: Native
>  Copy Storage Pool(s):
>   Active Data Pool(s):
>   Continue Copy on Error?: Yes
>  CRC Data: No
>  Reclamation Type: Threshold
>   Overwrite Data when Deleted:
> Deduplicate Data?: No  Processes For Identifying 
> Duplicates:
> Duplicate Data Not Stored:
>Auto-copy Mode: Client Contains Data 
> Deduplicated by Client?: No
>
>
>
> NEW STORAGE POOL
>
> tsm: PROD-TSM01-VM>q stg ddstgpool4500 f=d
>
> Storage Pool Name: DDSTGPOOL4500
> Storage Pool Type: Primary
> Device Class Name: DDFILE1
>Estimated Capacity: 437,159 G
>Space Trigger Util: 21.4
>  Pct Util: 6.7
>  Pct Migr: 6.7
>   Pct Logical: 100.0
>  High Mig Pct: 90
>   Low Mig Pct: 70
>   Migration Delay: 0
>Migration Continue: Yes
>   Migration Processes: 1
> Reclamation Processes: 1
> Next Storage Pool: TAPEPOOL
>  Reclaim Storage Pool:
>Maximum Size Threshold: No Limit
>Access: Read/Write
>   Description:
> Overflow Location:
> Cache Migrated F

Re: TSM Migration Question

2016-09-21 Thread Plair, Ricky
Nothing out of the ordinary; the output below is kind of odd, but I'm not sure it has
anything to do with my problem.

Now here is something: I have another disk pool that is supposed to migrate to
the tapepool, and now it's doing the same thing. Migrating to itself.

What the heck.


09/21/2016 10:58:21   ANR1341I Scratch volume /ddstgpool/A7F7.BFS has been
   deleted from storage pool DDSTGPOOL. (SESSION: 427155)
09/21/2016 10:58:24   ANR1341I Scratch volume /ddstgpool2/A7FA.BFS has been
   deleted from storage pool DDSTGPOOL. (SESSION: 427155)
09/21/2016 10:59:24   ANR1341I Scratch volume /ddstgpool/A7BC.BFS has been
   deleted from storage pool DDSTGPOOL. (SESSION: 427155)

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Skylar 
Thompson
Sent: Wednesday, September 21, 2016 11:35 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] TSM Migration Question

The only oddity I see is that DDSTGPOOL4500 has a NEXTSTGPOOL of TAPEPOOL.
Shouldn't cause any problems now since utilization is 0% but would get 
triggered once you hit the HIGHMIG threshold.

Is there anything in the activity log for the errant migration processes?

On Wed, Sep 21, 2016 at 03:28:53PM +, Plair, Ricky wrote:
> OLD STORAGE POOL
>
> tsm: PROD-TSM01-VM>q stg ddstgpool f=d
>
> Storage Pool Name: DDSTGPOOL
> Storage Pool Type: Primary
> Device Class Name: DDFILE
>Estimated Capacity: 402,224 G
>Space Trigger Util: 69.4
>  Pct Util: 70.4
>  Pct Migr: 70.4
>   Pct Logical: 95.9
>  High Mig Pct: 100
>       Low Mig Pct: 95
>   Migration Delay: 0
>    Migration Continue: Yes
>   Migration Processes: 26
> Reclamation Processes: 10
> Next Storage Pool: DDSTGPOOL4500
>  Reclaim Storage Pool:
>Maximum Size Threshold: No Limit
>Access: Read/Write
>   Description:
> Overflow Location:
> Cache Migrated Files?:
>Collocate?: No
> Reclamation Threshold: 70
> Offsite Reclamation Limit:
>   Maximum Scratch Volumes Allowed: 3,000
>    Number of Scratch Volumes Used: 2,947
> Delay Period for Volume Reuse: 0 Day(s)
>Migration in Progress?: No
>  Amount Migrated (MB): 0.00
>  Elapsed Migration Time (seconds): 4,560
>  Reclamation in Progress?: Yes
>Last Update by (administrator): RPLAIR
> Last Update Date/Time: 09/21/2016 09:05:51
>  Storage Pool Data Format: Native
>  Copy Storage Pool(s):
>   Active Data Pool(s):
>   Continue Copy on Error?: Yes
>  CRC Data: No
>  Reclamation Type: Threshold
>   Overwrite Data when Deleted:
> Deduplicate Data?: No  Processes For Identifying 
> Duplicates:
> Duplicate Data Not Stored:
>Auto-copy Mode: Client Contains Data 
> Deduplicated by Client?: No
>
>
>
> NEW STORAGE POOL
>
> tsm: PROD-TSM01-VM>q stg ddstgpool4500 f=d
>
> Storage Pool Name: DDSTGPOOL4500
> Storage Pool Type: Primary
> Device Class Name: DDFILE1
>Estimated Capacity: 437,159 G
>Space Trigger Util: 21.4
>  Pct Util: 6.7
>  Pct Migr: 6.7
>   Pct Logical: 100.0
>  High Mig Pct: 90
>   Low Mig Pct: 70
>   Migration Delay: 0
>Migration Continue: Yes
>   Migration Processes: 1
> Reclamation Processes: 1
> Next Storage Pool: TAPEPOOL
>  Reclaim Storage Pool:
>Maximum Size Threshold: No Limit
>Access: Read/Write
>   Description:
> Overflow Location:
> Cache Migrated Files?:
>Collocate?: No
> Reclamation Threshold: 70
> Offsite Reclamation Limit:
>   Maximum Scratch Volumes Allowed: 3,000
>Number of Scratch Volumes Used: 0
> Delay Period for 

Re: TSM Migration Question

2016-09-21 Thread Skylar Thompson
The only oddity I see is that DDSTGPOOL4500 has a NEXTSTGPOOL of TAPEPOOL.
Shouldn't cause any problems now since utilization is 0% but would get
triggered once you hit the HIGHMIG threshold.

Is there anything in the activity log for the errant migration processes?

On Wed, Sep 21, 2016 at 03:28:53PM +, Plair, Ricky wrote:
> OLD STORAGE POOL
>
> tsm: PROD-TSM01-VM>q stg ddstgpool f=d
>
> Storage Pool Name: DDSTGPOOL
> Storage Pool Type: Primary
> Device Class Name: DDFILE
>Estimated Capacity: 402,224 G
>Space Trigger Util: 69.4
>  Pct Util: 70.4
>  Pct Migr: 70.4
>   Pct Logical: 95.9
>  High Mig Pct: 100
>   Low Mig Pct: 95
>   Migration Delay: 0
>    Migration Continue: Yes
>   Migration Processes: 26
> Reclamation Processes: 10
> Next Storage Pool: DDSTGPOOL4500
>  Reclaim Storage Pool:
>Maximum Size Threshold: No Limit
>Access: Read/Write
>   Description:
> Overflow Location:
> Cache Migrated Files?:
>Collocate?: No
> Reclamation Threshold: 70
> Offsite Reclamation Limit:
>   Maximum Scratch Volumes Allowed: 3,000
>Number of Scratch Volumes Used: 2,947
> Delay Period for Volume Reuse: 0 Day(s)
>    Migration in Progress?: No
>  Amount Migrated (MB): 0.00
>  Elapsed Migration Time (seconds): 4,560
>  Reclamation in Progress?: Yes
>Last Update by (administrator): RPLAIR
> Last Update Date/Time: 09/21/2016 09:05:51
>  Storage Pool Data Format: Native
>  Copy Storage Pool(s):
>   Active Data Pool(s):
>   Continue Copy on Error?: Yes
>  CRC Data: No
>  Reclamation Type: Threshold
>   Overwrite Data when Deleted:
> Deduplicate Data?: No
>  Processes For Identifying Duplicates:
> Duplicate Data Not Stored:
>Auto-copy Mode: Client
> Contains Data Deduplicated by Client?: No
>
>
>
> NEW STORAGE POOL
>
> tsm: PROD-TSM01-VM>q stg ddstgpool4500 f=d
>
> Storage Pool Name: DDSTGPOOL4500
> Storage Pool Type: Primary
> Device Class Name: DDFILE1
>Estimated Capacity: 437,159 G
>Space Trigger Util: 21.4
>  Pct Util: 6.7
>      Pct Migr: 6.7
>   Pct Logical: 100.0
>  High Mig Pct: 90
>   Low Mig Pct: 70
>   Migration Delay: 0
>Migration Continue: Yes
>   Migration Processes: 1
> Reclamation Processes: 1
> Next Storage Pool: TAPEPOOL
>  Reclaim Storage Pool:
>Maximum Size Threshold: No Limit
>Access: Read/Write
>   Description:
> Overflow Location:
> Cache Migrated Files?:
>Collocate?: No
>     Reclamation Threshold: 70
> Offsite Reclamation Limit:
>   Maximum Scratch Volumes Allowed: 3,000
>Number of Scratch Volumes Used: 0
> Delay Period for Volume Reuse: 0 Day(s)
>Migration in Progress?: No
>  Amount Migrated (MB): 0.00
>  Elapsed Migration Time (seconds): 0
>  Reclamation in Progress?: No
>Last Update by (administrator): RPLAIR
> Last Update Date/Time: 09/21/2016 08:38:58
>  Storage Pool Data Format: Native
>  Copy Storage Pool(s):
>   Active Data Pool(s):
>   Continue Copy on Error?: Yes
>  CRC Data: No
>  Reclamation Type: Threshold
>   Overwrite Data when Deleted:
> Deduplicate Data?: No
>  Processes For Identifying Duplicates:
> Duplicate Data Not Stored:
>    Auto-copy Mode: Client
> Contains Data Deduplicated by Client?: No
>
>
> -Original Message-
> From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.ED

Re: TSM Migration Question

2016-09-21 Thread Plair, Ricky
OLD STORAGE POOL

tsm: PROD-TSM01-VM>q stg ddstgpool f=d

Storage Pool Name: DDSTGPOOL
Storage Pool Type: Primary
Device Class Name: DDFILE
   Estimated Capacity: 402,224 G
   Space Trigger Util: 69.4
 Pct Util: 70.4
 Pct Migr: 70.4
  Pct Logical: 95.9
 High Mig Pct: 100
  Low Mig Pct: 95
      Migration Delay: 0
   Migration Continue: Yes
      Migration Processes: 26
Reclamation Processes: 10
Next Storage Pool: DDSTGPOOL4500
 Reclaim Storage Pool:
   Maximum Size Threshold: No Limit
   Access: Read/Write
  Description:
Overflow Location:
Cache Migrated Files?:
   Collocate?: No
Reclamation Threshold: 70
Offsite Reclamation Limit:
  Maximum Scratch Volumes Allowed: 3,000
   Number of Scratch Volumes Used: 2,947
Delay Period for Volume Reuse: 0 Day(s)
   Migration in Progress?: No
 Amount Migrated (MB): 0.00
 Elapsed Migration Time (seconds): 4,560
 Reclamation in Progress?: Yes
   Last Update by (administrator): RPLAIR
Last Update Date/Time: 09/21/2016 09:05:51
 Storage Pool Data Format: Native
 Copy Storage Pool(s):
  Active Data Pool(s):
  Continue Copy on Error?: Yes
 CRC Data: No
 Reclamation Type: Threshold
  Overwrite Data when Deleted:
Deduplicate Data?: No
 Processes For Identifying Duplicates:
Duplicate Data Not Stored:
   Auto-copy Mode: Client
Contains Data Deduplicated by Client?: No



NEW STORAGE POOL

tsm: PROD-TSM01-VM>q stg ddstgpool4500 f=d

Storage Pool Name: DDSTGPOOL4500
Storage Pool Type: Primary
Device Class Name: DDFILE1
   Estimated Capacity: 437,159 G
   Space Trigger Util: 21.4
 Pct Util: 6.7
 Pct Migr: 6.7
  Pct Logical: 100.0
 High Mig Pct: 90
  Low Mig Pct: 70
  Migration Delay: 0
   Migration Continue: Yes
  Migration Processes: 1
Reclamation Processes: 1
Next Storage Pool: TAPEPOOL
 Reclaim Storage Pool:
   Maximum Size Threshold: No Limit
   Access: Read/Write
  Description:
Overflow Location:
Cache Migrated Files?:
   Collocate?: No
Reclamation Threshold: 70
Offsite Reclamation Limit:
  Maximum Scratch Volumes Allowed: 3,000
   Number of Scratch Volumes Used: 0
Delay Period for Volume Reuse: 0 Day(s)
   Migration in Progress?: No
 Amount Migrated (MB): 0.00
 Elapsed Migration Time (seconds): 0
 Reclamation in Progress?: No
   Last Update by (administrator): RPLAIR
Last Update Date/Time: 09/21/2016 08:38:58
 Storage Pool Data Format: Native
 Copy Storage Pool(s):
  Active Data Pool(s):
  Continue Copy on Error?: Yes
 CRC Data: No
 Reclamation Type: Threshold
  Overwrite Data when Deleted:
Deduplicate Data?: No
 Processes For Identifying Duplicates:
Duplicate Data Not Stored:
   Auto-copy Mode: Client
Contains Data Deduplicated by Client?: No


-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Skylar 
Thompson
Sent: Wednesday, September 21, 2016 11:19 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] TSM Migration Question

Can you post the output of "Q STG F=D" for each of those pools?

On Wed, Sep 21, 2016 at 02:33:42PM +, Plair, Ricky wrote:
> Within TSM I am migrating an old storage pool on a DD4200 to a new storage 
> pool on a DD4500.
>
> First of all, it worked fine yesterday.
>
> The nextpool is correct and migration is hi=0 lo=0 and using 25 migration 
> process, but I had to stop it.
>
> Now when I restart it the migration process it is migrating to the old 
> storage volumes instead of the new storage volumes. Basically it's just 
> migrating from one disk volume inside the ddstgpool to another disk volume in 
> the ddstgpool.
>
> It is n

Re: TSM Migration Question

2016-09-21 Thread Skylar Thompson
Can you post the output of "Q STG F=D" for each of those pools?

On Wed, Sep 21, 2016 at 02:33:42PM +, Plair, Ricky wrote:
> Within TSM I am migrating an old storage pool on a DD4200 to a new storage 
> pool on a DD4500.
>
> First of all, it worked fine yesterday.
>
> The nextpool is correct and migration is hi=0 lo=0 and using 25 migration 
> process, but I had to stop it.
>
> Now when I restart it the migration process it is migrating to the old 
> storage volumes instead of the new storage volumes. Basically it's just 
> migrating from one disk volume inside the ddstgpool to another disk volume in 
> the ddstgpool.
>
> It is not using the next pool parameter,  has anyone seen this problem before?
>
> I appreciate the help.

--
-- Skylar Thompson (skyl...@u.washington.edu)
-- Genome Sciences Department, System Administrator
-- Foege Building S046, (206)-685-7354
-- University of Washington School of Medicine


TSM Migration Question

2016-09-21 Thread Plair, Ricky
Within TSM I am migrating an old storage pool on a DD4200 to a new storage pool 
on a DD4500.

First of all, it worked fine yesterday.

The nextpool is correct and migration is hi=0 lo=0, using 25 migration
processes, but I had to stop it.

Now when I restart the migration process, it is migrating to the old storage
volumes instead of the new storage volumes. Basically it's just migrating from
one disk volume inside the ddstgpool to another disk volume in the ddstgpool.

It is not using the next pool parameter. Has anyone seen this problem before?

I appreciate the help.
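For anyone comparing against their own setup, the relevant settings and a forced migration can be driven explicitly; a minimal macro-style sketch using the pool names from the Q STG output elsewhere in this thread, with the caveat that a reclamation threshold below 100 can still produce an intra-pool process reported as "Migration":

/* Confirm where migration is supposed to go (Next Storage Pool,        */
/* Reclamation Threshold), then drive migration explicitly and watch    */
/* whether the output volumes land in DDSTGPOOL4500.                    */
query stgpool DDSTGPOOL format=detailed
update stgpool DDSTGPOOL highmig=0 lowmig=0 migprocess=25
migrate stgpool DDSTGPOOL lowmig=0 duration=60 wait=no
query process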








Ricky M. Plair
Storage Engineer
HealthPlan Services
Office: 813 289 1000 Ext 2273
Mobile: 813 357 9673



_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
CONFIDENTIALITY NOTICE: This email message, including any attachments, is for 
the sole use of the intended recipient(s) and may contain confidential and 
privileged information and/or Protected Health Information (PHI) subject to 
protection under the law, including the Health Insurance Portability and 
Accountability Act of 1996, as amended (HIPAA). If you are not the intended 
recipient or the person responsible for delivering the email to the intended 
recipient, be advised that you have received this email in error and that any 
use, disclosure, distribution, forwarding, printing, or copying of this email 
is strictly prohibited. If you have received this email in error, please notify 
the sender immediately and destroy all copies of the original message.


Re: AIX to Linux migration performance

2016-07-29 Thread Paul Zarnowski
Hi Bill,

It's going to depend a lot on your network speed, some on the type of disk you 
are using for your databases, and maybe a little on the CPU speeds of your 
servers.

We have not yet migrated any of our production TSM instances, but we have been 
doing some dry-run performance tests.  On one instance having a 680GB database, 
we ran three tests, as follows:

Test 1:
 source (AIX) database on 15K
 target (Linux) database on SSD
 source server having 10Gbs ethernet
 target server having 1Gbs ethernet
 Elapsed time was 15 hours, 45 minutes

Test 2:
 source database on SSD
 target database on SSD
 source server having 10Gbs
 target server having 1Gbs
 Elapsed time was 12 hours, 32 minutes  - SSD helped some

Test 3:
 source and target databases both on SSD
 source and target servers both having 10Gbs
 Elapsed time was 6 hours, 20 minutes - quite a bit better due to the 10Gbs 
ethernet

Statistics & dry-runs were done by David Beardsley, here at Cornell.
..Paul
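For readers who have not used it, the 7.1.5 extract/insert capability being timed here is driven on the target server with the DSMSERV INSERTDB utility, either pulling the database over a server-to-server connection or reading previously extracted media. The two invocation forms below are from memory, with SOURCE1 and the manifest path as placeholders; verify the exact parameters against the 7.1.5 documentation before relying on them:

dsmserv insertdb sourceserver=SOURCE1
dsmserv insertdb manifest=./insert.manifest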

At 12:57 PM 7/29/2016, Brian G. Kunst wrote:
>We just did this back in May. It took roughly 4 hours to do our 580GB DB.
>
>--
>Brian Kunst
>Storage Administrator
>Large Scale Storage & Systems
>UW Information Technology
>
>> -Original Message-
>> From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
>> Bill Mansfield
>> Sent: Friday, July 29, 2016 8:43 AM
>> To: ADSM-L@VM.MARIST.EDU
>> Subject: [ADSM-L] AIX to Linux migration performance
>>
>> Has anyone migrated a big TSM server from AIX to Linux using the
>> Extract/Insert capability in 7.1.5?  Have a 2.5TB DB to migrate, would
>> like some idea of how long it might take.
>>
>> Bill Mansfield


--
Paul ZarnowskiPh: 607-255-4757
Assistant Director for Storage Services   Fx: 607-255-8521
IT at Cornell / InfrastructureEm: p...@cornell.edu
719 Rhodes Hall, Ithaca, NY 14853-3801


Re: AIX to Linux migration performance

2016-07-29 Thread Brian G. Kunst
We just did this back in May. It took roughly 4 hours to do our 580GB DB.

-- 
Brian Kunst
Storage Administrator
Large Scale Storage & Systems
UW Information Technology

> -Original Message-
> From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
> Bill Mansfield
> Sent: Friday, July 29, 2016 8:43 AM
> To: ADSM-L@VM.MARIST.EDU
> Subject: [ADSM-L] AIX to Linux migration performance
> 
> Has anyone migrated a big TSM server from AIX to Linux using the
> Extract/Insert capability in 7.1.5?  Have a 2.5TB DB to migrate, would
> like some idea of how long it might take.
> 
> Bill Mansfield


Re: AIX to Linux migration performance

2016-07-29 Thread Robert J Molerio
I thought that functionality was long gone in 5.x?



Thank you,

Systems Administrator

Systems & Network Services
NYU IT/Technology Operations Services

212-998-1110

646-539-8239

On Fri, Jul 29, 2016 at 11:43 AM, Bill Mansfield <
bill.mansfi...@us.logicalis.com> wrote:

> Has anyone migrated a big TSM server from AIX to Linux using the
> Extract/Insert capability in 7.1.5?  Have a 2.5TB DB to migrate, would like
> some idea of how long it might take.
>
> Bill Mansfield
>


AIX to Linux migration performance

2016-07-29 Thread Bill Mansfield
Has anyone migrated a big TSM server from AIX to Linux using the Extract/Insert 
capability in 7.1.5?  Have a 2.5TB DB to migrate, would like some idea of how 
long it might take.

Bill Mansfield


Re: Migration should preempt reclamation

2016-02-18 Thread Rick Adamson
Roger
Worth mentioning is that the issue is most likely compounded by the
migration/reclamation process running during a backup; the performance hit to
the TSM server can be severe.
You may find a significant reduction in task processing times if you do isolate
them.
In the past I have temporarily suspended (or limited) reclamation schedules
until the initial backups are done, and sent the backup data straight to my
sequential storage pool, bypassing the migration process.
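Both tactics translate into ordinary administrative commands; a minimal macro-style sketch, where DISKPOOL, TAPEPOOL and the STANDARD policy names are placeholder assumptions rather than names from this thread:

/* Temporarily stop reclamation on the tape pool during the big initial backup */
update stgpool TAPEPOOL reclaim=100

/* Point the backup copy group straight at the sequential pool so the new      */
/* client's data bypasses DISKPOOL and its migration, then reactivate          */
update copygroup STANDARD STANDARD STANDARD standard type=backup destination=TAPEPOOL
activate policyset STANDARD STANDARD

/* Afterwards, restore the original reclamation threshold (60 is an example)   */
update stgpool TAPEPOOL reclaim=60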


-Rick Adamson


-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Roger 
Deschner
Sent: Wednesday, February 17, 2016 8:43 PM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] Migration should preempt reclamation

I was under the impression that higher priority tasks could preempt lower 
priority tasks. That is, migration should be able to preempt reclamation. But 
it doesn't. A very careful reading of Administrator's Guide tells me that it 
does not.

We're having a problem with a large client backup that fails, due to a disk 
stgpool filling. (It's a new client, and this is its initial large
backup.) It fills up because the migration process can not get a tape drive, 
due to their all being used for reclamation. This also prevents the client 
backup from getting a tape drive directly. Does anybody have a way for 
migration to get resources (drives, volumes, etc) when a storage pool reaches 
its high migration threshold, and reclamation is using those resources? 
"Careful scheduling" is the usual answer, but you can't always schedule what 
client nodes do. Back on TSM 5.4 I built a Unix cron job to look for this 
condition and cancel reclamation processes, but it was a real Rube Goldberg 
contraption, so I'm reluctant to revive it now in the TSM 6+ era. Anybody have 
a better way?

BTW, I thought I'd give the 7.1.4 Information center a try to answer this. I 
searched on "preemption". 10 hits none of which were the answer.
So I went to the PDF of the old Administrator's Guide and found it right away. 
We need that book!

Roger Deschner  University of Illinois at Chicago rog...@uic.edu
==I have not lost my mind -- it is backed up on tape somewhere.=


Re: Migration should preempt reclamation

2016-02-18 Thread Richard Cowen
A similar script could use MOVE DATA instead of RECLAIM, which has the 
advantage of checking as each MOVE ends to see if there are drive resources, 
and intelligently picking a volume, before starting a new MOVE.  It also can 
check an external file for a "pause" or "halt" command, or parse the actlog(s) 
for ANR1496I messages for similar "commands", and re-read the configuration 
file so you can affect it without stopping/starting.
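As a reminder of the building block being proposed, a single hand-driven move looks like the sketch below; the volume names and TAPEPOOL2 are hypothetical, and the wrapper script described above would choose them after checking drive availability:

/* Reclaim a nearly-empty volume by hand within its own pool, or move    */
/* its data to another pool; each MOVE DATA runs as one server process   */
/* that the wrapper can wait on before starting the next.                */
move data VOL001 wait=yes
move data VOL002 stgpool=TAPEPOOL2 wait=yes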
Richard



-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of 
Rhodes, Richard L.
Sent: Thursday, February 18, 2016 9:34 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Migration should preempt reclamation

This is a tough one.  On the one hand we want Reclamation to use as many tape 
drives as possible, but not consume them all.  We also have multiple TSM 
instances wanting library resources.  The TSM instances are blind to each 
others needs.  This _IS_ difficult to control.

The _current_ solution controls reclamation completely manually from a set of 
scripts. 
It works something like this:

- we run a library sharing environment across a bunch of TSM instances
- reclamation is set to never run automatically - all stgpools are set 
to not run reclamation automatically (reclamation pct = 100)
- define the max number of drives reclamation can use in a library
   (reclamation can use up to this number)
- define the number unused drives in a library that MUST be UNUSED before 
   another reclamation is started
   (there are always some number of unused drives available for non-reclamation 
jobs to start)
- define on stgpools the number of reclamation process allowed - we set it to 1 
   (one reclamation process equals 2 tape drives)

Late morning we kick in the script

- Crawls through all our TSM instances and gets a count of tapes per stgpool
that could be reclaimed (above some rec pct).
- Sorts the list of stgpools/counts by the count
- Scripts loops.  
On each loop it will start a new stgpool reclamation if:
  - max number of drives allowed for reclamation hasn't been hit 
  - required number of unused drives are still unused

Later in the day we kill this script, letting running reclamation jobs run to 
completion.
If buy the next morning (when migrations want to run) we still have 
reclamations running, they get nuked!

. . .repeat each day . . . .



The result, at a gross level we keep some number of drives open for other 
sessions/jobs to use, and yet allow reclamation to use up to the defined limit 
of drives if no one other processes are using them.  

It has major flaws, but has really smoothed out our tape drive contention and 
resources used for reclamation.  The one thing I really like is that it lets 
the stgpool with the most reclaimable tapes in whatever TSm instance to run the 
longest.

One core overall issue - no amount of playing around like this can make up for 
not having the resources you need to drive TSM.  If you don't have the drives 
to process the work, nothing will really help. 

Rick





-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Roger 
Deschner
Sent: Wednesday, February 17, 2016 8:43 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Migration should preempt reclamation

I was under the impression that higher priority tasks could preempt lower 
priority tasks. That is, migration should be able to preempt reclamation. But 
it doesn't. A very careful reading of Administrator's Guide tells me that it 
does not.

We're having a problem with a large client backup that fails, due to a disk 
stgpool filling. (It's a new client, and this is its initial large
backup.) It fills up because the migration process can not get a tape drive, 
due to their all being used for reclamation. This also prevents the client 
backup from getting a tape drive directly. Does anybody have a way for 
migration to get resources (drives, volumes, etc) when a storage pool reaches 
its high migration threshold, and reclamation is using those resources? 
"Careful scheduling" is the usual answer, but you can't always schedule what 
client nodes do. Back on TSM 5.4 I built a Unix cron job to look for this 
condition and cancel reclamation processes, but it was a real Rube Goldberg 
contraption, so I'm reluctant to revive it now in the TSM 6+ era. Anybody have 
a better way?

BTW, I thought I'd give the 7.1.4 Information center a try to answer this. I 
searched on "preemption". 10 hits none of which were the answer.
So I went to the PDF of the old Administrator's Guide and found it right away. 
We need that book!

Roger Deschner  University of Illinois at Chicago rog...@uic.edu
==I have not lost my mind -- it is backed up on tape somewhere.=



Re: Migration should preempt reclamation

2016-02-18 Thread Skylar Thompson
We've had this problem as well. Our fix has been to define a maintenance
script MANUAL_RECLAIM that reclaims each storage pool in parallel, but with
a duration of 3 hours:

PARALLEL
RECL STG DESPOT-OFFSITE-LTO TH=60 DU=180 W=Y
...
SERIAL

An admin schedule will run the script every four hours, except on days we
do offsite media checkouts, so that other processes have a shot at grabbing
a tape drive. This actually solved an additional problem for us: that of
having copy volumes in drives as MOVE DRMEDIA is running.
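For anyone wanting to copy this approach, a minimal sketch of defining such a maintenance script and its administrative schedule; DESPOT-OFFSITE-LTO is from the excerpt above, while ONSITE-LTO, the schedule name, and the start time are placeholder assumptions:

/* Build the script one line at a time, then schedule it every 4 hours  */
define script MANUAL_RECLAIM "parallel" desc="Timed reclamation of each pool"
update script MANUAL_RECLAIM "recl stg DESPOT-OFFSITE-LTO th=60 du=180 w=y"
update script MANUAL_RECLAIM "recl stg ONSITE-LTO th=60 du=180 w=y"
update script MANUAL_RECLAIM "serial"
define schedule RUN_RECLAIM type=administrative cmd="run MANUAL_RECLAIM" active=yes starttime=00:00 period=4 perunits=hours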

On Wed, Feb 17, 2016 at 07:42:43PM -0600, Roger Deschner wrote:
> I was under the impression that higher priority tasks could preempt
> lower priority tasks. That is, migration should be able to preempt
> reclamation. But it doesn't. A very careful reading of Administrator's
> Guide tells me that it does not.
>
> We're having a problem with a large client backup that fails, due to a
> disk stgpool filling. (It's a new client, and this is its initial large
> backup.) It fills up because the migration process can not get a tape
> drive, due to their all being used for reclamation. This also prevents
> the client backup from getting a tape drive directly. Does anybody have
> a way for migration to get resources (drives, volumes, etc) when a
> storage pool reaches its high migration threshold, and reclamation is
> using those resources? "Careful scheduling" is the usual answer, but you
> can't always schedule what client nodes do. Back on TSM 5.4 I built a
> Unix cron job to look for this condition and cancel reclamation
> processes, but it was a real Rube Goldberg contraption, so I'm reluctant
> to revive it now in the TSM 6+ era. Anybody have a better way?
>
> BTW, I thought I'd give the 7.1.4 Information center a try to answer
> this. I searched on "preemption". 10 hits none of which were the answer.
> So I went to the PDF of the old Administrator's Guide and found it right
> away. We need that book!
>
> Roger Deschner  University of Illinois at Chicago rog...@uic.edu
> ==I have not lost my mind -- it is backed up on tape somewhere.=

--
-- Skylar Thompson (skyl...@u.washington.edu)
-- Genome Sciences Department, System Administrator
-- Foege Building S046, (206)-685-7354
-- University of Washington School of Medicine


Re: Migration should preempt reclamation

2016-02-18 Thread Rhodes, Richard L.
This is a tough one.  On the one hand we want Reclamation to use as many tape 
drives as possible, but not consume them all.  We also have multiple TSM 
instances wanting library resources.  The TSM instances are blind to each 
others needs.  This _IS_ difficult to control.

The _current_ solution controls reclamation completely manually from a set of 
scripts. 
It works something like this:

- we run a library sharing environment across a bunch of TSM instances
- reclamation is set to never run automatically - all stgpools are set 
to not run reclamation automatically (reclamation pct = 100)
- define the max number of drives reclamation can use in a library
   (reclamation can use up to this number)
- define the number unused drives in a library that MUST be UNUSED before 
   another reclamation is started
   (there are always some number of unused drives available for non-reclamation 
jobs to start)
- define on stgpools the number of reclamation processes allowed - we set it to 1
   (one reclamation process equals 2 tape drives)

Late morning we kick in the script

- Crawls through all our TSM instances and gets a count of tapes per stgpool
that could be reclaimed (above some rec pct).
- Sorts the list of stgpools/counts by the count
- Scripts loops.  
On each loop it will start a new stgpool reclamation if:
  - max number of drives allowed for reclamation hasn't been hit 
  - required number of unused drives are still unused
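The static part of that setup, and the per-pool kick the script issues on each pass, are plain storage pool commands; a minimal sketch with a hypothetical pool name (LTOPOOL1):

/* Setup: never reclaim automatically, and cap reclamation at one       */
/* process (two drives) per pool                                        */
update stgpool LTOPOOL1 reclaim=100 reclaimprocess=1

/* What the crawler issues for the pool it picks on a loop pass         */
reclaim stgpool LTOPOOL1 threshold=60 duration=120 wait=no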

Later in the day we kill this script, letting running reclamation jobs run to 
completion.
If by the next morning (when migrations want to run) we still have
reclamations running, they get nuked!

. . .repeat each day . . . .



The result, at a gross level we keep some number of drives open for other 
sessions/jobs to use, and yet allow reclamation to use up to the defined limit 
of drives if no other processes are using them.

It has major flaws, but has really smoothed out our tape drive contention and 
resources used for reclamation.  The one thing I really like is that it lets 
the stgpool with the most reclaimable tapes in whatever TSM instance run the
longest.

One core overall issue - no amount of playing around like this can make up for 
not having the resources you need to drive TSM.  If you don't have the drives 
to process the work, nothing will really help. 

Rick





-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Roger 
Deschner
Sent: Wednesday, February 17, 2016 8:43 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Migration should preempt reclamation

I was under the impression that higher priority tasks could preempt
lower priority tasks. That is, migration should be able to preempt
reclamation. But it doesn't. A very careful reading of Administrator's
Guide tells me that it does not.

We're having a problem with a large client backup that fails, due to a
disk stgpool filling. (It's a new client, and this is its initial large
backup.) It fills up because the migration process can not get a tape
drive, due to their all being used for reclamation. This also prevents
the client backup from getting a tape drive directly. Does anybody have
a way for migration to get resources (drives, volumes, etc) when a
storage pool reaches its high migration threshold, and reclamation is
using those resources? "Careful scheduling" is the usual answer, but you
can't always schedule what client nodes do. Back on TSM 5.4 I built a
Unix cron job to look for this condition and cancel reclamation
processes, but it was a real Rube Goldberg contraption, so I'm reluctant
to revive it now in the TSM 6+ era. Anybody have a better way?

BTW, I thought I'd give the 7.1.4 Information center a try to answer
this. I searched on "preemption". 10 hits none of which were the answer.
So I went to the PDF of the old Administrator's Guide and found it right
away. We need that book!

Roger Deschner  University of Illinois at Chicago rog...@uic.edu
==I have not lost my mind -- it is backed up on tape somewhere.=


-
The information contained in this message is intended only for the personal and 
confidential use of the recipient(s) named above. If the reader of this message 
is not the intended recipient or an agent responsible for delivering it to the 
intended recipient, you are hereby notified that you have received this 
document in error and that any review, dissemination, distribution, or copying 
of this message is strictly prohibited. If you have received this communication 
in error, please notify us immediately, and delete the original message.


Re: Migration should preempt reclamation

2016-02-18 Thread David Ehresman
Could you reduce the "Reclamation Processes" count on the storage pool by one,
leaving a tape drive free for migration?
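That suggestion is a one-line change; a minimal sketch, where TAPEPOOL and the counts are placeholder assumptions:

/* Drop reclamation from, say, 4 processes (8 drives) to 3 (6 drives),  */
/* leaving drives free for migration, then verify the setting.          */
update stgpool TAPEPOOL reclaimprocess=3
query stgpool TAPEPOOL format=detailed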

David

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Roger 
Deschner
Sent: Wednesday, February 17, 2016 8:43 PM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] Migration should preempt reclamation

I was under the impression that higher priority tasks could preempt
lower priority tasks. That is, migration should be able to preempt
reclamation. But it doesn't. A very careful reading of Administrator's
Guide tells me that it does not.

We're having a problem with a large client backup that fails, due to a
disk stgpool filling. (It's a new client, and this is its initial large
backup.) It fills up because the migration process can not get a tape
drive, due to their all being used for reclamation. This also prevents
the client backup from getting a tape drive directly. Does anybody have
a way for migration to get resources (drives, volumes, etc) when a
storage pool reaches its high migration threshold, and reclamation is
using those resources? "Careful scheduling" is the usual answer, but you
can't always schedule what client nodes do. Back on TSM 5.4 I built a
Unix cron job to look for this condition and cancel reclamation
processes, but it was a real Rube Goldberg contraption, so I'm reluctant
to revive it now in the TSM 6+ era. Anybody have a better way?

BTW, I thought I'd give the 7.1.4 Information center a try to answer
this. I searched on "preemption". 10 hits none of which were the answer.
So I went to the PDF of the old Administrator's Guide and found it right
away. We need that book!

Roger Deschner  University of Illinois at Chicago rog...@uic.edu
==I have not lost my mind -- it is backed up on tape somewhere.=


Migration should preempt reclamation

2016-02-17 Thread Roger Deschner
I was under the impression that higher priority tasks could preempt
lower priority tasks. That is, migration should be able to preempt
reclamation. But it doesn't. A very careful reading of Administrator's
Guide tells me that it does not.

We're having a problem with a large client backup that fails, due to a
disk stgpool filling. (It's a new client, and this is its initial large
backup.) It fills up because the migration process can not get a tape
drive, due to their all being used for reclamation. This also prevents
the client backup from getting a tape drive directly. Does anybody have
a way for migration to get resources (drives, volumes, etc) when a
storage pool reaches its high migration threshold, and reclamation is
using those resources? "Careful scheduling" is the usual answer, but you
can't always schedule what client nodes do. Back on TSM 5.4 I built a
Unix cron job to look for this condition and cancel reclamation
processes, but it was a real Rube Goldberg contraption, so I'm reluctant
to revive it now in the TSM 6+ era. Anybody have a better way?

BTW, I thought I'd give the 7.1.4 Information center a try to answer
this. I searched on "preemption". 10 hits none of which were the answer.
So I went to the PDF of the old Administrator's Guide and found it right
away. We need that book!

Roger Deschner  University of Illinois at Chicago rog...@uic.edu
==I have not lost my mind -- it is backed up on tape somewhere.=


Re: Filespace level migration? was: breaking up tsm for ve backups into different nodes

2015-01-07 Thread Prather, Wanda
Followup, 
We upgraded a customer from 6.3.5 to 7.1.1.100 today (Windows TSM server).  
The disk pool where TSM/VE 7.1.1 data lands will now indeed do multiple 
concurrent migration processes out to a sequential pool, even though all the 
data is owned by one node. (It's migrating at the filespace level now, so you
still get only 1 stream if only 1 filespace is left to be migrated.)

Big help for our environment.

W
  

-Original Message-
From: Prather, Wanda 
Sent: Tuesday, December 16, 2014 3:33 PM
To: 'ADSM: Dist Stor Manager'
Subject: Filespace level migration? was: breaking up tsm for ve backups into 
different nodes

Didn't I read that there were supposed to be changes in the 7.1 server to make 
migration run at the filespace level? Which would make this a multi-thread 
process - anybody with experience?  I have a 6.3.5 customer with the same 
issue, but don't have a 7.1 customer in production to verify.


Wanda Prather
TSM Consultant
ICF International Enterprise and Cybersecurity Systems Division


 


-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Lee, 
Gary
Sent: Tuesday, December 16, 2014 3:04 PM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] breaking up tsm for ve backups into different nodes

We are backing up our vmware environment (approximately 110 vms), using tsm for 
ve 6.4.2 tsmserver 6.2.5 under redhat 6.1.

The backup destination is disk.  Trouble is all data is owned by the datacenter 
node; consequently, migration has only one stream.

I do not currently have enough tape drives to run directly to tape.
Is there a solution I have missed?


Re: EKM to SKLM Migration Path

2014-12-26 Thread Shawn Drew
Yes there is, but you need to upgrade EKM to 2.1 first, if it isn't there already.  The SKLM 
install asks you to provide the location of the EKM db if migrating.
 
http://www-01.ibm.com/support/knowledgecenter/#!/SSWPVP_2.5.0/com.ibm.sklm.doc_2.5/cpt/cpt_ic_plan_migration.html?cp=SSWPVP_2.5.0%2F5-0-2
 
I would copy the EKM dir to a VM and test the upgrade separately.  I was even 
able to move from a Windows to a Linux server.


> On Dec 19, 2014, at 9:47 AM, Bill Boyer  wrote:
> 
> I have a customer that is still using the original Java based EKM for serving
> up the LTO keys. The servers running the EKM software are being retired due
> to OS level. Is there a migration path to get my EKM keystore into SKLM?
> The customer's last Passport renewal has them licensed for SKLM.
> 
> 
> 
> Any help or suggestions is very appreciated!
> 
> 
> 
> Bill Boyer
> DSS, Inc.
> (610) 927-4407
> "Enjoy life. It has an expiration date." - ??


EKM to SKLM Migration Path

2014-12-19 Thread Bill Boyer
I have a customer that is still using the original Java based EKM for serving
up the LTO keys. The servers running the EKM software are being retired due
to OS level. Is there a migration path to get my EKM keystore into SKLM?
The customer's last Passport renewal has them licensed for SKLM.



Any help or suggestions is very appreciated!



Bill Boyer
DSS, Inc.
(610) 927-4407
"Enjoy life. It has an expiration date." - ??


Filespace level migration? was: breaking up tsm for ve backups into different nodes

2014-12-16 Thread Prather, Wanda
Didn't I read that there were supposed to be changes in the 7.1 server to make 
migration run at the filespace level? Which would make this a multi-thread 
process - anybody with experience?  I have a 6.3.5 customer with the same 
issue, but don't have a 7.1 customer in production to verify.


Wanda Prather
TSM Consultant
ICF International Enterprise and Cybersecurity Systems Division


 


-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Lee, 
Gary
Sent: Tuesday, December 16, 2014 3:04 PM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] breaking up tsm for ve backups into different nodes

We are backing up our vmware environment (approximately 110 vms), using tsm for 
ve 6.4.2 tsmserver 6.2.5 under redhat 6.1.

The backup destination is disk.  Trouble is all data is owned by the datacenter 
node; consequently, migration has only one stream.

I do not currently have enough tape drives to run directly to tape.
Is there a solution I have missed?


Dedupe pool migration perf statistics to non-dedupe pool

2014-09-08 Thread Sachin Chaudhari
Is anyone using storage pool migration from a de-dupe pool to a non de-dupe 
pool? I am looking for some migration performance statistics so that I can compare 
them with one of our customer environments. Currently I am getting very poor 
migration performance, approx. 25 GB in 5 hrs, and copy pool performance is 
in a similar range.

Surely there are multiple factors involved in performance, but I can't find anything 
that could be considered a baseline for data migration performance from a 
de-dupe pool to a non-dedupe (tape) pool.
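
One way to get comparable numbers out of different environments is to pull the
migration records from the SUMMARY table and work out MB/s from the bytes and
the elapsed time. A sketch (column names as I remember them on 6.x, and the
1-day window is arbitrary; please verify on your own server):

  select start_time, end_time, entity, bytes from summary
   where activity='MIGRATION' and start_time>(current_timestamp - 1 day)

Dividing BYTES by the elapsed seconds gives a rough throughput figure you can
compare between the dedupe and non-dedupe cases.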

Any inputs on same will really help.

Thanks,
Sachin C.


Migration tape mount thrashing despite collocation

2013-08-05 Thread Roger Deschner
I've been going over logs of tape mounts, looking for a tape performance
issue. I watched the tape library for a while during migration, and the
robot was being kept very busy, mounting and dismounting the same set of
tapes over and over. When each tape volume is mounted and dismounted as
many as 6 times during a single migration, something is not right. Not
only is this rather slow, but it is wearing out the tapes.

Group collocation is set for both the DEVTYPE=FILE disk stgpool, and the
next one in the hierarchy, which is tape. The collocation groups are
sensibly-sized - total size of each group is about (capacity of one tape
* number of tape drives), to optimize large restores.

However, I'm seeing this tape thrashing as migration proceeds. Why is
this happening with the same collocation in effect for both stgpools?
Shouldn't migration be optimized for tape mounts when the collocation
definitions are the same?

BTW, I looked over the tape mounts for a different server which migrates
from DEVCLASS=DISK (random, no collocation) to tape, and I saw the same
effect, though not quite as bad as DEVTYPE=FILE (sequential, group
collocation) to tape.

I have set MAXCAP on the devclass to 25gb. Would making that larger
help? I'm not sure it would, since the random-access stgpool acted
mostly the same way.
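
If anyone wants to try it, raising the FILE volume size is just a device class
update, and only newly created volumes pick up the new size; the class name
and value below are examples rather than the real ones:

  query devclass FILECLASS f=d                (check the current Max Capacity)
  update devclass FILECLASS maxcapacity=100G  (affects new FILE volumes only;
                                               existing 25GB volumes keep their size)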

This is TSM 6.2 on AIX, with LTO-4 and LTO-5 tape.

Roger Deschner  University of Illinois at Chicago rog...@uic.edu
==I have not lost my mind -- it is backed up on tape somewhere.=


Re: migration issues

2013-04-17 Thread Stef Coene
On Wednesday 17 April 2013 19:09:31 you wrote:
> Hi,
>
> the output is very interlaced and so not that easily readable. Which
> command did you use to migrate the data from one pool to the other one?
Either an update stgpool with the low and high parameters or a migrate stgpool
triggers the same result.

The strange thing is that I just started the migration as a test and now the
correct storage pool is used!
I have no idea why, but it's working :)


Stef


Re: migration issues

2013-04-17 Thread Alex Paschal

Hi, Stef.  There is only a loose correlation between the devclass
directory and the volume directories.  Would you please post the output of:

q vol /home/app/tsm/diskpool/diskpool13/diskpool/297D.BFS


On 4/17/2013 2:42 AM, Stef Coene wrote:

Hi,

We have a strange storage pool problem.
POOL48 is next storage pool of DISKPOOL.
Both are from type FILE.

The output of q stgp is not very clear, but you can see that POOL48 is next
storage pool of DISKPOOL:
Storage     Device       Estimated   Pct    Pct   High  Low   Next Stora-
Pool Name   Class Name   Capacity    Util   Migr  Mig   Mig   ge Pool
                                                  Pct   Pct
----------- ------------ ---------- ------ ------ ----- ----- -----------
DISKPOOL    FILE_DISK-    15.699 G   87,2   87,2   100    99  POOL48
             POOL
POOL48      FILE_POOL-     9.878 G   29,5   29,5    90    70
             48

However, when we start migration, the data is written from pool DISKPOOL to
DISKPOOL  just like a reclamation!:

  442 Migration     Volume /home/app/tsm/diskpool/diskpool16/diskpool/08C8.BFS
                    (storage pool DISKPOOL), Moved Files: 73,
                    Moved Bytes: 768,089,649, Unreadable Files: 0,
                    Unreadable Bytes: 0. Current Physical
                    File (bytes): 30,412,763
                    Current input volume:
                    /home/app/tsm/diskpool/diskpool16/diskpool/08C8.BFS.
                    Current output volume(s):
                    /home/app/tsm/diskpool/diskpool13/diskpool/297D.BFS.

The files in /home/app/tsm/diskpool/ are all from DISKPOOL.  POOL48 has its
files in another directory:

From "q devcl file_pool48 f=d":
Directory: /home/app/tsm/pool48/diskpool20/diskpool

And of course DISKPOOL is never migrated to POOL48.
How can this happen?


Stef



Re: migration issues

2013-04-17 Thread Michael Roesch
Hi,

the output is very interlaced and so not that easily readable. Which
command did you use to migrate the data from one pool to the other one?

Regards,
Michael


On Wed, Apr 17, 2013 at 11:42 AM, Stef Coene  wrote:

> Hi,
>
> We have a strange storage pool problem.
> POOL48 is next storage pool of DISKPOOL.
> Both are from type FILE.
>
> The output of q stgp is not very clear, but you can see that POOL48 is next
> storage pool of DISKPOOL:
> Storage     Device       Estimated   Pct    Pct   High  Low   Next Stora-
> Pool Name   Class Name   Capacity    Util   Migr  Mig   Mig   ge Pool
>                                                  Pct   Pct
> ----------- ------------ ---------- ------ ------ ----- ----- -----------
> DISKPOOL    FILE_DISK-    15.699 G   87,2   87,2   100    99  POOL48
>              POOL
> POOL48      FILE_POOL-     9.878 G   29,5   29,5    90    70
>              48
>
> However, when we start migration, the data is written from pool DISKPOOL to
> DISKPOOL  just like a reclamation!:
>
>  442 Migration     Volume /home/app/tsm/diskpool/diskpool16/diskpool/08C8.BFS
>                    (storage pool DISKPOOL), Moved Files: 73,
>                    Moved Bytes: 768,089,649, Unreadable Files: 0,
>                    Unreadable Bytes: 0. Current Physical
>                    File (bytes): 30,412,763
>                    Current input volume:
>                    /home/app/tsm/diskpool/diskpool16/diskpool/08C8.BFS.
>                    Current output volume(s):
>                    /home/app/tsm/diskpool/diskpool13/diskpool/297D.BFS.
>
> The files in  /home/app/tsm/diskpool/ are all from DISKPOOL.  POOL48 has
> his
> files in an other directory:
>
> From "q devcl file_pool48 f=d":
> Directory: /home/app/tsm/pool48/diskpool20/diskpool
>
> And of course DISKPOOL is never migrated to POOL48.
> How can this happen?
>
>
> Stef
>


migration issues

2013-04-17 Thread Stef Coene
Hi,

We have a strange storage pool problem.
POOL48 is next storage pool of DISKPOOL.
Both are from type FILE.

The output of q stgp is not very clear, but you can see that POOL48 is next
storage pool of DISKPOOL:
Storage     Device       Estimated   Pct    Pct   High  Low   Next Stora-
Pool Name   Class Name   Capacity    Util   Migr  Mig   Mig   ge Pool
                                                  Pct   Pct
----------- ------------ ---------- ------ ------ ----- ----- -----------
DISKPOOL    FILE_DISK-    15.699 G   87,2   87,2   100    99  POOL48
             POOL
POOL48      FILE_POOL-     9.878 G   29,5   29,5    90    70
             48

However, when we start migration, the data is written from pool DISKPOOL to
DISKPOOL  just like a reclamation!:

  442 Migration     Volume /home/app/tsm/diskpool/diskpool16/diskpool/08C8.BFS
                    (storage pool DISKPOOL), Moved Files: 73,
                    Moved Bytes: 768,089,649, Unreadable Files: 0,
                    Unreadable Bytes: 0. Current Physical
                    File (bytes): 30,412,763
                    Current input volume:
                    /home/app/tsm/diskpool/diskpool16/diskpool/08C8.BFS.
                    Current output volume(s):
                    /home/app/tsm/diskpool/diskpool13/diskpool/297D.BFS.

The files in /home/app/tsm/diskpool/ are all from DISKPOOL.  POOL48 has its
files in another directory:

From "q devcl file_pool48 f=d":
Directory: /home/app/tsm/pool48/diskpool20/diskpool

And of course DISKPOOL is never migrated to POOL48.
How can this happen?
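
A quick way to pin this down is to ask the server which pool and device class
the output volume really belongs to, and which directories each device class
writes into. A sketch using the names above (the device class names are guesses
based on the wrapped q stgpool display):

  q devclass file_diskpool f=d     (look at the Directory line)
  q devclass file_pool48 f=d
  q volume /home/app/tsm/diskpool/diskpool13/diskpool/297D.BFS f=d
  select volume_name, stgpool_name, devclass_name from volumes
   where volume_name like '%297D.BFS'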


Stef


SAP Migration - sanity check

2013-03-21 Thread Steven Langdale
Hi All

I need to migrate an SAP environment from one country to another - and would
appreciate a sanity check of my thoughts so far...

Current environ is SAP on Oracle on AIX, TSM (5.5) is on Windows.
Target environ is SAP on Oracle on AIX, TSM (5.5) is on AIX.

My bit is the db migration.  The customer wants to do a standard
brbackup/brrestore on their end.

With that in mind, I'm looking to do the brbackup then export to tape.
Ship the tapes, then import into TSM.  (The bandwidth & latency between the
sites will not support an export direct to the target).

I think it is, but is that doable from TSM on Windows to TSM on AIX?
Also, is there an easy way to encrypt the export tape? i.e. without
resorting to tape encryption and key managers etc.
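
For what it's worth, a bare-bones node export/import via tape looks roughly
like the following; the node, device class and volume names are placeholders,
and although server export/import is generally platform-independent, check the
compatibility notes for your exact 5.5 levels before relying on it:

  On the source (Windows) server:
    export node SAPPROD filedata=all devclass=LTOCLASS scratch=yes

  Ship the tapes, check them into the target (AIX) library, then:
    import node SAPPROD filedata=all devclass=LTOCLASS volumenames=VOL001,VOL002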

Thanks

Steven


Re: Odd Migration Behaviour

2013-02-14 Thread Gee, Norman
What about placing this problem node into its own collocation group? Everything 
else should migrate.
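
Something along these lines, with made-up group and node names (remember to
remove the node from its current group first):

  delete collocmember BIGGROUP SLOWNODE
  define collocgroup SLOWPOKES description="long-running client kept on its own tapes"
  define collocmember SLOWPOKES SLOWNODE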

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of white 
jeff
Sent: Thursday, February 14, 2013 12:35 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: Odd Migration Behaviour

Hi

Thanks for the response. There was a backup in progress, but it had only
backed up 7gb. There was about 1.5tb in the pool. The other clients had
finished backing up several hours earlier. Migs ran as expected, but
completed, leaving 1.5tb in the pool

I will find out tomorrow morning when I get back to the client site what
the %migr and %util values are.

Thanks again

On 14 February 2013 17:55, Prather, Wanda  wrote:

> Do Q STGPOOL and look for %MIGR, "per cent migratable".
>
> If %MIGR is less than %UTIL, then there are chunks in the pool that aren't
> eligible and can't be migrated out yet, because there are transactions in
> progress (usually a backup still running).
>
>
> -Original Message-
> From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
> Shawn DREW
> Sent: Thursday, February 14, 2013 11:20 AM
> To: ADSM-L@VM.MARIST.EDU
> Subject: Re: [ADSM-L] Odd Migration Behaviour
>
> Do you have a migration delay setting on the disk pool by any chance?
>
>
> Regards,
> Shawn
> 
> Shawn Drew
>
>
> > -Original Message-
> > From: ADSM-L@VM.MARIST.EDU [mailto:ADSM-L@VM.MARIST.EDU]
> > Sent: Thursday, February 14, 2013 5:41 AM
> > To: ADSM-L@VM.MARIST.EDU
> > Subject: [ADSM-L] Odd Migration Behaviour
> >
> > Hi
> >
> > TSM Server v6.3.1
> >
> > Some odd behaviour when migrating a disk pool to tape
> >
> > The disk pool (devicetype=disk) is 6tb in size and has approximately
> 2.5tb of
> > data in it from last nights backups.
> >
> >
> > I backup the stgpool, it copies 2.5tb to tape. Fine with this
> >
> > I mig stgpool lo=0 maxpr=3. Start fine.
> >
> > (I do not use a duration parameter)
> >
> > First migration finishes after a few minutes,  migrating 500mb. Lower
> than i
> > expected, i guess Second migration finishes after 30 minutes, migrating
> > 138gb.
> > Third migration finishes after 1.5 hours, migrating 819gb
> >
> > But the pool now shows about 24% utilised, so still about 1.5tb of data
> > remaining
> >
> >
> > The tape pool the migrations are writing to has colloocation=group
> specified.
> > I have three collocgroups, containg approximately 250 nodes. All of the
> > nodes on the server are within these 3 groups I noticed that one of the
> > clients was still backing up. It's a slow backup, always has been. That
> node is
> > in one of the collocgroups.
> > When that client backup completed, i ran migration again with lo=0 and
> it is
> > now beyond 1tb and still running. Pct utilisation of the disk pool is
> now down
> > to 10%
> >
> >
> > So, because the backup of that node is still running, will that prevent
> > migration from migrating data from that specific collocgroup while a
> backup
> > of a client within that group is in process?
> >
> > Any comments welcome.
> >
> > Regards
>
>
> This message and any attachments (the "message") is intended solely for
> the addressees and is confidential. If you receive this message in error,
> please delete it and immediately notify the sender. Any use not in accord
> with its purpose, any dissemination or disclosure, either whole or partial,
> is prohibited except formal approval. The internet can not guarantee the
> integrity of this message. BNP PARIBAS (and its subsidiaries) shall (will)
> not therefore be liable for the message if modified. Please note that
> certain
> functions and services for BNP Paribas may be performed by BNP Paribas
> RCC, Inc.
>


Re: Odd Migration Behaviour

2013-02-14 Thread white jeff
Hi

Thanks for the response. There was a backup in progress, but it had only
backed up 7gb. There was about 1.5tb in the pool. The other clients had
finished backing up several hours earlier. Migs ran as expected, but
completed, leaving 1.5tb in the pool

I will find out tomorrow morning when I get back to the client site what
the %migr and %util values are.

Thanks again

On 14 February 2013 17:55, Prather, Wanda  wrote:

> Do Q STGPOOL and look for %MIGR, "per cent migratable".
>
> If %MIGR is less than %UTIL, then there are chunks in the pool that aren't
> eligible and can't be migrated out yet, because there are transactions in
> progress (usually a backup still running).
>
>
> -Original Message-
> From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
> Shawn DREW
> Sent: Thursday, February 14, 2013 11:20 AM
> To: ADSM-L@VM.MARIST.EDU
> Subject: Re: [ADSM-L] Odd Migration Behaviour
>
> Do you have a migration delay setting on the disk pool by any chance?
>
>
> Regards,
> Shawn
> 
> Shawn Drew
>
>
> > -Original Message-
> > From: ADSM-L@VM.MARIST.EDU [mailto:ADSM-L@VM.MARIST.EDU]
> > Sent: Thursday, February 14, 2013 5:41 AM
> > To: ADSM-L@VM.MARIST.EDU
> > Subject: [ADSM-L] Odd Migration Behaviour
> >
> > Hi
> >
> > TSM Server v6.3.1
> >
> > Some odd behaviour when migrating a disk pool to tape
> >
> > The disk pool (devicetype=disk) is 6tb in size and has approximately
> 2.5tb of
> > data in it from last nights backups.
> >
> >
> > I backup the stgpool, it copies 2.5tb to tape. Fine with this
> >
> > I mig stgpool lo=0 maxpr=3. Start fine.
> >
> > (I do not use a duration parameter)
> >
> > First migration finishes after a few minutes,  migrating 500mb. Lower
> than i
> > expected, i guess Second migration finishes after 30 minutes, migrating
> > 138gb.
> > Third migration finishes after 1.5 hours, migrating 819gb
> >
> > But the pool now shows about 24% utilised, so still about 1.5tb of data
> > remaining
> >
> >
> > The tape pool the migrations are writing to has colloocation=group
> specified.
> > I have three collocgroups, containg approximately 250 nodes. All of the
> > nodes on the server are within these 3 groups I noticed that one of the
> > clients was still backing up. It's a slow backup, always has been. That
> node is
> > in one of the collocgroups.
> > When that client backup completed, i ran migration again with lo=0 and
> it is
> > now beyond 1tb and still running. Pct utilisation of the disk pool is
> now down
> > to 10%
> >
> >
> > So, because the backup of that node is still running, will that prevent
> > migration from migrating data from that specific collocgroup while a
> backup
> > of a client within that group is in process?
> >
> > Any comments welcome.
> >
> > Regards
>
>
> This message and any attachments (the "message") is intended solely for
> the addressees and is confidential. If you receive this message in error,
> please delete it and immediately notify the sender. Any use not in accord
> with its purpose, any dissemination or disclosure, either whole or partial,
> is prohibited except formal approval. The internet can not guarantee the
> integrity of this message. BNP PARIBAS (and its subsidiaries) shall (will)
> not therefore be liable for the message if modified. Please note that
> certain
> functions and services for BNP Paribas may be performed by BNP Paribas
> RCC, Inc.
>


Re: Odd Migration Behaviour

2013-02-14 Thread white jeff
Hi. No, migdelay set to 0

Thanks

On 14 February 2013 16:19, Shawn DREW  wrote:

> Do you have a migration delay setting on the disk pool by any chance?
>
>
> Regards,
> Shawn
> 
> Shawn Drew
>
>
> > -Original Message-
> > From: ADSM-L@VM.MARIST.EDU [mailto:ADSM-L@VM.MARIST.EDU]
> > Sent: Thursday, February 14, 2013 5:41 AM
> > To: ADSM-L@VM.MARIST.EDU
> > Subject: [ADSM-L] Odd Migration Behaviour
> >
> > Hi
> >
> > TSM Server v6.3.1
> >
> > Some odd behaviour when migrating a disk pool to tape
> >
> > The disk pool (devicetype=disk) is 6tb in size and has approximately
> 2.5tb of
> > data in it from last nights backups.
> >
> >
> > I backup the stgpool, it copies 2.5tb to tape. Fine with this
> >
> > I mig stgpool lo=0 maxpr=3. Start fine.
> >
> > (I do not use a duration parameter)
> >
> > First migration finishes after a few minutes,  migrating 500mb. Lower
> than i
> > expected, i guess Second migration finishes after 30 minutes, migrating
> > 138gb.
> > Third migration finishes after 1.5 hours, migrating 819gb
> >
> > But the pool now shows about 24% utilised, so still about 1.5tb of data
> > remaining
> >
> >
> > The tape pool the migrations are writing to has colloocation=group
> specified.
> > I have three collocgroups, containg approximately 250 nodes. All of the
> > nodes on the server are within these 3 groups I noticed that one of the
> > clients was still backing up. It's a slow backup, always has been. That
> node is
> > in one of the collocgroups.
> > When that client backup completed, i ran migration again with lo=0 and
> it is
> > now beyond 1tb and still running. Pct utilisation of the disk pool is
> now down
> > to 10%
> >
> >
> > So, because the backup of that node is still running, will that prevent
> > migration from migrating data from that specific collocgroup while a
> backup
> > of a client within that group is in process?
> >
> > Any comments welcome.
> >
> > Regards
>
>
> This message and any attachments (the "message") is intended solely for
> the addressees and is confidential. If you receive this message in error,
> please delete it and immediately notify the sender. Any use not in accord
> with its purpose, any dissemination or disclosure, either whole or partial,
> is prohibited except formal approval. The internet can not guarantee the
> integrity of this message. BNP PARIBAS (and its subsidiaries) shall (will)
> not therefore be liable for the message if modified. Please note that
> certain
> functions and services for BNP Paribas may be performed by BNP Paribas
> RCC, Inc.
>


Re: Odd Migration Behaviour

2013-02-14 Thread Prather, Wanda
Do Q STGPOOL and look for %MIGR, "per cent migratable".

If %MIGR is less than %UTIL, then there are chunks in the pool that aren't 
eligible and can't be migrated out yet, because there are transactions in 
progress (usually a backup still running).  
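
The same comparison is easy to script against the STGPOOLS table if you want to
watch it from the command line; the pool name below is a placeholder:

  select stgpool_name, pct_utilized, pct_migr from stgpools
   where stgpool_name='DISKPOOL'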


-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Shawn 
DREW
Sent: Thursday, February 14, 2013 11:20 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Odd Migration Behaviour

Do you have a migration delay setting on the disk pool by any chance?


Regards, 
Shawn

Shawn Drew


> -Original Message-
> From: ADSM-L@VM.MARIST.EDU [mailto:ADSM-L@VM.MARIST.EDU]
> Sent: Thursday, February 14, 2013 5:41 AM
> To: ADSM-L@VM.MARIST.EDU
> Subject: [ADSM-L] Odd Migration Behaviour
> 
> Hi
> 
> TSM Server v6.3.1
> 
> Some odd behaviour when migrating a disk pool to tape
> 
> The disk pool (devicetype=disk) is 6tb in size and has approximately 2.5tb of
> data in it from last nights backups.
> 
> 
> I backup the stgpool, it copies 2.5tb to tape. Fine with this
> 
> I mig stgpool lo=0 maxpr=3. Start fine.
> 
> (I do not use a duration parameter)
> 
> First migration finishes after a few minutes,  migrating 500mb. Lower than i
> expected, i guess Second migration finishes after 30 minutes, migrating
> 138gb.
> Third migration finishes after 1.5 hours, migrating 819gb
> 
> But the pool now shows about 24% utilised, so still about 1.5tb of data
> remaining
> 
> 
> The tape pool the migrations are writing to has colloocation=group specified.
> I have three collocgroups, containg approximately 250 nodes. All of the
> nodes on the server are within these 3 groups I noticed that one of the
> clients was still backing up. It's a slow backup, always has been. That node 
> is
> in one of the collocgroups.
> When that client backup completed, i ran migration again with lo=0 and it is
> now beyond 1tb and still running. Pct utilisation of the disk pool is now down
> to 10%
> 
> 
> So, because the backup of that node is still running, will that prevent
> migration from migrating data from that specific collocgroup while a backup
> of a client within that group is in process?
> 
> Any comments welcome.
> 
> Regards


This message and any attachments (the "message") is intended solely for 
the addressees and is confidential. If you receive this message in error, 
please delete it and immediately notify the sender. Any use not in accord 
with its purpose, any dissemination or disclosure, either whole or partial, 
is prohibited except formal approval. The internet can not guarantee the 
integrity of this message. BNP PARIBAS (and its subsidiaries) shall (will) 
not therefore be liable for the message if modified. Please note that certain 
functions and services for BNP Paribas may be performed by BNP Paribas RCC, Inc.


Re: Odd Migration Behaviour

2013-02-14 Thread Shawn DREW
Do you have a migration delay setting on the disk pool by any chance?


Regards, 
Shawn

Shawn Drew


> -Original Message-
> From: ADSM-L@VM.MARIST.EDU [mailto:ADSM-L@VM.MARIST.EDU]
> Sent: Thursday, February 14, 2013 5:41 AM
> To: ADSM-L@VM.MARIST.EDU
> Subject: [ADSM-L] Odd Migration Behaviour
> 
> Hi
> 
> TSM Server v6.3.1
> 
> Some odd behaviour when migrating a disk pool to tape
> 
> The disk pool (devicetype=disk) is 6tb in size and has approximately 2.5tb of
> data in it from last nights backups.
> 
> 
> I backup the stgpool, it copies 2.5tb to tape. Fine with this
> 
> I mig stgpool lo=0 maxpr=3. Start fine.
> 
> (I do not use a duration parameter)
> 
> First migration finishes after a few minutes,  migrating 500mb. Lower than i
> expected, i guess Second migration finishes after 30 minutes, migrating
> 138gb.
> Third migration finishes after 1.5 hours, migrating 819gb
> 
> But the pool now shows about 24% utilised, so still about 1.5tb of data
> remaining
> 
> 
> The tape pool the migrations are writing to has colloocation=group specified.
> I have three collocgroups, containg approximately 250 nodes. All of the
> nodes on the server are within these 3 groups I noticed that one of the
> clients was still backing up. It's a slow backup, always has been. That node 
> is
> in one of the collocgroups.
> When that client backup completed, i ran migration again with lo=0 and it is
> now beyond 1tb and still running. Pct utilisation of the disk pool is now down
> to 10%
> 
> 
> So, because the backup of that node is still running, will that prevent
> migration from migrating data from that specific collocgroup while a backup
> of a client within that group is in process?
> 
> Any comments welcome.
> 
> Regards


This message and any attachments (the "message") is intended solely for 
the addressees and is confidential. If you receive this message in error, 
please delete it and immediately notify the sender. Any use not in accord 
with its purpose, any dissemination or disclosure, either whole or partial, 
is prohibited except formal approval. The internet can not guarantee the 
integrity of this message. BNP PARIBAS (and its subsidiaries) shall (will) 
not therefore be liable for the message if modified. Please note that certain 
functions and services for BNP Paribas may be performed by BNP Paribas RCC, Inc.


Odd Migration Behaviour

2013-02-14 Thread white jeff
Hi

TSM Server v6.3.1

Some odd behaviour when migrating a disk pool to tape

The disk pool (devicetype=disk) is 6tb in size and has approximately
2.5tb of data in it from last nights backups.


I backup the stgpool, it copies 2.5tb to tape. Fine with this

I mig stgpool lo=0 maxpr=3. Start fine.

(I do not use a duration parameter)

First migration finishes after a few minutes, migrating 500mb. Lower
than I expected, I guess.
Second migration finishes after 30 minutes, migrating 138gb.
Third migration finishes after 1.5 hours, migrating 819gb

But the pool now shows about 24% utilised, so still about 1.5tb of
data remaining


The tape pool the migrations are writing to has collocation=group specified.
I have three collocgroups, containing approximately 250 nodes. All of
the nodes on the server are within these 3 groups.
I noticed that one of the clients was still backing up. It's a slow
backup, always has been. That node is in one of the collocgroups.
When that client backup completed, I ran migration again with lo=0 and
it is now beyond 1tb and still running. Pct utilisation of the disk
pool is now down to 10%.


So, because the backup of that node is still running, will that
prevent migration from migrating data from that specific collocgroup
while a backup of a client within that group is in process?

Any comments welcome.

Regards


Re: Migration duration default?

2012-04-11 Thread Zoltan Forray/AC/VCU
Nope.  Went directly to the "console" and typed the command.  Activity log
shows the command just as I entered it.


Zoltan Forray
TSM Software & Hardware Administrator
Virginia Commonwealth University
UCC/Office of Technology Services
zfor...@vcu.edu - 804-828-4807
Don't be a phishing victim - VCU and other reputable organizations will
never use email to request that you reply with your password, social
security number or confidential personal information. For more details
visit http://infosecurity.vcu.edu/phishing.html



From:   Rick Adamson 
To: ADSM-L@VM.MARIST.EDU
Date:   04/11/2012 12:59 PM
Subject:    Re: [ADSM-L] Migration duration default?
Sent by:"ADSM: Dist Stor Manager" 



Do you use a "maintenance" script?

When configured you have the option to specify the duration value,
perhaps this is where the message is coming from.

Query the script using: q script  f=l

And check the lines regarding migration


~Rick Adamson
Jacksonville, FL.


-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
Zoltan Forray/AC/VCU
Sent: Wednesday, April 11, 2012 11:51 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Migration duration default?

My point exactly.  I DID NOT specify a DUR value - I never do.  Help
says
it should have run until lo=value, which I set to 0.



From:   "Ehresman,David E." 
To: ADSM-L@VM.MARIST.EDU
Date:   04/11/2012 11:45 AM
Subject:Re: [ADSM-L] Migration duration default?
Sent by:"ADSM: Dist Stor Manager" 



"help mig stg" on TSM 6.2 on AIX says " DUration
   Specifies the maximum number of minutes the migration will run before
   being automatically cancelled. When the specified number of minutes
   elapses, the server will automatically cancel all migration processes
   for this storage pool. As soon as the processes recognize the
   automatic cancellation, they will end. As a result, the migration may
   run longer than the value you specified for this parameter. You can
   specify a number from 1 to . This parameter is optional. If not
   specified, the server will stop only after the low migration
more...   ( to continue, 'C' to cancel)

   threshold is reached."

Don't know if it is different for linux or 6.1 but try issuing the help
command.

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
Zoltan Forray/AC/VCU
Sent: Wednesday, April 11, 2012 9:44 AM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] Migration duration default?

Linux Server - 6.1.5.100

Is there a default value for manual Migration, that I am not aware of? I
never use the DUR value.  Yesterday around 14:40, I started a manual
migration (mig stg backuppool lo=0).  This morning at 7:26am I see the
message:

ANR4925W Migration process 1727 terminated for storage pool BACKUPPOOL -
duration exceeded.

Zoltan Forray
TSM Software & Hardware Administrator
Virginia Commonwealth University
UCC/Office of Technology Services
zfor...@vcu.edu - 804-828-4807
Don't be a phishing victim - VCU and other reputable organizations will
never use email to request that you reply with your password, social
security number or confidential personal information. For more details
visit http://infosecurity.vcu.edu/phishing.html


Re: Migration duration default?

2012-04-11 Thread Rick Adamson
Do you use a "maintenance" script?

When configured you have the option to specify the duration value,
perhaps this is where the message is coming from.

Query the script using: q script  f=l

And check the lines regarding migration
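
To track the DURATION down, it is worth dumping both the scripts and any
administrative schedules and looking for a MIGRATE STGPOOL ... DURATION= in
them. A sketch (the script name is a placeholder, and the exact query syntax
should be checked on your level):

  q script MAINTENANCE f=lines
  q schedule * type=administrative f=d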


~Rick Adamson
Jacksonville, FL.


-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
Zoltan Forray/AC/VCU
Sent: Wednesday, April 11, 2012 11:51 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Migration duration default?

My point exactly.  I DID NOT specify a DUR value - I never do.  Help
says
it should have run until lo=value, which I set to 0.



From:   "Ehresman,David E." 
To: ADSM-L@VM.MARIST.EDU
Date:   04/11/2012 11:45 AM
Subject:    Re: [ADSM-L] Migration duration default?
Sent by:"ADSM: Dist Stor Manager" 



"help mig stg" on TSM 6.2 on AIX says " DUration
   Specifies the maximum number of minutes the migration will run before
   being automatically cancelled. When the specified number of minutes
   elapses, the server will automatically cancel all migration processes
   for this storage pool. As soon as the processes recognize the
   automatic cancellation, they will end. As a result, the migration may
   run longer than the value you specified for this parameter. You can
   specify a number from 1 to . This parameter is optional. If not
   specified, the server will stop only after the low migration
more...   ( to continue, 'C' to cancel)

   threshold is reached."

Don't know if it is different for linux or 6.1 but try issuing the help
command.

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
Zoltan Forray/AC/VCU
Sent: Wednesday, April 11, 2012 9:44 AM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] Migration duration default?

Linux Server - 6.1.5.100

Is there a default value for manual Migration, that I am not aware of? I
never use the DUR value.  Yesterday around 14:40, I started a manual
migration (mig stg backuppool lo=0).  This morning at 7:26am I see the
message:

ANR4925W Migration process 1727 terminated for storage pool BACKUPPOOL -
duration exceeded.

Zoltan Forray
TSM Software & Hardware Administrator
Virginia Commonwealth University
UCC/Office of Technology Services
zfor...@vcu.edu - 804-828-4807
Don't be a phishing victim - VCU and other reputable organizations will
never use email to request that you reply with your password, social
security number or confidential personal information. For more details
visit http://infosecurity.vcu.edu/phishing.html


Re: Migration duration default?

2012-04-11 Thread Zoltan Forray/AC/VCU
My point exactly.  I DID NOT specify a DUR value - I never do.  Help says
it should have run until lo=value, which I set to 0.



From:   "Ehresman,David E." 
To: ADSM-L@VM.MARIST.EDU
Date:   04/11/2012 11:45 AM
Subject:Re: [ADSM-L] Migration duration default?
Sent by:"ADSM: Dist Stor Manager" 



"help mig stg" on TSM 6.2 on AIX says " DUration
   Specifies the maximum number of minutes the migration will run before
   being automatically cancelled. When the specified number of minutes
   elapses, the server will automatically cancel all migration processes
   for this storage pool. As soon as the processes recognize the
   automatic cancellation, they will end. As a result, the migration may
   run longer than the value you specified for this parameter. You can
   specify a number from 1 to . This parameter is optional. If not
   specified, the server will stop only after the low migration
more...   ( to continue, 'C' to cancel)

   threshold is reached."

Don't know if it is different for linux or 6.1 but try issuing the help
command.

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
Zoltan Forray/AC/VCU
Sent: Wednesday, April 11, 2012 9:44 AM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] Migration duration default?

Linux Server - 6.1.5.100

Is there a default value for manual Migration, that I am not aware of? I
never use the DUR value.  Yesterday around 14:40, I started a manual
migration (mig stg backuppool lo=0).  This morning at 7:26am I see the
message:

ANR4925W Migration process 1727 terminated for storage pool BACKUPPOOL -
duration exceeded.

Zoltan Forray
TSM Software & Hardware Administrator
Virginia Commonwealth University
UCC/Office of Technology Services
zfor...@vcu.edu - 804-828-4807
Don't be a phishing victim - VCU and other reputable organizations will
never use email to request that you reply with your password, social
security number or confidential personal information. For more details
visit http://infosecurity.vcu.edu/phishing.html


Re: Migration duration default?

2012-04-11 Thread Ehresman,David E.
"help mig stg" on TSM 6.2 on AIX says " DUration
   Specifies the maximum number of minutes the migration will run before
   being automatically cancelled. When the specified number of minutes
   elapses, the server will automatically cancel all migration processes
   for this storage pool. As soon as the processes recognize the
   automatic cancellation, they will end. As a result, the migration may
   run longer than the value you specified for this parameter. You can
   specify a number from 1 to . This parameter is optional. If not
   specified, the server will stop only after the low migration
more...   ( to continue, 'C' to cancel)

   threshold is reached."

Don't know if it is different for linux or 6.1 but try issuing the help command.

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Zoltan 
Forray/AC/VCU
Sent: Wednesday, April 11, 2012 9:44 AM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] Migration duration default?

Linux Server - 6.1.5.100

Is there a default value for manual Migration, that I am not aware of? I
never use the DUR value.  Yesterday around 14:40, I started a manual
migration (mig stg backuppool lo=0).  This morning at 7:26am I see the
message:

ANR4925W Migration process 1727 terminated for storage pool BACKUPPOOL -
duration exceeded.

Zoltan Forray
TSM Software & Hardware Administrator
Virginia Commonwealth University
UCC/Office of Technology Services
zfor...@vcu.edu - 804-828-4807
Don't be a phishing victim - VCU and other reputable organizations will
never use email to request that you reply with your password, social
security number or confidential personal information. For more details
visit http://infosecurity.vcu.edu/phishing.html


Migration duration default?

2012-04-11 Thread Zoltan Forray/AC/VCU
Linux Server - 6.1.5.100

Is there a default value for manual Migration, that I am not aware of? I
never use the DUR value.  Yesterday around 14:40, I started a manual
migration (mig stg backuppool lo=0).  This morning at 7:26am I see the
message:

ANR4925W Migration process 1727 terminated for storage pool BACKUPPOOL -
duration exceeded.

Zoltan Forray
TSM Software & Hardware Administrator
Virginia Commonwealth University
UCC/Office of Technology Services
zfor...@vcu.edu - 804-828-4807
Don't be a phishing victim - VCU and other reputable organizations will
never use email to request that you reply with your password, social
security number or confidential personal information. For more details
visit http://infosecurity.vcu.edu/phishing.html


Re: migration threads for random access pool and backups issue

2012-03-15 Thread Daniel Sparrman
Well,

a) Large servers with small files should always go to a random diskpool. 
Sending them to tape (or a sequential filepool) will most certainly reduce 
performance. So NO, keep sending those small files to a random disk pool. If 
it's not big enough, increase the size; don't try sending the data somewhere 
else.

b) For ALL storage pools, there will be 1 migration process per node. It doesn't 
matter if it's random, sequential or a CD. It's always going to be 1 process per 
node.

c) Migration processes are usually not the issue. The issue is somewhere else. 
So don't fixate on the 1 migration process per node. If you're having 
performance issues, it's most likely not the migration process.

d) If you get I/O wait, the disks containing your disk pools are most likely not 
properly configured. Remember that the basic idea of performance for a random 
disk pool is to have enough spindles (as in, having enough hard drives in 
whatever disk system you're using). The only way to increase performance 
long-term is to increase the number of spindles. The disk system's memory 
cache is always helpful, but when it gets filled (and it will), it will need to 
be emptied to physical spindles. If your disks can't handle that load, you have a 
bottleneck.

e) When you increase your diskpools to 10TB, make sure that the increase is 
done over new spindles (hard drives) and not the same ones you're already using. 
Expanding the array/LUN across the already-used spindles won't increase your 
performance.

Regards

Daniel

Daniel Sparrman
Exist i Stockholm AB
Växel: 08-754 98 00
Fax: 08-754 97 30
daniel.sparr...@exist.se
http://www.existgruppen.se
Posthusgatan 1 761 30 NORRTÄLJE


-"ADSM: Dist Stor Manager"  skrev: - 
Till: ADSM-L@VM.MARIST.EDU
Från: amit jain 
Sänt av: "ADSM: Dist Stor Manager" 
Datum: 03/15/2012 17:46
Ärende: Re: migration threads for random access pool and backups issue


Thanks to all for these valuable inputs. Appreciate it a lot.

Well this is first time this data is getting backed up.

For now here is what I was not aware and what I have done: 1. On Random
Access pools multiple migration sessions can be generated, only if we
backup on multiple nodes ? Is my understanding correct or there is any way
to increase the number of tape mounts ?

Now i know:  Migration processes depends on nodes. random access storage
has limitation in regards to migration process.  If I had  more than 2
nodes backing up to the same random access storage pools then I could have
more than two migration processes depending on the configuration settings.
If the disk pool gets filled up, data goes to next pool and backups wont
fail.

As in our environment we have  large small number of files so file type
disk pool is not a good idea.  Improving backend speed does not always work
better.  Because the speed coming into TSM server will not be fast enough
or sometimes equal or little bit better compared to the speed dumping data
from disk to tape. This all depends on the type of data to back up. If
there are huge number of files, One migration process is good enough and
have 2 or 3 more additional tape drives allocated to direct backup when
disk pool is overflow. That was much much faster than using file type
devclass with multiple migration processes.

Currently able to backup ~4TB a day. I will be increasing the stg pool size
to 10 TB and hope i will get better performance. These is also bottleneck
from the TSM client side. Seeing IO wait from the client side.


Thanks

Amit




On Sat, Mar 10, 2012 at 9:08 PM, amit jain  wrote:

> Hi,
>
> I have to backup large data ~ 300TB and have small DISK POOL SIZE 500GB. I
> have 3 filespaces, backing up on single node. I am triggering multiple
> dsmc,  dividing the filespace  on directories. I have 15 E06 Tape drives
> and can allocate 5 drives for this backup.
>
> If I run multiple dsmc sessions the, server starts only one migration
> process and one tape mount.
> As per ADMIN GUIDE the Migration for Random Access is "Performed by node.
> Migration from random-access pools can use multiple processes."
>
> My Question:
> 1. On Random Access pools multiple migration sessions can be generated,
> only if we backup on multiple nodes ? Is my understanding correct or there
> is any way to increase the number of tape mounts ?
>
> 2. The only way to speed up with current resources is to backup to File
> device class, so that I can have multiple tape mounts?
>
> 3. Any inputs to speed up this backup?
>
> Server and client both are on Linux, running TSM version 6.2.2
>
> Any suggestions are welcome.
>
> Thanks
> Amit
>

Re: migration threads for random access pool and backups issue

2012-03-15 Thread amit jain
Thanks to all for these valuable inputs. Appreciate it a lot.

Well, this is the first time this data is getting backed up.

For now, here is what I was not aware of and what I had asked: 1. On random
access pools, multiple migration sessions can be generated only if we
back up on multiple nodes? Is my understanding correct, or is there any way
to increase the number of tape mounts?

Now I know: migration processes depend on nodes. Random access storage
has a limitation with regard to migration processes. If I had more than 2
nodes backing up to the same random access storage pools, then I could have
more than two migration processes, depending on the configuration settings.
If the disk pool gets filled up, data goes to the next pool and backups won't
fail.

As our environment has a large number of small files, a file type
disk pool is not a good idea. Improving backend speed does not always work
better, because the speed coming into the TSM server will not be faster than,
or will at best be only slightly better than, the speed of dumping data
from disk to tape. This all depends on the type of data to back up. If
there are a huge number of files, one migration process is good enough, with
2 or 3 additional tape drives allocated for direct backup when the
disk pool overflows. That was much, much faster than using a file type
devclass with multiple migration processes.
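
For comparison, the sequential FILE setup that allows several migration streams
looks roughly like this; the names, sizes and path are invented for the example:

  define devclass BIGFILE devtype=file maxcapacity=50G mountlimit=20 directory=/tsmfile
  define stgpool FILEPOOL BIGFILE maxscratch=500 nextstgpool=TAPEPOOL migprocess=4 hi=70 lo=30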

Currently able to back up ~4TB a day. I will be increasing the stg pool size
to 10 TB and hope I will get better performance. There is also a bottleneck
on the TSM client side. Seeing IO wait from the client side.


Thanks

Amit




On Sat, Mar 10, 2012 at 9:08 PM, amit jain  wrote:

> Hi,
>
> I have to backup large data ~ 300TB and have small DISK POOL SIZE 500GB. I
> have 3 filespaces, backing up on single node. I am triggering multiple
> dsmc,  dividing the filespace  on directories. I have 15 E06 Tape drives
> and can allocate 5 drives for this backup.
>
> If I run multiple dsmc sessions the, server starts only one migration
> process and one tape mount.
> As per ADMIN GUIDE the Migration for Random Access is "Performed by node.
> Migration from random-access pools can use multiple processes."
>
> My Question:
> 1. On Random Access pools multiple migration sessions can be generated,
> only if we backup on multiple nodes ? Is my understanding correct or there
> is any way to increase the number of tape mounts ?
>
> 2. The only way to speed up with current resources is to backup to File
> device class, so that I can have multiple tape mounts?
>
> 3. Any inputs to speed up this backup?
>
> Server and client both are on Linux, running TSM version 6.2.2
>
> Any suggestions are welcome.
>
> Thanks
> Amit
>


Re: SV: migration threads for random access pool and backups issue

2012-03-15 Thread Daniel Sparrman
That's also correct. The hugely negative impact from running migration while 
performing backups is due to the heavy load migration puts on the database 
(aswell as stealing valuable I/O from your disks which should be used for 
backups).

Regards

Daniel


Daniel Sparrman
Exist i Stockholm AB
Växel: 08-754 98 00
Fax: 08-754 97 30
daniel.sparr...@exist.se
http://www.existgruppen.se
Posthusgatan 1 761 30 NORRTÄLJE


-"ADSM: Dist Stor Manager"  skrev: - 
Till: ADSM-L@VM.MARIST.EDU
Från: Rick Adamson 
Sänt av: "ADSM: Dist Stor Manager" 
Datum: 03/15/2012 16:57
Ärende: Re: SV: migration threads for random access pool and backups issue


Also, using a random access disk pool that is undersized will result in 
migration potentially kicking off while the backup/archive process is still 
running. It has been my experience, and I have often read, this situation has 
an enormous negative impact to the TSM server performance.


~Rick

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Daniel 
Sparrman
Sent: Thursday, March 15, 2012 4:51 AM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] SV: migration threads for random access pool and backups issue

Hi 

A migration process will always only use 1 process per node, independent on the 
storage media. So if you migrate disk > tape, it's 1 process per node, and if 
you do tape > tape it's only 1 process per node. 

As for the original poster, you claim that you need to backup 300TB of data. 
I'm guessing this is your entire environment and not a single server. However, 
you describe the process of backing up a fileserver. How large is this server? 

Generally speaking, I'd say that a 500GB diskpool is way to small to handle a 
300TB environment. Originally, you're supposed to size your diskpool to hold 1 
day of incremental backups. Since change is usually around 5-10%, that would be 
15-30TB of disk to hold 1 day. 

However, I recommend sending your database/mail/application backups (actually, 
all large chunks of data) straight to tape since there is no performance 
benefit in sending it to a random pool (as long as you can stream the data 
continously, the tape drive performance should be good enough). 

Sending all of these large chunks straight to tape should reduce the amount of 
disk you need to hold 1 day of incremental backups from your fileservers. 500GB 
is still gonna be to small though, I would expect a couple of TB's of disk to 
handle the 1 day incremental.

Other options to reduce the disk storage needed on your TSM server is to 
introduce client-side deduplication or compression. That will, except for 
reducing the amount of actual TB stored in your disk storage, also reduce the 
amount of data you send over your network.

Regards

Daniel Sparrman


Daniel Sparrman
Exist i Stockholm AB
Växel: 08-754 98 00
Fax: 08-754 97 30
daniel.sparr...@exist.se
http://www.existgruppen.se
Posthusgatan 1 761 30 NORRTÄLJE


-"ADSM: Dist Stor Manager"  skrev: - 
Till: ADSM-L@VM.MARIST.EDU
Från: Christian Svensson 
Sänt av: "ADSM: Dist Stor Manager" 
Datum: 03/15/2012 09:40
Ärende: SV: migration threads for random access pool and backups issue


Hi Amit,
After some investigation I found that TSM can only use one Process (1 drive) 
per node or one process per collocation group.
This is only when you are using Random disk. If you want to use multiple drives 
you need to change from random disk to sequential disk.

Best Regards
Christian Svensson

Cell: +46-70-325 1577
E-mail: christian.svens...@cristie.se
CPU2TSM Support: http://www.cristie.se/cpu2tsm-supported-platforms



Från: Christian Svensson
Skickat: den 14 mars 2012 07:07
Till: ADSM: Dist Stor Manager
Ämne: SV: migration threads for random access pool and backups issue

Hi Wanda,
Glad to see you at Pulse, but I got a similar problem here in Sweden where the 
migration only use one process during migration.
I was thinking of to open a PMR to understand why it only use one drive.

Best Regards
Christian Svensson

Cell: +46-70-325 1577
E-mail: christian.svens...@cristie.se
CPU2TSM Support: http://www.cristie.se/cpu2tsm-supported-platforms



Från: Prather, Wanda [wprat...@icfi.com]
Skickat: den 13 mars 2012 21:36
Till: ADSM-L@VM.MARIST.EDU
Ämne: Re: migration threads for random access pool and backups issue

Since your disk pool is much smaller than what you are backing up, plus you 
have plenty of tape drives, it doesn't make much sense to send those 3 large 
filespaces to the diskpool.

Create a new management class, send each of those 3 filespaces directly to the 
tape drives, bypassing the disk pool.

W

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of amit 
jain
Sent: Sunday, March 11, 2012 12:09 AM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] m

Re: SV: migration threads for random access pool and backups issue

2012-03-15 Thread Rick Adamson
Also, using a random access disk pool that is undersized will result in 
migration potentially kicking off while the backup/archive process is still 
running. It has been my experience, and I have often read, that this situation has 
an enormous negative impact on TSM server performance.


~Rick

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Daniel 
Sparrman
Sent: Thursday, March 15, 2012 4:51 AM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] SV: migration threads for random access pool and backups issue

Hi 

A migration process will always only use 1 process per node, independent on the 
storage media. So if you migrate disk > tape, it's 1 process per node, and if 
you do tape > tape it's only 1 process per node. 

As for the original poster, you claim that you need to backup 300TB of data. 
I'm guessing this is your entire environment and not a single server. However, 
you describe the process of backing up a fileserver. How large is this server? 

Generally speaking, I'd say that a 500GB diskpool is way to small to handle a 
300TB environment. Originally, you're supposed to size your diskpool to hold 1 
day of incremental backups. Since change is usually around 5-10%, that would be 
15-30TB of disk to hold 1 day. 

However, I recommend sending your database/mail/application backups (actually, 
all large chunks of data) straight to tape since there is no performance 
benefit in sending it to a random pool (as long as you can stream the data 
continously, the tape drive performance should be good enough). 

Sending all of these large chunks straight to tape should reduce the amount of 
disk you need to hold 1 day of incremental backups from your fileservers. 500GB 
is still gonna be to small though, I would expect a couple of TB's of disk to 
handle the 1 day incremental.

Other options to reduce the disk storage needed on your TSM server is to 
introduce client-side deduplication or compression. That will, except for 
reducing the amount of actual TB stored in your disk storage, also reduce the 
amount of data you send over your network.

Regards

Daniel Sparrman


Daniel Sparrman
Exist i Stockholm AB
Växel: 08-754 98 00
Fax: 08-754 97 30
daniel.sparr...@exist.se
http://www.existgruppen.se
Posthusgatan 1 761 30 NORRTÄLJE


-"ADSM: Dist Stor Manager"  skrev: - 
Till: ADSM-L@VM.MARIST.EDU
Från: Christian Svensson 
Sänt av: "ADSM: Dist Stor Manager" 
Datum: 03/15/2012 09:40
Ärende: SV: migration threads for random access pool and backups issue


Hi Amit,
After some investigation I found that TSM can only use one Process (1 drive) 
per node or one process per collocation group.
This is only when you are using Random disk. If you want to use multiple drives 
you need to change from random disk to sequential disk.

Best Regards
Christian Svensson

Cell: +46-70-325 1577
E-mail: christian.svens...@cristie.se
CPU2TSM Support: http://www.cristie.se/cpu2tsm-supported-platforms



Från: Christian Svensson
Skickat: den 14 mars 2012 07:07
Till: ADSM: Dist Stor Manager
Ämne: SV: migration threads for random access pool and backups issue

Hi Wanda,
Glad to see you at Pulse, but I got a similar problem here in Sweden where the 
migration only use one process during migration.
I was thinking of to open a PMR to understand why it only use one drive.

Best Regards
Christian Svensson

Cell: +46-70-325 1577
E-mail: christian.svens...@cristie.se
CPU2TSM Support: http://www.cristie.se/cpu2tsm-supported-platforms



Från: Prather, Wanda [wprat...@icfi.com]
Skickat: den 13 mars 2012 21:36
Till: ADSM-L@VM.MARIST.EDU
Ämne: Re: migration threads for random access pool and backups issue

Since your disk pool is much smaller than what you are backing up, plus you 
have plenty of tape drives, it doesn't make much sense to send those 3 large 
filespaces to the diskpool.

Create a new management class, send each of those 3 filespaces directly to the 
tape drives, bypassing the disk pool.

W

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of amit 
jain
Sent: Sunday, March 11, 2012 12:09 AM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] migration threads for random access pool and backups issue

Hi,

I have to backup large data ~ 300TB and have small DISK POOL SIZE 500GB. I have 
3 filespaces, backing up on single node. I am triggering multiple dsmc,  
dividing the filespace  on directories. I have 15 E06 Tape drives and can 
allocate 5 drives for this backup.

If I run multiple dsmc sessions the, server starts only one migration process 
and one tape mount.
As per ADMIN GUIDE the Migration for Random Access is "Performed by node.
Migration from random-access pools can use multiple processes."

My Question:
1. On Random Access pools multiple migration sessions can be generated, only if 
we

Migration of Windows TSM server from 6.1/32bit to 6.2/64bit

2012-03-15 Thread TSM
Hello,

is there a description, or does anyone have experience of, how to migrate a TSM server
from TSM 6.1 on Windows 2008 32bit
to TSM 6.2 on Windows 2008 R2 64bit?

What are the steps?
Is it less risky to use separate servers?

Thanks in advance
Andreas.


SV: migration threads for random access pool and backups issue

2012-03-15 Thread Daniel Sparrman
Hi 

A migration will always use only 1 process per node, independent of the
storage media. So if you migrate disk > tape it's 1 process per node, and if
you do tape > tape it's still only 1 process per node.

As for the original poster, you say that you need to back up 300TB of data.
I'm guessing this is your entire environment and not a single server. However,
you describe the process of backing up a fileserver. How large is this server?

Generally speaking, I'd say that a 500GB diskpool is way too small to handle a
300TB environment. Originally, you're supposed to size your diskpool to hold 1
day of incremental backups. Since the change rate is usually around 5-10%, that
would be 15-30TB of disk to hold 1 day.

However, I recommend sending your database/mail/application backups (actually,
all large chunks of data) straight to tape, since there is no performance
benefit in sending them to a random-access pool (as long as you can stream the
data continuously, the tape drive performance should be good enough).

Sending all of these large chunks straight to tape should reduce the amount of
disk you need to hold 1 day of incremental backups from your fileservers. 500GB
is still going to be too small though; I would expect a couple of TB of disk to
handle the 1 day of incrementals.

Other options to reduce the disk storage needed on your TSM server are to
introduce client-side deduplication or compression. Besides reducing the number
of TB actually stored in your disk storage, that will also reduce the amount of
data you send over your network.

Regards

Daniel Sparrman


Daniel Sparrman
Exist i Stockholm AB
Switchboard: 08-754 98 00
Fax: 08-754 97 30
daniel.sparr...@exist.se
http://www.existgruppen.se
Posthusgatan 1 761 30 NORRTÄLJE
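
As a minimal sketch of the client-side deduplication / compression option
mentioned above (the stanza name, address, port and node name are invented for
illustration, and client-side dedup also assumes a dedup-enabled FILE pool on
the server):

   * dsm.sys (Unix/Linux client), hypothetical server stanza
   SErvername        TSMSRV1
      COMMMethod         TCPip
      TCPServeraddress   tsmsrv1.example.com
      TCPPort            1500
      COMPRESSIon        Yes
      DEDUPLication      Yes

   * on the server, allow the node to deduplicate on the client side
   update node MYNODE deduplication=clientorserver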


-"ADSM: Dist Stor Manager"  skrev: - 
Till: ADSM-L@VM.MARIST.EDU
Från: Christian Svensson 
Sänt av: "ADSM: Dist Stor Manager" 
Datum: 03/15/2012 09:40
Ärende: SV: migration threads for random access pool and backups issue


Hi Amit,
After some investigation I found that TSM can only use one process (1 drive)
per node, or one process per collocation group.
This only applies when you are using random-access disk. If you want to use
multiple drives you need to change from random-access disk to sequential disk.

Best Regards
Christian Svensson

Cell: +46-70-325 1577
E-mail: christian.svens...@cristie.se
CPU2TSM Support: http://www.cristie.se/cpu2tsm-supported-platforms



From: Christian Svensson
Sent: 14 March 2012 07:07
To: ADSM: Dist Stor Manager
Subject: SV: migration threads for random access pool and backups issue

Hi Wanda,
It was great to see you at Pulse, but I have a similar problem here in Sweden
where migration only uses one process.
I was thinking of opening a PMR to understand why it only uses one drive.

Best Regards
Christian Svensson

Cell: +46-70-325 1577
E-mail: christian.svens...@cristie.se
CPU2TSM Support: http://www.cristie.se/cpu2tsm-supported-platforms



From: Prather, Wanda [wprat...@icfi.com]
Sent: 13 March 2012 21:36
To: ADSM-L@VM.MARIST.EDU
Subject: Re: migration threads for random access pool and backups issue

Since your disk pool is much smaller than what you are backing up, plus you 
have plenty of tape drives, it doesn't make much sense to send those 3 large 
filespaces to the diskpool.

Create a new management class, send each of those 3 filespaces directly to the 
tape drives, bypassing the disk pool.

W

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of amit 
jain
Sent: Sunday, March 11, 2012 12:09 AM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] migration threads for random access pool and backups issue

Hi,

I have to back up a large amount of data (~300TB) and have a small disk pool
(500GB). I have 3 filespaces, backing up to a single node. I am triggering
multiple dsmc sessions, dividing the filespaces by directory. I have 15 E06
tape drives and can allocate 5 drives for this backup.

If I run multiple dsmc sessions, the server starts only one migration process
and one tape mount.
As per the ADMIN GUIDE, migration for random access is "Performed by node.
Migration from random-access pools can use multiple processes."

My Question:
1. On random-access pools, can multiple migration processes be generated only
if we back up with multiple nodes? Is my understanding correct, or is there any
way to increase the number of tape mounts?

2. Is the only way to speed this up with current resources to back up to a FILE
device class, so that I can have multiple tape mounts?

3. Any suggestions to speed up this backup?

Server and client both are on Linux, running TSM version 6.2.2

Any suggestions are welcome.

Thanks
Amit

SV: migration threads for random access pool and backups issue

2012-03-15 Thread Christian Svensson
Hi Amit,
After some investigation I found that TSM can only use one process (1 drive)
per node, or one process per collocation group.
This only applies when you are using random-access disk. If you want to use
multiple drives you need to change from random-access disk to sequential disk.

Best Regards
Christian Svensson

Cell: +46-70-325 1577
E-mail: christian.svens...@cristie.se
CPU2TSM Support: http://www.cristie.se/cpu2tsm-supported-platforms
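
As a rough sketch of that random-to-sequential change (the device class,
directory and pool names are invented for illustration, not taken from this
thread):

   define devclass BIGFILE devtype=file maxcapacity=50G mountlimit=20 directory=/tsmfile
   define stgpool FILEPOOL BIGFILE maxscratch=200 nextstgpool=TAPEPOOL
   update stgpool FILEPOOL highmig=70 lowmig=30 migprocess=5

With a sequential-access FILE pool, migration can run several processes
(MIGPROCESS), each able to mount its own tape on the next pool.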



From: Christian Svensson
Sent: 14 March 2012 07:07
To: ADSM: Dist Stor Manager
Subject: SV: migration threads for random access pool and backups issue

Hi Wanda,
It was great to see you at Pulse, but I have a similar problem here in Sweden
where migration only uses one process.
I was thinking of opening a PMR to understand why it only uses one drive.

Best Regards
Christian Svensson

Cell: +46-70-325 1577
E-mail: christian.svens...@cristie.se
CPU2TSM Support: http://www.cristie.se/cpu2tsm-supported-platforms



From: Prather, Wanda [wprat...@icfi.com]
Sent: 13 March 2012 21:36
To: ADSM-L@VM.MARIST.EDU
Subject: Re: migration threads for random access pool and backups issue

Since your disk pool is much smaller than what you are backing up, plus you 
have plenty of tape drives, it doesn't make much sense to send those 3 large 
filespaces to the diskpool.

Create a new management class, send each of those 3 filespaces directly to the 
tape drives, bypassing the disk pool.

W

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of amit 
jain
Sent: Sunday, March 11, 2012 12:09 AM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] migration threads for random access pool and backups issue

Hi,

I have to back up a large amount of data (~300TB) and have a small disk pool
(500GB). I have 3 filespaces, backing up to a single node. I am triggering
multiple dsmc sessions, dividing the filespaces by directory. I have 15 E06
tape drives and can allocate 5 drives for this backup.

If I run multiple dsmc sessions, the server starts only one migration process
and one tape mount.
As per the ADMIN GUIDE, migration for random access is "Performed by node.
Migration from random-access pools can use multiple processes."

My Question:
1. On random-access pools, can multiple migration processes be generated only
if we back up with multiple nodes? Is my understanding correct, or is there any
way to increase the number of tape mounts?

2. Is the only way to speed this up with current resources to back up to a FILE
device class, so that I can have multiple tape mounts?

3. Any suggestions to speed up this backup?

Server and client both are on Linux, running TSM version 6.2.2

Any suggestions are welcome.

Thanks
Amit


SV: migration threads for random access pool and backups issue

2012-03-13 Thread Christian Svensson
Hi Wanda,
It was great to see you at Pulse, but I have a similar problem here in Sweden
where migration only uses one process.
I was thinking of opening a PMR to understand why it only uses one drive.

Best Regards
Christian Svensson

Cell: +46-70-325 1577
E-mail: christian.svens...@cristie.se
CPU2TSM Support: http://www.cristie.se/cpu2tsm-supported-platforms



From: Prather, Wanda [wprat...@icfi.com]
Sent: 13 March 2012 21:36
To: ADSM-L@VM.MARIST.EDU
Subject: Re: migration threads for random access pool and backups issue

Since your disk pool is much smaller than what you are backing up, plus you 
have plenty of tape drives, it doesn't make much sense to send those 3 large 
filespaces to the diskpool.

Create a new management class, send each of those 3 filespaces directly to the 
tape drives, bypassing the disk pool.

W

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of amit 
jain
Sent: Sunday, March 11, 2012 12:09 AM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] migration threads for random access pool and backups issue

Hi,

I have to back up a large amount of data (~300TB) and have a small disk pool
(500GB). I have 3 filespaces, backing up to a single node. I am triggering
multiple dsmc sessions, dividing the filespaces by directory. I have 15 E06
tape drives and can allocate 5 drives for this backup.

If I run multiple dsmc sessions, the server starts only one migration process
and one tape mount.
As per the ADMIN GUIDE, migration for random access is "Performed by node.
Migration from random-access pools can use multiple processes."

My Question:
1. On random-access pools, can multiple migration processes be generated only
if we back up with multiple nodes? Is my understanding correct, or is there any
way to increase the number of tape mounts?

2. Is the only way to speed this up with current resources to back up to a FILE
device class, so that I can have multiple tape mounts?

3. Any suggestions to speed up this backup?

Server and client both are on Linux, running TSM version 6.2.2

Any suggestions are welcome.

Thanks
Amit


Re: migration threads for random access pool and backups issue

2012-03-13 Thread Prather, Wanda
Since your disk pool is much smaller than what you are backing up, plus you 
have plenty of tape drives, it doesn't make much sense to send those 3 large 
filespaces to the diskpool.  

Create a new management class, send each of those 3 filespaces directly to the 
tape drives, bypassing the disk pool.

W
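
A minimal sketch of that management class approach (the domain, policy set,
class, pool and filespace names below are invented examples, not taken from
this thread):

   define mgmtclass STANDARD STANDARD DIRECTTOTAPE
   define copygroup STANDARD STANDARD DIRECTTOTAPE type=backup destination=TAPEPOOL
   validate policyset STANDARD STANDARD
   activate policyset STANDARD STANDARD

   * client options (dsm.sys / dsm.opt): bind the large filespaces to the new class
   include /bigfs1/.../*  DIRECTTOTAPE
   include /bigfs2/.../*  DIRECTTOTAPE
   include /bigfs3/.../*  DIRECTTOTAPE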

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of amit 
jain
Sent: Sunday, March 11, 2012 12:09 AM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] migration threads for random access pool and backups issue

Hi,

I have to back up a large amount of data (~300TB) and have a small disk pool
(500GB). I have 3 filespaces, backing up to a single node. I am triggering
multiple dsmc sessions, dividing the filespaces by directory. I have 15 E06
tape drives and can allocate 5 drives for this backup.

If I run multiple dsmc sessions, the server starts only one migration process
and one tape mount.
As per the ADMIN GUIDE, migration for random access is "Performed by node.
Migration from random-access pools can use multiple processes."

My Question:
1. On random-access pools, can multiple migration processes be generated only
if we back up with multiple nodes? Is my understanding correct, or is there any
way to increase the number of tape mounts?

2. Is the only way to speed this up with current resources to back up to a FILE
device class, so that I can have multiple tape mounts?

3. Any suggestions to speed up this backup?

Server and client both are on Linux, running TSM version 6.2.2

Any suggestions are welcome.

Thanks
Amit


Re: Controling FILLING tapes at end of Migration

2012-03-12 Thread Steven Langdale
I'm assuming the number of filling tapes isn't increasing; it's higher than
you want, but stable. Yes?

As I'm sure you are aware, this is "normal" behavior.  I've also been there
and tried to fight it.  In that instance I ran a job after the migration
that did a MOVE DATA on the smallest volumes.

You're fighting against something you're not going to win, though; they
will keep coming back.

If you need collocation groups and you are in this situation, the best
approach is to procure extra library capacity.  Easier said than done I
know, but a MUCH easier approach.

Steven
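
A sketch of that kind of post-migration cleanup (the pool name and the 5%
cut-off are arbitrary examples):

   select volume_name, pct_utilized from volumes -
      where stgpool_name='TAPEPOOL' and status='FILLING' and pct_utilized < 5
   /* then, for each volume returned */
   move data VOLNAME stgpool=TAPEPOOL wait=yes

Moving the data back into the same pool consolidates the near-empty FILLING
volumes onto fewer tapes and returns the emptied ones to scratch.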

On 12 March 2012 23:32, Roger Deschner  wrote:

> I'm having a problem with FILLING tapes multiplying out of control,
> which have very little data on them.
>
> It appears that this happens at the end of migration, when that one last
> collocation group is being migrated, and all others have finished. TSM
> sees empty tape drives, and less than the defined number of migration
> processes running, and it decides it can use the drives, so it mounts
> fresh scratch tapes to fill up all the drives. This only happens when
> the remaining data to be migrated belongs to more than one node - but
> that's still fairly often. The result is a large number of FILLING tapes
> that contain almost no data. A rough formula for these almost-empty
> wasted filling tapes is:
>
>  (number of migration processes - 1) * number of collocation groups
>
> Is there a way, short of combining collocation groups, to deal with this
> problem? We've got a very full tape library, and I'm looking for any
> obvious ways to get more data onto the same number of tapes. Ideally,
> I'd like there to be only one FILLING tape per collocation group.
>
> Roger Deschner  University of Illinois at Chicago rog...@uic.edu
>   Academic Computing & Communications Center ==I have
> not lost my mind -- it is backed up on tape somewhere.=
>


Re: Controling FILLING tapes at end of Migration

2012-03-12 Thread Grant Street

Is there a way, short of combining collocation groups, to deal with this
problem?

Some options:
1. Reduce the number of migration processes.
2. Increase the time between migrations, e.g. rather than daily do weekly.
3. Do reclamation after migration.

AFAIK the combination of migration processes and collocation groups is
what gives you this behavior. If these two are not negotiable then I'd
recommend option 3 above (see the sketch after this message).

Someone else might have a trickier way.

Grant
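
A rough sketch of options 1 and 3 above (pool names, threshold and duration
are illustrative only):

   update stgpool DISKPOOL migprocess=1
   /* after migration completes */
   reclaim stgpool TAPEPOOL threshold=60 duration=120

Fewer migration processes means fewer simultaneous output tapes per
collocation group, and a post-migration reclamation folds the small FILLING
volumes back together.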


Controling FILLING tapes at end of Migration

2012-03-12 Thread Roger Deschner
I'm having a problem with FILLING tapes multiplying out of control,
which have very little data on them.

It appears that this happens at the end of migration, when that one last
collocation group is being migrated, and all others have finished. TSM
sees empty tape drives, and less than the defined number of migration
processes running, and it decides it can use the drives, so it mounts
fresh scratch tapes to fill up all the drives. This only happens when
the remaining data to be migrated belongs to more than one node - but
that's still fairly often. The result is a large number of FILLING tapes
that contain almost no data. A rough formula for these almost-empty
wasted filling tapes is:

 (number of migration processes - 1) * number of collocation groups

Is there a way, short of combining collocation groups, to deal with this
problem? We've got a very full tape library, and I'm looking for any
obvious ways to get more data onto the same number of tapes. Ideally,
I'd like there to be only one FILLING tape per collocation group.

Roger Deschner  University of Illinois at Chicago rog...@uic.edu
   Academic Computing & Communications Center ==I have
not lost my mind -- it is backed up on tape somewhere.=


migration threads for random access pool and backups issue

2012-03-10 Thread amit jain
Hi,

I have to back up a large amount of data (~300TB) and have a small disk pool
(500GB). I have 3 filespaces, backing up to a single node. I am triggering
multiple dsmc sessions, dividing the filespaces by directory. I have 15 E06
tape drives and can allocate 5 drives for this backup.

If I run multiple dsmc sessions, the server starts only one migration process
and one tape mount.
As per the ADMIN GUIDE, migration for random access is "Performed by node.
Migration from random-access pools can use multiple processes."

My Question:
1. On random-access pools, can multiple migration processes be generated only
if we back up with multiple nodes? Is my understanding correct, or is there any
way to increase the number of tape mounts?

2. Is the only way to speed this up with current resources to back up to a FILE
device class, so that I can have multiple tape mounts?

3. Any suggestions to speed up this backup?

Server and client both are on Linux, running TSM version 6.2.2

Any suggestions are welcome.

Thanks
Amit


Re: TSM Migration Related

2011-12-06 Thread Roger Deschner
This has been the migration strategy we adopted, and are right now in
the middle of. New server machine, new TSM version, the client nodes
make fresh, new backups spread over a period of time. The old system
gradually expires away until you can get rid of it.

Run the new server on 6.2.3, instead of 6.3.0, for better-supported
interoperation with existing V5 servers. V6.3 with V5.5 is not supported.
However V6.2.3 does support working with V5.5, and it will be supported
for a while. You could do this migration to V6.2.3 first, and then
easily upgrade to the very latest version later.

Roger Deschner  University of Illinois at Chicago rog...@uic.edu
"You know you're a geek when... You try to shoo a fly away from the
monitor with your cursor. That just happened to me. It was scary."

On Tue, 6 Dec 2011, Stefan Folkerts wrote:

>
>But there also might be another route: let most of the data die on the
>old system, migrate via server-to-server what needs to be saved, and start
>over. That way you don't have that much work on the old environment (you do
>need to upgrade TSM) and it's not such a big-bang migration.
>However you do this, it's going to take some solid preparation and
>planning.


Re: TSM Migration Related

2011-12-06 Thread Stefan Folkerts
I agree with Xav; this is something you want to do with a partner that has
done this kind of stuff before.
There are all sorts of things that need to be sorted: if you really need to
migrate the TSM DB you need to upgrade to 5.4 or 5.5, and then you have the
issue of LTO-5 not being able to read LTO-1 tapes, so you need to (for
example) connect both libraries, make new storage pools, etc.

But there also might be another route: let most of the data die on the
old system, migrate via server-to-server what needs to be saved, and start
over. That way you don't have that much work on the old environment (you do
need to upgrade TSM) and it's not such a big-bang migration.
However you do this, it's going to take some solid preparation and
planning.

Make sure you get somebody that has good TSM 5, TSM 6 and LTO tape skills.
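
For the server-to-server route, a minimal sketch (server names, password,
addresses and the node name are placeholders; both servers need to be at
compatible levels):

   /* on the old (source) server */
   define server NEWTSM serverpassword=secret hladdress=newtsm.example.com lladdress=1500
   export node NODE1 filedata=all toserver=NEWTSM mergefilespaces=yes

EXPORT NODE ... TOSERVER moves the metadata and data over the server-to-server
connection, so nothing has to go through media export/import.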


On Sat, Dec 3, 2011 at 7:50 AM, venkatb  wrote:

> Hi,
>
> I have a TSM 5.2 server running on AIX, with an IBM TS3500 library with 6
> LTO-1 drives connected to it. The library capacity is 253 tapes.
>
> Now we want to move to the latest TSM version, with a new library with LTO-5
> drives.
>
> Once the new setup is up and running, I want to migrate the DB to the latest
> version, and also move all my data from LTO-1 cartridges to LTO-5 cartridges,
>
> so that I can retire all my LTO-1 tapes (around 1500).
>
>
> 1) Can someone please suggest how I can do this?
>
> 2) How can I find out the total size of the data on the tapes since day 1?
>
> 3) Are these steps correct?
> first I will define a disk pool,
> then move data from the LTO-1 tapes to the disk pool,
> then install the new version and migrate the DB,
> then move data from the disk pool to the new LTO-5 pool
>
> 4) Before migrating I want to delete some data from the tapes, e.g. all data
> of node1 from before 2009. Is this possible?
>
> 5) Usually when a stgpool runs out of space we check out 100% full tapes from
> the library, check in free tapes and assign them to the stgpool. Over the
> last 5 years almost 1500 full tapes have ended up in boxes; when a restore
> comes, TSM asks for the required volumes to be checked in, we check them in,
> and once it finishes we remove them again. During the migration this
> check-in/check-out task will become difficult. How can I shorten this
> procedure?
>
> 6) The LTO-5 media box usually says 3TB (2:1 compression); does TSM use this
> compression ratio?
>
>
>
> Thankyou
>
> Venkatesh B
>
>


Re: TSM Migration Related

2011-12-03 Thread Xav Paice
On Fri, 2011-12-02 at 22:50 -0800, venkatb wrote:
> Hi,
>
> I have a TSM 5.2 server running on AIX, with an IBM TS3500 library with 6
> LTO-1 drives connected to it. The library capacity is 253 tapes.
>
> Now we want to move to the latest TSM version, with a new library with LTO-5
> drives.
>
> Once the new setup is up and running, I want to migrate the DB to the latest
> version, and also move all my data from LTO-1 cartridges to LTO-5 cartridges,
>
> so that I can retire all my LTO-1 tapes (around 1500).
>
>
> 1) Can someone please suggest how I can do this?
>
> 2) How can I find out the total size of the data on the tapes since day 1?
>
> 3) Are these steps correct?
>   first I will define a disk pool,
>   then move data from the LTO-1 tapes to the disk pool,
>   then install the new version and migrate the DB,
>   then move data from the disk pool to the new LTO-5 pool
>
> 4) Before migrating I want to delete some data from the tapes, e.g. all data
> of node1 from before 2009. Is this possible?
>
> 5) Usually when a stgpool runs out of space we check out 100% full tapes from
> the library, check in free tapes and assign them to the stgpool. Over the
> last 5 years almost 1500 full tapes have ended up in boxes; when a restore
> comes, TSM asks for the required volumes to be checked in, we check them in,
> and once it finishes we remove them again. During the migration this
> check-in/check-out task will become difficult. How can I shorten this
> procedure?
>
> 6) The LTO-5 media box usually says 3TB (2:1 compression); does TSM use this
> compression ratio?
>
>

It sounds like you have a fairly lengthy list of items to address there,
not least being on such an old version of TSM but also how you are
managing your storage.  In my opinion, you need someone to come and
visit you and assess your environment then come up with a plan of action
that will get you to where you need to be.  A local business partner
that knows what they are doing will be able to do that and give you a
good idea (and price) for the hardware you actually need rather than
some sales pitch.

You might be able to upgrade the existing 3584, adding new drives and
partitioning, depending on the support status.  I suspect you will need
at least a new server for TSM, and might want to select what you're
doing about platform choice etc while thinking about how to migrate
across.


TSM Migration Related

2011-12-03 Thread venkatb
Hi,

I have a TSM 5.2 server running on AIX, with an IBM TS3500 library with 6
LTO-1 drives connected to it. The library capacity is 253 tapes.

Now we want to move to the latest TSM version, with a new library with LTO-5
drives.

Once the new setup is up and running, I want to migrate the DB to the latest
version, and also move all my data from LTO-1 cartridges to LTO-5 cartridges,

so that I can retire all my LTO-1 tapes (around 1500).


1) Can someone please suggest how I can do this?

2) How can I find out the total size of the data on the tapes since day 1?

3) Are these steps correct?
first I will define a disk pool,
then move data from the LTO-1 tapes to the disk pool,
then install the new version and migrate the DB,
then move data from the disk pool to the new LTO-5 pool

4) Before migrating I want to delete some data from the tapes, e.g. all data of
node1 from before 2009. Is this possible?

5) Usually when a stgpool runs out of space we check out 100% full tapes from
the library, check in free tapes and assign them to the stgpool. Over the last
5 years almost 1500 full tapes have ended up in boxes; when a restore comes,
TSM asks for the required volumes to be checked in, we check them in, and once
it finishes we remove them again. During the migration this check-in/check-out
task will become difficult. How can I shorten this procedure?

6) The LTO-5 media box usually says 3TB (2:1 compression); does TSM use this
compression ratio?



Thankyou

Venkatesh B



Ang: Re: Ang: Re: Tsm v6 migration question

2011-03-25 Thread Daniel Sparrman
Hi Gregory


My comment was to this previous comment:

"But, the most important with the 6.x is the architecture of database files.
You have to reserve one disk for the database, on for the actlog and one other 
for the archlog.
Having this disks on the same axe may result poor efficientie"

What I meant was that it has never been recommended to share the same array /
spindle / filesystem between logs and database.

If you go back to the TSM v5.5 performance guide, you will notice the
recommendation to separate log and DB onto separate spindles / filesystems /
arrays.

As for DB2, yes, DB2 is new for v6, but the performance setup isn't.

My guess is that if your setup crashed after 2 months, whoever implemented your
solution probably shared filesystem space between the database and some other
part of TSM, such as the archivelog or log. Since a) the database can now grow
"on its own", I wouldn't place it with something else since you might run out
of space, and b) I wouldn't share archivelog space with something else either,
since it also "grows on its own".

I can't imagine that your TSM implementation crashed just because the person
who implemented it placed the database on a single filesystem. That is
technically possible, but not something I would recommend, due to performance
issues.

Best Regards

Daniel Sparrman

-"ADSM: Dist Stor Manager"  skrev: -

Till: ADSM-L@VM.MARIST.EDU
Från: molin gregory 
Sänt av: "ADSM: Dist Stor Manager" 
Datum: 03/25/2011 14:47
Ärende: Re: Ang: Re: Tsm v6 migration question

Hello Daniel,

You say: 'That's not really something new to TSM v6'.

DB2 is now the DBMS for TSM, and while DB2 improves the database, DB2 has
system requirements that were not the same in the previous versions.

My experience:
The installer who set up the solution on our site did not implement the
recommendations related to DB2.
TSM crashed after two months.


Best Regards
Cordialement,
Grégory Molin
Tel : 0141628162
gregory.mo...@afnor.org

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Daniel
Sparrman
Sent: Thursday, 24 March 2011 10:17
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] Ang: Re: Tsm v6 migration question

That's not really something new to TSM v6. For performance reasons, it has
always been recommended to separate database and log on different spindles.

As with v6, at least for larger implementations, the same rule of thumb
applies: try separating your DB paths across at least 4 different
arrays/spindles, preferably 8 or more. Too many will not increase performance,
but rather reduce it. Put your active log / log mirror on spindles of their
own, as well as your archivelog / failover paths.

Watch out for sharing the archivelog with something else (like storage), since
the database backup trigger looks at the % used on the filesystem. So putting
storage pools and then your archivelog in the same filesystem probably isn't
the best of ideas.

Best Regards

Daniel Sparrman
-"ADSM: Dist Stor Manager"  skrev: -

Till: ADSM-L@VM.MARIST.EDU
Från: molin gregory 
Sänt av: "ADSM: Dist Stor Manager" 
Datum: 03/24/2011 11:06
Ärende: Re: Tsm v6 migration question

Hello Gary,

In effect, the database grows between 10 and 20% (more if new features,
like dedup, are installed).
But the most important thing with 6.x is the layout of the database files.
You have to reserve one disk for the database, one for the actlog and another
for the archlog.
Having these disks on the same spindle may result in poor efficiency.

Cordialement,
Grégory Molin
Tel : 0141628162
gregory.mo...@afnor.org

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Lee,
Gary D.
Sent: Wednesday, 23 March 2011 18:44
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] Tsm v6 migration question

We are preparing to migrate our tsm v5,5,4 system to 6.2.2.


Are there any rules of thumb for relating db size in v5.5.4 with what we will 
need under v6?

I assume it will be somewhat larger being a truly relational structure.

Thanks for any help.


Gary Lee
Senior System Programmer
Ball State University
phone: 765-285-1310

 

"ATTENTION.

Ce message et les pièces jointes sont confidentiels et établis à l'attention 
exclusive de leur destinataire (aux adresses spécifiques auxquelles il a été 
adressé). Si vous n'êtes pas le destinataire de ce message, vous devez 
immédiatement en avertir l'expéditeur et supprimer ce message et les pièces 
jointes de votre système. 

This message and any attachments are confidential and intended to be received 
only by the addressee. If you are not the intended recipient, please notify 
immediately the sender by reply and delete the message and any attachments from 
your system. "

"ATTENTION.

Ce message et les pièces jointes sont confidentiels et établis à l'atte

Re: Ang: Re: Tsm v6 migration question

2011-03-25 Thread molin gregory
Hello Daniel,

You say: 'That's not really something new to TSM v6'.

DB2 is now the DBMS for TSM, and while DB2 improves the database, DB2 has
system requirements that were not the same in the previous versions.

My experience:
The installer who set up the solution on our site did not implement the
recommendations related to DB2.
TSM crashed after two months.


Best Regards
Cordialement,
Grégory Molin
Tel : 0141628162
gregory.mo...@afnor.org

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Daniel
Sparrman
Sent: Thursday, 24 March 2011 10:17
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] Ang: Re: Tsm v6 migration question

That's not really something new to TSM v6. For performance reasons, it has
always been recommended to separate database and log on different spindles.

As with v6, at least for larger implementations, the same rule of thumb
applies: try separating your DB paths across at least 4 different
arrays/spindles, preferably 8 or more. Too many will not increase performance,
but rather reduce it. Put your active log / log mirror on spindles of their
own, as well as your archivelog / failover paths.

Watch out for sharing the archivelog with something else (like storage), since
the database backup trigger looks at the % used on the filesystem. So putting
storage pools and then your archivelog in the same filesystem probably isn't
the best of ideas.

Best Regards

Daniel Sparrman
-"ADSM: Dist Stor Manager"  skrev: -

Till: ADSM-L@VM.MARIST.EDU
Från: molin gregory 
Sänt av: "ADSM: Dist Stor Manager" 
Datum: 03/24/2011 11:06
Ärende: Re: Tsm v6 migration question

Hello Gary,

In effect, the database grows between 10 and 20% (more if new features,
like dedup, are installed).
But the most important thing with 6.x is the layout of the database files.
You have to reserve one disk for the database, one for the actlog and another
for the archlog.
Having these disks on the same spindle may result in poor efficiency.

Cordialement,
Grégory Molin
Tel : 0141628162
gregory.mo...@afnor.org

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Lee,
Gary D.
Sent: Wednesday, 23 March 2011 18:44
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] Tsm v6 migration question

We are preparing to migrate our tsm v5,5,4 system to 6.2.2.


Are there any rules of thumb for relating db size in v5.5.4 with what we will 
need under v6?

I assume it will be somewhat larger being a truly relational structure.

Thanks for any help.


Gary Lee
Senior System Programmer
Ball State University
phone: 765-285-1310

 

"ATTENTION.

Ce message et les pièces jointes sont confidentiels et établis à l'attention 
exclusive de leur destinataire (aux adresses spécifiques auxquelles il a été 
adressé). Si vous n'êtes pas le destinataire de ce message, vous devez 
immédiatement en avertir l'expéditeur et supprimer ce message et les pièces 
jointes de votre système. 

This message and any attachments are confidential and intended to be received 
only by the addressee. If you are not the intended recipient, please notify 
immediately the sender by reply and delete the message and any attachments from 
your system. "

"ATTENTION.

Ce message et les pièces jointes sont confidentiels et établis à l'attention 
exclusive de leur destinataire (aux adresses spécifiques auxquelles il a été 
adressé). Si vous n'êtes pas le destinataire de ce message, vous devez 
immédiatement en avertir l'expéditeur et supprimer ce message et les pièces 
jointes de votre système. 

This message and any attachments are confidential and intended to be received 
only by the addressee. If you are not the intended recipient, please notify 
immediately the sender by reply and delete the message and any attachments from 
your system. "


Ang: Re: Tsm v6 migration question

2011-03-24 Thread Daniel Sparrman
That's not really something new to TSM v6. For performance reasons, it has
always been recommended to separate database and log on different spindles.

As with v6, at least for larger implementations, the same rule of thumb
applies: try separating your DB paths across at least 4 different
arrays/spindles, preferably 8 or more. Too many will not increase performance,
but rather reduce it. Put your active log / log mirror on spindles of their
own, as well as your archivelog / failover paths.

Watch out for sharing the archivelog with something else (like storage), since
the database backup trigger looks at the % used on the filesystem. So putting
storage pools and then your archivelog in the same filesystem probably isn't
the best of ideas.

Best Regards

Daniel Sparrman
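
To make that layout concrete, a minimal sketch of formatting a v6 instance
across several DB directories with the logs kept separate (all paths and sizes
are invented for illustration):

   dsmserv format dbdir=/tsmdb01,/tsmdb02,/tsmdb03,/tsmdb04 \
      activelogdir=/tsmactlog activelogsize=32768 mirrorlogdir=/tsmlogmirror \
      archlogdir=/tsmarchlog archfailoverlogdir=/tsmarchfail

Each dbdir would ideally sit on its own array/spindles, and the active log,
mirror log and archive log directories on filesystems that nothing else
(especially storage pools) shares.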
-"ADSM: Dist Stor Manager"  skrev: -

Till: ADSM-L@VM.MARIST.EDU
Från: molin gregory 
Sänt av: "ADSM: Dist Stor Manager" 
Datum: 03/24/2011 11:06
Ärende: Re: Tsm v6 migration question

Hello Gary,

In effect, the database grows between 10 and 20% (more if new features,
like dedup, are installed).
But the most important thing with 6.x is the layout of the database files.
You have to reserve one disk for the database, one for the actlog and another
for the archlog.
Having these disks on the same spindle may result in poor efficiency.

Cordialement,
Grégory Molin
Tel : 0141628162
gregory.mo...@afnor.org

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Lee,
Gary D.
Sent: Wednesday, 23 March 2011 18:44
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] Tsm v6 migration question

We are preparing to migrate our tsm v5,5,4 system to 6.2.2.


Are there any rules of thumb for relating db size in v5.5.4 with what we will 
need under v6?

I assume it will be somewhat larger being a truly relational structure.

Thanks for any help.


Gary Lee
Senior System Programmer
Ball State University
phone: 765-285-1310

 

"ATTENTION.

Ce message et les pièces jointes sont confidentiels et établis à l'attention 
exclusive de leur destinataire (aux adresses spécifiques auxquelles il a été 
adressé). Si vous n'êtes pas le destinataire de ce message, vous devez 
immédiatement en avertir l'expéditeur et supprimer ce message et les pièces 
jointes de votre système. 

This message and any attachments are confidential and intended to be received 
only by the addressee. If you are not the intended recipient, please notify 
immediately the sender by reply and delete the message and any attachments from 
your system. "


Re: Tsm v6 migration question

2011-03-24 Thread molin gregory
Hello Gary,

In effect, the database grows between 10 and 20% (more if new features,
like dedup, are installed).
But the most important thing with 6.x is the layout of the database files.
You have to reserve one disk for the database, one for the actlog and another
for the archlog.
Having these disks on the same spindle may result in poor efficiency.

Cordialement,
Grégory Molin
Tel : 0141628162
gregory.mo...@afnor.org

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Lee,
Gary D.
Sent: Wednesday, 23 March 2011 18:44
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] Tsm v6 migration question

We are preparing to migrate our tsm v5,5,4 system to 6.2.2.


Are there any rules of thumb for relating db size in v5.5.4 with what we will 
need under v6?

I assume it will be somewhat larger being a truly relational structure.

Thanks for any help.


Gary Lee
Senior System Programmer
Ball State University
phone: 765-285-1310

 

"ATTENTION.

Ce message et les pièces jointes sont confidentiels et établis à l'attention 
exclusive de leur destinataire (aux adresses spécifiques auxquelles il a été 
adressé). Si vous n'êtes pas le destinataire de ce message, vous devez 
immédiatement en avertir l'expéditeur et supprimer ce message et les pièces 
jointes de votre système. 

This message and any attachments are confidential and intended to be received 
only by the addressee. If you are not the intended recipient, please notify 
immediately the sender by reply and delete the message and any attachments from 
your system. "


Re: Tsm v6 migration question

2011-03-23 Thread Anders Räntilä
My experience from the latest upgrades is that the database will
grow by about 10% when upgrading from 5.5 to 6.2.2.2.

2 days ago I upgraded a 5.5 server to 6.2.2.2 and this is what happened
with the database size:

**
*** ---> Q DB F=D from TSM 5.5
**

Available Space (MB): 120,000
Assigned Capacity (MB): 120,000
Pct Util: 74.2
Max. Pct Util: 74.2

This is 120GB * 74.2% = 89GB used

**
*** After the upgrade to 6.2.2.2
**

Database Name: TSMDB1
Total Size of File System (MB): 737,233
Space Used by Database (MB): 97,216

All the upgrades I've done during the last 6 months show the same pattern.

When TSM 6.1 was released IBM said the DB would grow approx. 3x (300%),
but that doesn't seem to be the case today.

Of course if you start adding features like dedup the DB will grow. I
have one installation that was 40GB in 5.5 and today is around 800GB.
This growth is mostly caused by dedup (add 1TB of DB disk and save 75TB
of stgpool disk)

 

 

-Anders Räntilä

 

=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=

RÄNKAB - Räntilä Konsult AB  

Klippingvägen 23

SE-196 32 Kungsängen

Sweden

 

Email: and...@rantila.com  

Phone: +46 701 489 431

=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=

 

 

 

 


Re: Tsm v6 migration question

2011-03-23 Thread J. Pohlmann
Gary, please take a look at

http://www-01.ibm.com/support/docview.wss?uid=swg21421060&myns=swgtiv&mynp=OCSSGSG7&mync=E

I found that the actual amount of space for the DB has been about 2 X. I
generally recommend that a file system of 3 X the v5 database be set aside.

Regards,

Joerg Pohlmann

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
Lee, Gary D.
Sent: Wednesday, March 23, 2011 10:44
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] Tsm v6 migration question

We are preparing to migrate our tsm v5,5,4 system to 6.2.2.


Are there any rules of thumb for relating db size in v5.5.4 with what we
will need under v6?

I assume it will be somewhat larger being a truly relational structure.

Thanks for any help.


Gary Lee
Senior System Programmer
Ball State University
phone: 765-285-1310


Tsm v6 migration question

2011-03-23 Thread Lee, Gary D.
We are preparing to migrate our tsm v5,5,4 system to 6.2.2.


Are there any rules of thumb for relating db size in v5.5.4 with what we will 
need under v6?

I assume it will be somewhat larger being a truly relational structure.

Thanks for any help.


Gary Lee
Senior System Programmer
Ball State University
phone: 765-285-1310

 

Migration, IP address change, and New Library at the same time - UPDATE

2011-03-16 Thread Howard Coles
UPDATE:  All of this went well.  There is no issue in changing the IP of
the host, just the host name apparently.  We did the usual: checked out
all the tapes from the old library, migrated, changed the IP address,
set up the new library and checked the tapes in again.  It's all working.
Just thought I'd let anyone who was wondering, like me, know.

 

Ladies and Gents, we're going to be migrating from TSM 5.5.3 to TSM
6.2.2, along with that we'll be moving to a new Library as well.  Up to
this point I don't see any issues as we already use the combo we'll be
putting in place at this location.

The problem is we have to maintain the current TSM server's IP address.
Which means that we'll need to use a temp address on the new server
until after the migration is complete, shut down the old server, change
IPs, and then bring up the new server.  

 

My question is has anyone here done this that would have suggestions on
steps related to that?  (i.e. set ip address on new server and change
old, do migration, other way around, etc.).

 

 

See Ya'

Howard Coles Jr.

John 3:16!


Re: Migration, IP address change, and New Library at the same time

2011-02-24 Thread Lee, Gary D.
The last time I changed servers, I had our network group change the name of the 
current server, then create a dns alias using our usual dns name to point to 
the old server.
On the day of the switch, we simply pointed that alias to the new tsm server 
and all went smoothly.  I insist that all my tsm clients refer to the server 
via dns entry, not ip address.
No one even knew that I had switched servers.

If your servers are "tsmold.co.com" and "tsmnew.co.com":
Have a new DNS entry created for the host currently named tsmold.co.com, say
tsmgone.co.com. Now remove the entry tsmold.co.com and create a new alias
tsmold.co.com pointing to tsmgone.co.com.

When you switch, simply point tsmold.co.com to tsmnew.co.com. If all your 
clients use dns entries, they will automagically point to the new server.
If you need to get to the old server, just change the dsm.opt to point at 
tsmgone.co.com.
Hope this helps.



Gary Lee
Senior System Programmer
Ball State University
phone: 765-285-1310
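
A minimal sketch of the client side of that approach (the stanza name is just
an example; the point is that the client references the DNS name, never an IP):

   * dsm.sys server stanza (or the equivalent lines in a Windows dsm.opt)
   SErvername        TSMPROD
      COMMMethod         TCPip
      TCPPort            1500
      TCPServeraddress   tsmold.co.com

When the alias is repointed at the new server, every client follows it without
any option-file change.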

 
-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Howard 
Coles
Sent: Wednesday, February 23, 2011 5:27 PM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] Migration, IP address change, and New Library at the same time

Ladies and Gents, we're going to be migrating from TSM 5.5.3 to TSM
6.2.2, along with that we'll be moving to a new Library as well.  Up to
this point I don't see any issues as we already use the combo we'll be
putting in place at this location.

The problem is we have to maintain the current TSM server's IP address.
Which means that we'll need to use a temp address on the new server
until after the migration is complete, shut down the old server, change
IPs, and then bring up the new server.  

 

My question is has anyone here done this that would have suggestions on
steps related to that?  (i.e. set ip address on new server and change
old, do migration, other way around, etc.).

 

 

See Ya'

Howard Coles Jr.

John 3:16!


Migration, IP address change, and New Library at the same time

2011-02-23 Thread Howard Coles
Ladies and Gents, we're going to be migrating from TSM 5.5.3 to TSM
6.2.2, along with that we'll be moving to a new Library as well.  Up to
this point I don't see any issues as we already use the combo we'll be
putting in place at this location.

The problem is we have to maintain the current TSM server's IP address.
Which means that we'll need to use a temp address on the new server
until after the migration is complete, shut down the old server, change
IPs, and then bring up the new server.  

 

My question is has anyone here done this that would have suggestions on
steps related to that?  (i.e. set ip address on new server and change
old, do migration, other way around, etc.).

 

 

See Ya'

Howard Coles Jr.

John 3:16!


Re: Windows Cluster migration/update and TSM

2011-02-04 Thread Grigori Solonovitch
I do not think there is a simple way to upgrade a Windows 2003 32-bit cluster
to a Windows 2008 64-bit cluster. You have to be ready to create a new 2008
cluster and migrate all cluster data from 2003 (almost manually).


From: ADSM: Dist Stor Manager [ADSM-L@VM.MARIST.EDU] On Behalf Of Rhuobhe 
[tsm-fo...@backupcentral.com]
Sent: Friday, February 04, 2011 10:25 PM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] Windows Cluster migration/update and TSM

Hello!

I have some users that are upgrading their 2003 (32-bit) 2 node cluster to 2008 
(64-bit) in the very near future. They plan on using the same two machines in 
what is called an "in-place" cluster migration. Does anyone have any experience 
with this?

I understand that the local hosts will probably do a full backup (no problem),
but I am concerned that the resource groups or attributes may be modified in
some way and generate a brand new backup. IBM has made it clear what could
generate a new backup, and that is understood, but it is the Microsoft side I
am unsure about. These resource groups hold more than 20 TB [Shocked] and I
would like any advice on how to avoid this, please.

Thanks

-TSM guy from Miami



Re: Windows Cluster migration/update and TSM

2011-02-04 Thread Ben Bullock
The clustering on 2008 has changed significantly, so you are going to need to 
change things around.

I found this presentation very helpful in getting my mind wrapped around the 
steps and TSM components needing to be installed so you can use the CAD daemon 
on all instances of TSM on the cluster (locals and one or more shared TSM 
instances).

http://www-01.ibm.com/support/docview.wss?uid=swg27016259


We have a 2-node 2008 cluster with about 8 shared TSM instances (in this case
file shares) that move around during cluster events, and it works great.

Ben


From: ADSM: Dist Stor Manager [ADSM-L@VM.MARIST.EDU] On Behalf Of Rhuobhe 
[tsm-fo...@backupcentral.com]
Sent: Friday, February 04, 2011 12:25 PM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] Windows Cluster migration/update and TSM

Hello!

I have some users that are upgrading their 2003 (32-bit) 2 node cluster to 2008 
(64-bit) in the very near future. They plan on using the same two machines in 
what is called an "in-place" cluster migration. Does anyone have any experience 
with this?

I understand that the local hosts will probably do a full backup (no problem),
but I am concerned that the resource groups or attributes may be modified in
some way and generate a brand new backup. IBM has made it clear what could
generate a new backup, and that is understood, but it is the Microsoft side I
am unsure about. These resource groups hold more than 20 TB [Shocked] and I
would like any advice on how to avoid this, please.

Thanks

-TSM guy from Miami



Windows Cluster migration/update and TSM

2011-02-04 Thread Rhuobhe
Hello!

I have some users that are upgrading their 2003 (32-bit) 2 node cluster to 2008 
(64-bit) in the very near future. They plan on using the same two machines in 
what is called an "in-place" cluster migration. Does anyone have any experience 
with this?

I understand that the local hosts will probably do a full backup (no problem),
but I am concerned that the resource groups or attributes may be modified in
some way and generate a brand new backup. IBM has made it clear what could
generate a new backup, and that is understood, but it is the Microsoft side I
am unsure about. These resource groups hold more than 20 TB [Shocked] and I
would like any advice on how to avoid this, please.

Thanks

-TSM guy from Miami



TSM migration from 5.5 to 6.2

2011-01-27 Thread Ochs, Duane
Good day everyone,
I've been kicking around a couple different methods to migrate. In one of my 
sites I have the luxury of a new server and library to export to, which is 
working well.

Has anyone tried to install a fresh 6.2 instance on a current system running 
5.5 , then remove the library from the 5.5 instance and define it in the 6.2 
instance, redefine the library and drive paths on the 5.5  instance to the new 
6.2 library instance for library control, then migrate instance to instance ?

This site has a rather large 5.5 db and I'm trying to complete this without 
finding additional HW.
Thoughts ?

Duane Ochs
Information Technologies - Unix,Storage and Retention

Quad/Graphics
Innovative People Redefining Printtec
Sussex, Wisconsin
414-566-2375 phone
414-566-4010 pin# 2375 beeper
duane.o...@qg.com
www.QG.com





Re: Storage Pool Migration

2010-10-07 Thread Huebner,Andy,FORT WORTH,IT
Have you looked at move data?  You can move data from a volume to a pool.  We 
frequently use this to move data when we change storage systems.

Andy Huebner

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of 
Paul_Dudley
Sent: Wednesday, October 06, 2010 6:10 PM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] Storage Pool Migration

We have TSM 5.5. Shortly after this was originally setup at the start of this 
year data from the DIRMC storage pool was migrated to tape due to the settings 
for the DIRMC storage pool not being correct. Now 9 months later the DIRMC 
storage pool is still only at 3% of capacity. The migrated data is still on the 
tape which is only at 0.2% of capacity. This is basically wasting a tape. What 
are my options to free up this tape and make it a scratch tape again? Can I 
migrate the data from the tape back to the primary disk pool? From what I have 
read it appears not. What happens if I delete the data on the tape? Do I have 
any other options?



Thanks & Regards

Paul



Paul Dudley

Senior IT Systems Administrator

ANL Container Line Pty Limited

Email:  <mailto:pdud...@anl.com.au> pdud...@anl.com.au












Re: Storage Pool Migration

2010-10-06 Thread Prather, Wanda
MOVE DATA  tapevolname  STGPOOL=dirmcpoolname

Will move data from that tape back to the DIRMC storage pool.
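
For example (volume and pool names are placeholders; check what is on the tape
first, then confirm it has emptied):

   query content VOL001 count=20
   move data VOL001 stgpool=DIRMCPOOL wait=yes
   query volume VOL001 f=d

Once the last data is moved off, a scratch-allocated tape returns to scratch; a
volume that was explicitly defined to the pool can then be removed with DELETE
VOLUME.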

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of 
Paul_Dudley
Sent: Wednesday, October 06, 2010 7:10 PM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] Storage Pool Migration

We have TSM 5.5. Shortly after this was originally setup at the start of this 
year data from the DIRMC storage pool was migrated to tape due to the settings 
for the DIRMC storage pool not being correct. Now 9 months later the DIRMC 
storage pool is still only at 3% of capacity. The migrated data is still on the 
tape which is only at 0.2% of capacity. This is basically wasting a tape. What 
are my options to free up this tape and make it a scratch tape again? Can I 
migrate the data from the tape back to the primary disk pool? From what I have 
read it appears not. What happens if I delete the data on the tape? Do I have 
any other options?



Thanks & Regards

Paul



Paul Dudley

Senior IT Systems Administrator

ANL Container Line Pty Limited

Email:  <mailto:pdud...@anl.com.au> pdud...@anl.com.au












Storage Pool Migration

2010-10-06 Thread Paul_Dudley
We have TSM 5.5. Shortly after this was originally setup at the start of this 
year data from the DIRMC storage pool was migrated to tape due to the settings 
for the DIRMC storage pool not being correct. Now 9 months later the DIRMC 
storage pool is still only at 3% of capacity. The migrated data is still on the 
tape which is only at 0.2% of capacity. This is basically wasting a tape. What 
are my options to free up this tape and make it a scratch tape again? Can I 
migrate the data from the tape back to the primary disk pool? From what I have 
read it appears not. What happens if I delete the data on the tape? Do I have 
any other options?



Thanks & Regards

Paul



Paul Dudley

Senior IT Systems Administrator

ANL Container Line Pty Limited

Email:   pdud...@anl.com.au












Re: TSM Server migration

2010-08-12 Thread Shawn Drew
I know that AIX can't go to Windows with a DB backup/restore, but I seem
to remember that it might be possible to go to another unix, pre-TSM6.
Might be worth looking into considering Linux is an option.

Also, keep in mind that the NAS data will need to continue to live on the
Library Manager.

If you do end up restoring/re-backing up the data, just back it up using
NFS/CIFS to avoid this situation in the future.   I'm currently trying to
get free of NDMP..

Regards,
Shawn





Internet
nbstr...@leggmason.com

Sent by: ADSM-L@VM.MARIST.EDU
08/12/2010 01:54 PM
Please respond to
ADSM-L@VM.MARIST.EDU


To
ADSM-L
cc

Subject
[ADSM-L] TSM Server migration






I have a TSM V5.5 server on AIX.  I need to get all of the data to a
Windows or Linux TSM environment on x86.

Export node will work well for everything EXCEPT a NAS node (you have to
really love these little caveats and exceptions).  A backup/restore or
unload/load will probably fail due to big-endian/little-endian
incompatibilities between the architectures.

Does anyone have any recommendations for getting the NAS node data to
the new server?

I am thinking that I will have to do a full restore to a spare NAS
device and then back it all up again.

The current server is idle and the new destination server has yet to be
built so it is wide open regarding linux/windows, TSM V5/V6 - I can
pretty much do what I need to do to get this data moved (goodness knows
that I shouldn't do what I want to do...gr!)


Your comments are appreciated.
Cheers,
Neil Strand
Storage Engineer - Legg Mason
Baltimore, MD.
(410) 580-7491
Whatever you can do or believe you can, begin it.
Boldness has genius, power and magic.





Re: TSM Server migration

2010-08-12 Thread Remco Post
You mean that it's actually an archive? Anyway, yes, if the source no longer 
exists, then I guess you have to restore/backup...

On 12 aug 2010, at 20:40, Huebschman, George J. wrote:

> It is legacy data from legacy sources.
> 
> -Original Message-
> From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of
> Remco Post
> Sent: Thursday, August 12, 2010 2:38 PM
> To: ADSM-L@VM.MARIST.EDU
> Subject: Re: [ADSM-L] TSM Server migration
> 
> On 12 aug 2010, at 19:54, Strand, Neil B. wrote:
>> 
>> I am thinking that I will have to do a full restore to a spare NAS
>> device and then back it all up again.
> 
> why not just back it all up again from the source? You do a full backup
> once a week anyway.
> 
> 
> --
> Met vriendelijke groeten/Kind Regards,
> 
> Remco Post
> r.p...@plcs.nl
> +31 6 248 21 622
> 

-- 
Met vriendelijke groeten/Kind Regards,

Remco Post
r.p...@plcs.nl
+31 6 248 21 622


Re: TSM Server migration

2010-08-12 Thread Huebschman, George J.
It is legacy data from legacy sources.

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of
Remco Post
Sent: Thursday, August 12, 2010 2:38 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] TSM Server migration

On 12 aug 2010, at 19:54, Strand, Neil B. wrote:
>
> I am thinking that I will have to do a full restore to a spare NAS
> device and then back it all up again.

why not just back it all up again from the source? You do a full backup
once a week anyway.


--
Met vriendelijke groeten/Kind Regards,

Remco Post
r.p...@plcs.nl
+31 6 248 21 622



Re: TSM Server migration

2010-08-12 Thread Remco Post
On 12 aug 2010, at 19:54, Strand, Neil B. wrote:
> 
> I am thinking that I will have to do a full restore to a spare NAS
> device and then back it all up again.

why not just back it all up again from the source? You do a full backup once a
week anyway.


-- 
Met vriendelijke groeten/Kind Regards,

Remco Post
r.p...@plcs.nl
+31 6 248 21 622


TSM Server migration

2010-08-12 Thread Strand, Neil B.
I have a TSM V5.5 server on AIX.  I need to get all of the data to a
Windows or Linux TSM environment on x86.

Export node will work well for everything EXCEPT a NAS node (you have to
really love these little caveats and exceptions).  A backup/restore or
unload/load will probably fail due to big-endian/little-endian
incompatibilities between the architectures.

Does anyone have any recommendations for getting the NAS node data to
the new server?

I am thinking that I will have to do a full restore to a spare NAS
device and then back it all up again.

The current server is idle and the new destination server has yet to be
built so it is wide open regarding linux/windows, TSM V5/V6 - I can
pretty much do what I need to do to get this data moved (goodness knows
that I shouldn't do what I want to do...gr!)


Your comments are appreciated.
Cheers,
Neil Strand
Storage Engineer - Legg Mason
Baltimore, MD.
(410) 580-7491
Whatever you can do or believe you can, begin it.
Boldness has genius, power and magic.





hsm client job migration

2010-07-13 Thread Tim Brown
Testing the HSM client: when creating a migration job I choose the Source Files
tab, and whether I choose New File or New Directory, when I browse and select a
drive it shows "Drive C: on Server", but clicking the OK button doesn't produce
any response.


Tim Brown
Systems Specialist - Project Leader
Central Hudson Gas & Electric
284 South Ave
Poughkeepsie, NY 12601
Email: tbr...@cenhud.com
Phone: 845-486-5643
Fax: 845-486-5921
Cell: 845-235-4255




Re: migration when restore is in progress

2010-06-01 Thread Richard Sims
On May 31, 2010, at 4:02 AM, Mehdi Salehi wrote:

> Hi experts,
> A client is restoring a large object (say a filesystem image) from a disk
> storage pool. What happens if migration starts and the above client is still
> restoring the object that should be migrated?

TSM realizes when a session or process is actively using a resource, and won't 
take it away.
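
For illustration, the standard queries are enough to watch this happen (no
special diagnostics needed):

   query session format=detailed
      (the restore session, with bytes sent still climbing)
   query process
      (the migration process, which works around whatever the restore still
       has in use until it is released)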


migration when restore is in progress

2010-05-31 Thread Mehdi Salehi
Hi experts,
A client is restoring a large object (say a filesystem image) from a disk
storage pool. What happens if migration starts and the above client is still
restoring the object that should be migrated?

Thanks


Re: TSM 6.1.3.3 migration processes appear to idle

2010-05-25 Thread Richard Sims
I suspect that these thumb-twiddling processes are impeded by locks, but 
pursuing lock issues is beyond the ken of the software customer.
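
About all a customer can do is gather evidence for support; a minimal sketch,
bearing in mind that the SHOW commands are undocumented and unsupported:

   show locks
      (diagnostic listing of lock holders and waiters, really meant for IBM)
   query actlog begintime=-01:00 search=migration
      (look for messages around the time the process goes quiet)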

Richard Sims


Re: TSM 6.1.3.3 migration processes appear to idle

2010-05-25 Thread Keith Arbogast
Wanda,
I am seeing something roughly similar, but different enough that it may be 
irrelevant to your question-- or well understood, but not by me.  

My TSM server is on Linux and is at 5.5.4.1. A backup of a primary tape pool
to an offsite library is scripted to start 11 processes, on the assumption that
unneeded processes will terminate. Today it starts all eleven processes, but
only one actually copies files offsite. The others are idle, have not allocated
tape drives, and have not terminated. They look like this:
  
  789 Backup Storage Pool   Primary Pool 3584INBA, Copy Pool ICTCCOPY,
                            Files Backed Up: 0, Bytes Backed Up: 0,
                            Unreadable Files: 0, Unreadable Bytes: 0.
                            Current Physical File (bytes): None
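
One alternative, if it fits, is to let the server manage the parallelism with a
single command; a sketch using the pool names above:

   backup stgpool 3584INBA ICTCCOPY maxprocess=11 wait=no

I believe the work is split by node, so one very large node can still leave
most of the processes with little to do, but processes that find nothing to
copy should end rather than sit on a drive.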
   

Best wishes,
Keith Arbogast

 

TSM 6.1.3.3 migration processes appear to idle

2010-05-24 Thread Prather, Wanda
This is one of those "anybody seen this?" questions.

This TSM server is unfortunately plagued with several clients that send
humongous files.

 

Migration is sometimes triggered by the disk pool filling up, and sometimes does
not occur until the maintenance script starts migration by command.

3 migration processes run concurrently.

What I notice often is the situation as shown below.

One or two of the migration processes are busy chewing on these really
large files, and the others appear to be idle.

 

The apparently idle session (in this case 113) will sit for long periods
(sometimes hours) with:

- q proc showing "current physical file: NONE"

-no increase in files/bytes moved

-and q session shows no backup sessions in progress, either

 

I'm naturally wondering why it's sitting there.

 

Does anybody else see this behavior in any release of the server?

Anybody know if this is WAD?  

 

I would certainly prefer the extra session to terminate and release the
drive.

 

Thx

 

 

  112 Migration   Disk Storage Pool DISKPOOL, Moved Files: 293,
                  Moved Bytes: 1,157,898,240, Unreadable Files: 0,
                  Unreadable Bytes: 0. Current Physical File
                  (bytes): 554,838,114,304 Current output
                  volume: 700001.

  113 Migration   Disk Storage Pool DISKPOOL, Moved Files: 189,
                  Moved Bytes: 210,037,596,160, Unreadable Files: 0,
                  Unreadable Bytes: 0. Current Physical File
                  (bytes): None Current output volume: 700127.

  114 Migration   Disk Storage Pool DISKPOOL, Moved Files: 10825,
                  Moved Bytes: 313,846,050,816, Unreadable Files: 0,
                  Unreadable Bytes: 0. Current Physical File
                  (bytes): 395,597,819,904 Current output
                  volume: 700071.
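
For what it's worth, the only relief I know of while that behaviour persists is
to cancel the idler and tune the process count; a sketch using the process
number and pool name from the output above:

   cancel process 113
      (ends the idle migration process and frees its drive; 112 and 114
       keep running)
   update stgpool DISKPOOL migprocess=2
      (fewer concurrent migration processes may suit a pool dominated by
       a handful of huge files)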


 

Wanda Prather  |  Senior Technical Specialist  | wprat...@icfi.com  |
www.jasi.com

ICF Jacob & Sundstrom  | 401 E. Pratt St, Suite 2214, Baltimore, MD
21202 | 410.539.1135  

 


Re: what filespaces are candidate for migration?

2010-05-18 Thread Mehdi Salehi
That was a misunderstanding, David.

Regards,
Mehdi


Re: what filespaces are candidate for migration?

2010-05-17 Thread David McClelland
>> Thanks for the link.

No problem.

>> P.S: Nobody is born as expert.

Don't get me wrong, I'm not trying to push myself as more or less an expert
than anyone else on here, and I apologise if it came across otherwise.

My point is simply that I feel it is polite and good etiquette to the other
members of the list to do at least a little research into our own questions
first of all, either by searching previous posts, doing a simple Google or
lookup on IBM.com - the IBM docs are authoritative and comprehensive and,
with TSM Information Centre at least, very easy to search or look up in the
index. Obviously, they don't answer all questions, and that's when a user
community like this excels.

Rgds,

//David Mc, London UK


-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of
Mehdi Salehi
Sent: 17 May 2010 10:53
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] what filespaces are candidate for migration?

Thanks for the link.

P.S:
Nobody is born as expert.



Re: what filespaces are candidate for migration?

2010-05-17 Thread Mehdi Salehi
Thanks for the link.

P.S:
Nobody is born as expert.


Re: what filespaces are candidate for migration?

2010-05-17 Thread David McClelland
As always, this basic stuff is easily accessible in the IBM docs >>
http://publib.boulder.ibm.com/infocenter/tivihelp/v1r1/topic/com.ibm.itsmaixn.doc/anragd55295.htm#idx921
"How Server Selects Files to Migrate"

//DMc, London, UK

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of
Mehdi Salehi
Sent: 17 May 2010 09:53
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] what filespaces are candidate for migration?

Hi,
- When a storage pool reaches its high migration threshold, how does TSM
select the filespaces to migrate?
- When a filespace is migrated, does it include all active and inactive
backups?

Thanks



what filespaces are candidate for migration?

2010-05-17 Thread Mehdi Salehi
Hi,
- When a storage pool reaches its high migration threshold, how does TSM
select the filespaces to migrate?
- When a filespace is migrated, does it include all active and inactive
backups?

Thanks
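
As I understand the selection, the server picks the node and filespace taking
up the most space in the pool and migrates everything that filespace has in
that pool, active and inactive versions alike. A SELECT against the OCCUPANCY
table shows the likely order (the pool name here is a placeholder):

   select node_name, filespace_name, physical_mb from occupancy where stgpool_name='DISKPOOL' order by physical_mb desc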


Re: TSM 5.4 to 5.5 migration

2010-03-31 Thread Howard Coles
Yoda, I don't want to sound condescending, so please excuse me if this comes
across that way.  It appears from some of the questions you're asking that you
would benefit GREATLY from taking some time to read at least the TSM Redbooks
and some of the Administrator's Guide.  I would recommend the Deployment Guide
Redbook and the Technical Guide.

To be specific on the question below: you need to log in to the TSM server
console and run "halt" after verifying that there are no processes running and
no tapes mounted.  Then make sure all TSM services are stopped.  I would also
imagine that you need to be logged into the Windows server as Administrator or
equivalent.
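
Roughly, the pre-halt checks as commands, from an administrative client or the
server console:

   disable sessions
      (stops new client sessions from starting)
   query process
   query session
   query mount
      (confirm nothing is running and no tapes are mounted)
   halt
      (stops the server; then stop the TSM services in Windows before
       running the uninstall)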


See Ya'
Howard Coles Jr.
John 3:16!

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of
yoda woya
Sent: Wednesday, March 31, 2010 1:18 PM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] TSM 5.4 to 5.5 migration

Why does 5.4 need to be uninstalled?
If Add/Remove Programs will not allow the uninstall, what other avenues are
there to uninstall it?


TSM 5.4 to 5.5 migration

2010-03-31 Thread yoda woya
Why does 5.4 need to be uninstalled?
If Add/Remove Programs will not allow the uninstall, what other avenues are
there to uninstall it?


Re: Migration to a new SAN Switch

2010-03-11 Thread Richard Rhodes
Were you running Extended Fabric on the old switch?  If so, is it set up on
the new switch ports for the ISLs over the DWDM?

Check for port errors on the VTL connection and the ISL links.

Rick
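
For the basics, the Brocade FOS CLI is enough; a rough sketch (the port number
is made up), and if Extended Fabric was licensed on the old switch, the
long-distance mode (portcfglongdistance) has to be configured on the new ISL
ports too:

   porterrshow
      (watch for crc_err, enc_out and disc_c3 counters climbing on the
       VTL and ISL ports)
   portshow 12
      (per-port detail for a suspect port, including negotiated speed)
   islshow
      (confirms the ISL over the DWDM is up and at the expected speed)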





From: woodbm, sent by "ADSM: Dist Stor Manager", 03/11/2010 02:26 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Migration to a new SAN Switch






Hello all,

I need some help.  I recently moved to a new Brocade SAN switch running at 4
Gb.  My copy pools, which go through the SAN switch and down the DWDM to reach
the copy pool library (which connects the same way), are running very poorly.
They average around 7 MB/s.  Prior to the move we were running around 40 MB/s.
I am using an EMC EDL4100 VTL.  I have run diagnostics on the library and that
doesn't seem to be the issue.  It has to be some setting on the new switch,
but I am not a SAN expert.  We are running TSM 5.5.2 on AIX 6.1.  I would
appreciate any suggestions you may have.

Thanks
Bryan



  1   2   3   4   5   6   7   8   9   10   >