Re: Migration should preempt reclamation

2016-02-18 Thread David Ehresman
Could you reduce the "Reclamation Processes" count on the storage pool by one, leaving a tape drive free for migration? David
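
A minimal sketch of what David suggests, assuming the pool currently runs more than one reclamation process (the pool name is hypothetical):

  update stgpool OFFSITE-LTO reclaimprocess=1
  query stgpool OFFSITE-LTO format=detailed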

Re: Migration should preempt reclamation

2016-02-18 Thread Rhodes, Richard L.
This is a tough one. On the one hand, we want Reclamation to use as many tape drives as possible, but not consume them all. We also have multiple TSM instances wanting library resources. The TSM instances are blind to each other's needs. This _IS_ difficult to control. The _current_ solution
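
One blunt control is to cap the mount limit on each instance's device class so no single instance can grab every drive; a sketch with a hypothetical device class name, assuming eight shared drives split between two instances:

  update devclass LTOCLASS mountlimit=4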

Re: Migration should preempt reclamation

2016-02-18 Thread Skylar Thompson
We've had this problem as well. Our fix has been to define a maintenance script MANUAL_RECLAIM that reclaims each storage pool in parallel, but with a duration of 3 hours:

  PARALLEL
  RECL STG DESPOT-OFFSITE-LTO TH=60 DU=180 W=Y
  ...
  SERIAL

An admin schedule will run the script every four hours, exce
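
A sketch of how such a script and its schedule might be defined (the second pool name and the start time are hypothetical; the script line numbers are arbitrary):

  define script MANUAL_RECLAIM "parallel" line=5 desc="Time-boxed parallel reclamation"
  update script MANUAL_RECLAIM "reclaim stgpool DESPOT-OFFSITE-LTO th=60 du=180 wait=yes" line=10
  update script MANUAL_RECLAIM "reclaim stgpool DESPOT-ONSITE-LTO th=60 du=180 wait=yes" line=15
  update script MANUAL_RECLAIM "serial" line=20
  define schedule RUN_RECLAIM type=administrative cmd="run MANUAL_RECLAIM" active=yes starttime=00:00 period=4 perunits=hours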

Re: Migration should preempt reclamation

2016-02-18 Thread Richard Cowen
A similar script could use MOVE DATA instead of RECLAIM, which has the advantage of checking, as each MOVE ends, whether drive resources are available, and of intelligently picking a volume before starting a new MOVE. It can also check an external file for a "pause" or "halt" command, or parse the ac
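
A rough shell sketch of that idea, assuming hypothetical admin credentials and pool name, and using low utilization on FULL volumes as the selection heuristic:

  #!/bin/sh
  # Pick the emptiest FULL volume in the pool and move its data, one MOVE at a time.
  POOL="OFFSITE-LTO"
  SQL="select volume_name from volumes where stgpool_name='$POOL' and status='FULL' and pct_utilized<40 order by pct_utilized"
  VOL=$(dsmadmc -id=admin -password=secret -dataonly=yes "$SQL" | head -1)
  [ -n "$VOL" ] && dsmadmc -id=admin -password=secret "move data $VOL wait=yes"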

Re: Migration should preempt reclamation

2016-02-18 Thread Rick Adamson
Roger, worth mentioning is that the issue is most likely compounded by the migration/reclamation processes running during a backup; the performance hit to the TSM server can be severe. You may find a significant reduction in task processing times if you do isolate them. In the past I have temporar
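
One common way to isolate them is a pair of administrative schedules that raise the reclamation and migration thresholds during the backup window and lower them again afterwards; a sketch with hypothetical pool names and times:

  def sched RECLAIM_OFF type=administrative cmd="update stgpool OFFSITE-LTO reclaim=100" active=yes starttime=18:00 period=1 perunits=days
  def sched RECLAIM_ON type=administrative cmd="update stgpool OFFSITE-LTO reclaim=60" active=yes starttime=08:00 period=1 perunits=days
  def sched MIGRATE_OFF type=administrative cmd="update stgpool DISKPOOL highmig=100 lowmig=100" active=yes starttime=18:00 period=1 perunits=days
  def sched MIGRATE_ON type=administrative cmd="update stgpool DISKPOOL highmig=90 lowmig=70" active=yes starttime=08:00 period=1 perunits=days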

more than one vtl per storage pool

2016-02-18 Thread Lee, Gary
We are currently running TSM 6.3.4 with plans to upgrade to 7.1.x some time this year. Our tape libraries (3494s) go off support next year. I have been informed that our solution will be Amazon S3 storage. Using their storage gateway and VTL interface, certain limits are imposed. 1. Limi

Replicate node to container storage

2016-02-18 Thread David Ehresman
Anyone doing REPLICATE NODE from non-container storage to a target server storing the data in a container pool? I'm in the early days and am getting about 500 GB per 12 hours. I was hoping for better than that. Neither source nor target server appeared to be stressed. What kind of throughput are you gettin
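
For comparison, one knob that often matters is the replication session count; a minimal sketch, assuming a hypothetical node name:

  replicate node PRODNODE1 maxsessions=10 wait=yes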

WAN performance

2016-02-18 Thread Tom Alverson
I am seeing very poor WAN performance on all of my (WAN-based) TSM backups. Due to the latency (40 msec typical) I normally only get about 20% of the available bandwidth used by a TSM backup. With EMC Networker I get over 90% utilization. I have already set all of these recommended options: RES
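
For context, throughput over a long link is capped by TCP window divided by round-trip time, so at 40 ms a 64 KB window tops out around 1.6 MB/s regardless of link speed. A hedged client option sketch (values illustrative, not taken from the thread), assuming a roughly 100 Mbit/s WAN link that needs about 500 KB in flight:

  * dsm.opt sketch for a high-latency WAN link
  * 100 Mbit/s x 0.040 s RTT ~= 500 KB bandwidth-delay product
  RESOURCEUTILIZATION 10
  TCPBUFFSIZE         512
  TCPWINDOWSIZE       512
  DISKBUFFSIZE        1023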

Re: WAN performance

2016-02-18 Thread Skylar Thompson
I thought TCPBUFFSIZE could only go up to 64? It could be that setting it to 512 actually sets it to the default of 16.

Re: WAN performance

2016-02-18 Thread Hans Christian Riksheim
I have had luck with setting tcpwindowsize 0 on server and client and letting the OS handle it. Also diskbuffsize 512. Hans Chr.
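
A minimal sketch of that variant (option file locations assumed; the server-side change may need a restart to take effect):

  * dsm.sys / dsm.opt on the client: let the OS tune the TCP window
  TCPWINDOWSIZE 0
  DISKBUFFSIZE  512

  * dsmserv.opt on the server
  TCPWINDOWSIZE 0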

Re: WAN performance

2016-02-18 Thread Tom Alverson
Looks like 512 is OK in the 7.1 docs: *IBM Tivoli Storage Manager, Version 7.1* -- Tcpbuffsize The tcpbuffsize option specifies the size of the internal TCP/IP communication buffer used to transfer data between the client node and server. Although it uses more memory,

Re: more than one vtl per storage pool

2016-02-18 Thread Remco Post
> On 18 feb. 2016, at 17:39, Lee, Gary wrote: > > We are currently running tsm 6.3.4 with plans to upgrade to 7.1.x some time > this year. > > Our tape libraries (3494s) go off support next year. I have been informed > that our solution will be amazon s3 storage. > > Using their storage gatew

Re: more than one vtl per storage pool

2016-02-18 Thread white jeff
So two or more libraries? A stgpool needs a device class. The devc would (I assume) be LTO, and the device class needs a library name, i.e. def devc devc1 library=lib1 etc. The library definition, I assume, is VTL: def lib lib1 libtype=vtl serial=autodetect ...etc. So if there was a second lib
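
A sketch of those definitions for two VTLs, with hypothetical names (drive and path definitions omitted); since a device class points at exactly one library, a second VTL generally means a second device class and storage pool:

  def library vtl1 libtype=vtl serial=autodetect
  def library vtl2 libtype=vtl serial=autodetect
  def devclass vtl1class devtype=lto library=vtl1
  def devclass vtl2class devtype=lto library=vtl2
  def stgpool vtl1pool vtl1class maxscratch=200
  def stgpool vtl2pool vtl2class maxscratch=200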

Re: WAN performance

2016-02-18 Thread Tom Alverson
Thanks for that info. I tried both of those settings at the same time and it seems to have helped. I will do some more testing but will probably keep both settings even if only one is helping the WAN speed. The DISKBUFFSIZE can be set as high as 1023 (those are only kbytes). Is there any reaso

Re: WAN performance

2016-02-18 Thread Hans Christian Riksheim
1023 should be fine. I may have fiddled with the parameter at a time when 512 was the max. I remember that DISKBUFFSIZE had a vast impact on performance when backing up a NAS from a CIFS share over a high-speed, high-latency link to the TSM server. Hans Chr.