cancel a session with MediaW
Hi, how can I cancel a session that is in media wait (MediaW) in a manual library? 'cancel session' does not work: TSM keeps the waiting session even if you restart the client node. (TSM server 5.3 on AIX 5.3.) The session only dies after the 60-minute timeout. Thanks
Re: cancel a session with MediaW
Some time ago I posed this question to IBM, and as I understand it, their response was that an active session will not cancel until it completes its current task. For example, if a process or session has requested a tape/volume and you issue a command to cancel the session and/or process, TSM will queue your cancel request and wait for the mount to complete. My understanding is that this is by design, to avoid interrupting a task in a way that could have undesirable consequences. ~Rick
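In a manual library the wait is usually on an outstanding operator mount request, and resolving that request is what lets a queued session cancel go through. A minimal sketch from an administrative (dsmadmc) session; the session and request numbers are illustrative:

```shell
query session format=detail    # find the session number; note the MediaW state
cancel session 42              # the cancel is queued until the mount is resolved
query request                  # list outstanding mount requests (manual library)
cancel request 3               # cancelling the mount request unblocks the cancel
```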
Re: cancel a session with MediaW
They did change the way this operates after TSM 5.3, although I can't remember precisely which version; it was changed by at least 5.5.2. I've noticed processes finishing much more quickly when I cancel them. For example, if a migration was working on a big 800 GB file and I tried to cancel the process, I used to have to wait until it finished that file. That doesn't happen anymore; the process cancels immediately now. I'm not sure if that directly correlates with this issue, as I haven't canceled a MediaW session in a while. Regards, Shawn Drew
Re: cancel a session with MediaW
Agreed, I noticed this also, Shawn. ~Rick
TDPSQL restore - MediaW
Hi, I'm on TSM server v5.2 on Win2000, with the TSM TDP for SQL client v5.2 on Win2000. I've done TDP SQL restores successfully in the past, but now I've got a strange problem. There are two sessions between server and client: one is in SendW, the other in MediaW. There is only one mount, and I've got three empty drives. One of the tape paths has a problem and has been taken offline. I wonder why the mount is not done on one of the other empty drives. Any suggestions? Thanks, Yiannakis Vakis, Systems Support Group, I.T. Division. Tel. 22-848523, 99-414788, Fax. 22-337770
Re: TDPSQL restore - MediaW
On Feb 2, 2005, at 5:09 AM, Yiannakis Vakis wrote: "I wonder why the mount is not done on the other empty drives." Pursue the details of the situation. You report MediaW, but you need to pursue that in detail to see whether it is waiting on a tape or a drive. Might it be the case that the drive with the problem has the needed tape stuck in it, for example? In issuing a 'Query SEssion' command, be sure to use Format=Detail to get the whole story. You can also issue 'SHow LIBRary' to get more physical info about the state of the library and drives, if necessary. Sometimes it is necessary to visually inspect drives to see what's going on. Richard Sims
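The commands Richard mentions, plus a couple of related queries, might look like this from a dsmadmc session (output will vary by environment):

```shell
query session format=detail    # full session detail, including what media/mount
                               # the MediaW session is actually waiting on
show library                   # undocumented diagnostic: library and drive state
query drive format=detail      # are all drives online?
query path format=detail       # are the drive paths online?
```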
Re: TDPSQL restore - MediaW
Not sure if this applies to you or not, but we ran into the same type of problem trying to do a multi-session restore using TDP for SQL. The details are a bit fuzzy since it has been a while, but the problem was caused by multiple restore streams looking for the same media. When we ran a single restore stream we did not have the problem. Others may have more detailed insight on how to configure TDP to allow problem-free multi-session restores, but we simply decided to opt for single streams.
Re: TDPSQL restore - MediaW
If you want to perform multi-session (striped) backup and restore, you should turn on collocation by filespace, so that when the data is migrated or sent to tape, the striped data will remain on separate tapes. This is discussed in the User's Guide. Thanks, Del
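A sketch of the two pieces involved, assuming a tape pool named TAPEPOOL and a stripe count of 2 (both illustrative); check the Data Protection for SQL User's Guide for the exact striping syntax on your level:

```shell
# On the TSM server: keep each stripe's data on its own tape
update stgpool TAPEPOOL collocate=filespace

# On the client: a striped TDP for SQL backup, e.g.
tdpsqlc backup master full /stripes=2
```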
Re: Client backups in MediaW state
This was already discussed on the list. Set maxnummp=0 for lower-priority nodes, lower your highmig threshold for the disk pool, and/or increase the pool size as Ryan already suggested. Disks do not cost a fortune today. Zlatko Krastev, IT Consultant
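Zlatko's two suggestions, sketched as admin commands (node and pool names are illustrative):

```shell
update node NODEA maxnummp=0        # lower-priority node gets no tape mount
                                    # points, so it never queues for a drive
update stgpool DISKPOOL highmig=70 lowmig=30   # start migration earlier, so the
                                               # disk pool keeps free space
```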
Client backups in MediaW state
We often run into this situation: Client backups normally go to the disk storage pool, but if it fills up, they start going directly to tape. We have many more clients than tape drives, so we eventually have a number of clients in MediaW state. Eventually, migration frees up a good portion of the disk storage pool, but we don't see the clients in MediaW state becoming aware of this. Instead, they just sit in MediaW state waiting in line for a tape to become available. Is there a sure-fire way to cause all the client sessions (or a given client session) in MediaW state to immediately revert to using the disk storage pool again? If the answer is no, is there a way to raise the priority of a given client session in MediaW state so that it gets the next tape that becomes available? Bob Brazner Johnson Controls, Inc. (414) 524-2570
Re: Client backups in MediaW state
Increase your disk pool; that's the best solution.
Re: AW: MediaW
Richard, I'm glad you mentioned VirtualMountPoint. Is there ANY way to simulate that in Windows? Gosh, Tab, are you still using the world's last wholly proprietary operating system and enduring all the pain of System Objects?? ;-) I'm not aware of any way in Windows to achieve the same effect. VIRTUALMountpoint is limited to the Unix environment. (And, oddly, though Macintosh OS X is certainly Unix, the TSM Mac client does not support VIRTUALMountpoint. As of TSM 5.2, the Mac client does support Unicode, as the Windows client has. Your features may vary.) Richard Sims, BU
Re: AW: MediaW
Richard, Thanks. "Gosh, Tab, are you still using the world's last wholly proprietary operating system and enduring all the pain of System Objects?? ;-)" Yes. Even though we sometimes have to spend as much as $2500 for a new server, and only about 98 out of every 100 job applicants know anything about it. ;-) I have another approach I want to try to simulate virtual mount points in Windows. If successful, I'll share it with the forum. Tab
Re: AW: MediaW
Tab, I'm thinking you could have a script/cmd file that shares out the directories on the drive, then assigns drive letters to the shares, and another script that undoes this, and have them as your pre and post-sched commands. Or even a script that would share a directory, attach to it, back it up, detach, unshare, then go on to the next one. What you're trying to do isn't really pretty, but might be fun to play with. Nick Cassimatis [EMAIL PROTECTED] Think twice, type once.
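A rough Windows sketch of Nick's idea, with entirely hypothetical share, path, and server names; real scripts would need error handling and would be wired in via the client's preschedulecmd/postschedulecmd options:

```shell
rem pre-schedule: expose a directory as a share and map it to a drive letter
net share data1=D:\data1
net use X: \\MYSERVER\data1

rem ... the scheduled backup then sees X: as its own file space ...

rem post-schedule: undo the mapping and the share
net use X: /delete
net share data1 /delete
```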
Re: MediaW
Our target is satisfactory restore time. It has been our experience that if one has an adequate setup for restore, the backups run just fine. In our shop, adequate restore time requires collocation. That said, in my shop one must also take into account workplace politics; if that is not the case at your place of employment, good for you! That begs the question of why a collocated session cannot have multiple backup threads. In my ideal design of things, if a collocated backup uses multiple sessions/threads, then it uses multiple tapes, instead of having one session wait until the first is done with the single tape. David
AW: MediaW
I'm not quite sure where your problem is: you want both collocation and parallel backup threads. Using a (large enough) disk primary storage pool as a backup cache gives you the ability to back up using many threads. Migration from disk to tape can use multiple tapes as well. The only thing that does not work is concurrent migration of a single node's/filesystem's data to more than one tape, but that would contradict your own collocation requirement. And this limitation causes a significant slowdown only if you have one extra-large node/filesystem; once you have two or more large nodes it does not matter. Or are you thinking about restores while speaking about backups? In that case Zlatko's reply about disabling collocation might make sense. Regards, Juraj
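The disk-cache arrangement Juraj describes is the standard primary-pool chain; a sketch with illustrative pool names:

```shell
# Backups land in DISKPOOL (many parallel sessions, no tape mount points
# needed), then migrate to TAPEPOOL, where collocation is enforced:
update stgpool DISKPOOL nextstgpool=TAPEPOOL highmig=80 lowmig=20 migprocess=4
update stgpool TAPEPOOL collocate=yes
```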
Re: AW: MediaW
That begs the question of why a collocated session can not have multiple backup threads. Collocation is often diminished in List discussions for lack of qualification. That is, there are two types of Collocation: by node, and by filespace. Therein lies additional opportunity, further enhanced by VIRTUALMountpoint. Subdivide and conquer. The beauty of the product is all the flexibility it offers in tailoring backup and restoral. Richard Sims, BU
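The two knobs Richard names, sketched with illustrative names and paths:

```shell
# Server side: collocate by filespace rather than by node
update stgpool TAPEPOOL collocate=filespace

# Client side, in dsm.sys (Unix): split a big filesystem into separate
# file spaces so they can back up, and collocate, independently:
#   VIRTUALMountpoint /bigfs/projects
#   VIRTUALMountpoint /bigfs/home
```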
Re: AW: MediaW
Richard, I'm glad you mentioned VIRTUALMountpoint. Is there ANY way to simulate that in Windows? What I've found is that the TSM client can back up the local machine via its shares, and that gives better granularity if you want to use backup sets, especially on a large file server. But what I would like to do is create a share point that no one can connect to, so that we don't end up with weird mappings out in the field. Once you do that, though, TSM can't connect to it either, and you're right back to going through the local file system. Any ideas? Thanks. Tab Trepagnier, TSM Administrator, Laitram LLC
Re: MediaW
David, why not create another primary pool and direct that node into its own pool (or one shared with a few nodes with similar requirements)? Setting collocation off will then have little or no impact on your restores. In fact, by allowing backups to parallelize, you set the ground for parallel restores. Zlatko Krastev, IT Consultant

David E Ehresman wrote on 11.06.2003: Thanks Geoff. We'd much rather have the session wait during backups than during restores, so we'll leave collocation on and ignore the MediaW. David
Re: MediaW
IBM/TSM reps spent a lot of capital, and management spent a lot of money, to get us to an environment where we COULD collocate. There's no going back now. David
Re: MediaW
What is your *exact* target: collocation, IBM/Tivoli happiness, or faster backups/restores? Keep in mind that even IBMers are human beings and can err. Sometimes disabling collocation might give *improvements* (though usually it does not). Zlatko Krastev, IT Consultant
Re: MediaW
Collocation is set YES. Can collocation not go to multiple tapes?
Re: MediaW
Yes, there are adequate scratch volumes, and drives and paths are online. Thanks, David
Re: MediaW
David, we have had these same types of problems when collocation is set to yes. Try turning it off and running your backup again; I bet you will see it pick up more drives. Geoff
Re: MediaW
I ran into a similar situation recently. We just recently upgraded to v5.1.x and had no knowledge of the Q PATH command. It appears that when drives go offline in v5.x due to a hardware problem, their path can go offline as well. I corrected the problem by issuing an UPDATE PATH command. Helpful commands: Q(uery) PATH, UPD(ate) PATH, DEF(ine) PATH. Possibly this is your problem. Greg P. Tice, Enterprise Storage Management, Schneider National, Inc., www.schneider.com
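Greg's path check might look something like this from an administrative client. The server, drive, and library names below are placeholders, not taken from this thread:

```
/* Show all paths and their online status */
Query PATH Format=Detailed

/* Bring a drive path back online after a hardware problem
   (SERVER1, DRIVE1, and LIB1 are placeholder names)          */
UPDate PATH SERVER1 DRIVE1 SRCType=SERVer DESTType=DRive LIBRary=LIB1 ONLine=Yes
```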
Re: MediaW
Thanks Geoff. We'd much rather have the session wait during backups than during restores, so we'll leave collocation on and ignore the MediaW. David
Re: MediaW
If warranted, you can momentarily toggle the active tape volume status from READWrite to READOnly, then back, to cause the multiple sessions to each get their own output volumes. Richard Sims, BU
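A minimal sketch of Richard's toggle, assuming a hypothetical volume name VOL001:

```
/* Make the busy output volume temporarily read-only...          */
UPDate Volume VOL001 ACCess=READOnly

/* ...so that waiting sessions mount their own output volumes,
   then restore normal access once they have their own mounts    */
UPDate Volume VOL001 ACCess=READWrite
```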
MediaW
I have a client node that backs up over the LAN to tape. When I do a Q SESS during a backup, there are 3-4 sessions running. One session has a tape mounted, but one of the others remains in MediaW. The node has Maximum Mount Points Allowed: 2 defined. The device class has Mount Limit: DRIVES defined. There are empty tape drives available. Any ideas why a tape is not being mounted? TSM server is TSM 5.1.6.3 running on AIX 5.1 in 64-bit mode. Client is TSM 5.1.5.11 running on AIX 5.1 in 32-bit mode. David Ehresman
Re: MediaW
David, have you verified that collocation is set to NO on the tape storage pool that this client is backing up to? Geoff Raymer, EDS - Tulsa, BUR and Leveraged Storage, MS 326, 4000 N. Mingo Road, Tulsa, OK 74116-5020; phone: +01-918-292-4364; cell: +01-918-629-1819; www.eds.com
Re: MediaW
Any ideas why a tape is not being mounted? This may not be the issue, but is the destination stgpool configured with an adequate number of scratch volumes allowed (MAXSCRATCH)? One other thing to look at - are all drives and paths online? Ted
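Ted's two checks can be run from the admin command line roughly as follows; the pool name TAPEPOOL and the MAXSCRATCH value are placeholders for illustration:

```
/* Compare "Maximum Scratch Volumes Allowed" with the number already used */
Query STGpool TAPEPOOL Format=Detailed

/* Raise the ceiling if the pool has exhausted its scratch allowance */
UPDate STGpool TAPEPOOL MAXSCRatch=200

/* Verify that drives and their paths are online */
Query DRive
Query PATH
```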
Re: MediaW
Are you using collocation? If so, each session will want the same tape. When you Q SESS, you should see which tape each session is trying to mount, or has mounted.
Re: MediaW in Summary Table
This forum has repeatedly discussed unreliable data coming from the Summary table. Best to stay away from it (or at least ignore it). -- Mark Stapleton ([EMAIL PROTECTED])
MediaW in Summary Table
Hi all, we are on TSM server version 5.1.6.0, and I have noticed that the stats for MediaW in the Summary table are incorrect some of the time, but not all of the time. In doing a LAN-free backup, the total time for the backup was 43 minutes, yet the Summary table showed 2143 secs of MediaW time (35 min). If you back out the media wait time, that makes the backup throughput rate an astronomical number that I know we are not getting. This issue does not seem to occur on every backup performed. Anyone else experienced this? Regards, Matt Adams, Tivoli Storage Manager Team, Hermitage Site Tech, Deloitte and Touche USA LLP
Re: MediaW
Is there any way to force the client sessions out of MediaW and to start sending data to my disk storage pool again now that they are empty? Try an UPDate Volume to momentarily set the migration destination tape to Readonly, then back to Readwrite after things are going where you want them to. Richard Sims, BU
MediaW
My storage pools filled up, causing my backups to go into a MediaW state waiting for a tape drive to free up. My tape drives were busy migrating data to free space in my storage pools. Is there any way to force the client sessions out of MediaW and to start sending data to my disk storage pool again now that they are empty? Thanks
MediaW slows restore significantly
Has anyone noticed that when doing a restore from tape, it almost grinds to a halt when another process/session requests the same tape? We had this happen when an impatient client decided to run 2 restores, where the data for the 2nd restore was on the same tape. The restore slowed from 5MB/s to 30kB/s. When I killed the 2nd restore session, it sped up. Similarly, later that evening, the system decided to migrate data to that same volume and it did the same thing. Bug/feature? Cheers, Suad
MediaW problem
I run my backups at night around 10:30pm. I have around 50 servers doing incrementals concurrently at this time. I noticed that about half are experiencing MediaW when I do a "q session". They seem to be waiting for a tape to load in my library for them to write to, but the storage pool these clients write to first is disk based; only when it fills to 90% should it migrate to tape. My question is: if they are supposed to be going to the disk storage pool first and it isn't full, then why are they waiting for tape? Which is what I am assuming is happening. Thanks, Richard
Re: MediaW problem
My question is: if they are supposed to be going to the disk storage pool first and it isn't full, then why are they waiting for tape? Richard - This is probably due to a large file which will not fit into the space remaining in the disk storage pool, such that it must go into the next storage pool in the hierarchy. This is the awkwardness of having a disk storage pool as the entry point for backups: they tend to overflow anyway and get into the tape dependency we seek to avoid. Richard Sims, BU
Re: MediaW problem
You probably are not initiating your migration processes early enough; you might have to take the high-migration threshold down to about 80%. Also, if your clients run with client compression, they preallocate space based on the uncompressed file size and then release any unused portion after the transfer is complete. In other words, if you have 4 sessions sending 25 GB each but they get a good 4:1 compression, you would still need 100 GB in your diskpool to prevent any from going straight to tape. Dwight
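Dwight's "start migration earlier" advice amounts to lowering the migration thresholds on the disk pool. A sketch, with a hypothetical pool name and illustrative threshold values:

```
/* Trigger migration at 80% full instead of 90%, and drain further down */
UPDate STGpool DISKPOOL HIghmig=80 LOwmig=40
```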
Re: MediaW problem
Also check your maximum file size for the storage pool. If the clients contain a file that exceeds the maximum file size, then the data will go to the next storage pool, which is probably tape. Jim Sporer

At 03:51 PM 4/4/2001 -0400, you wrote: Do a "q stg" when this happens and see what the "Pct Util" and "Pct Migr" are for the disk storage pools in question. If Pct Util is anywhere near, say, 90% when a bunch of clients are sending data, then some of them WILL start trying to go directly to tape. Why, you say? Example: you have a 10GB storage pool that is 90% full. 2 clients each start to send a new 1 GB file. Immediately the Pct Util would theoretically be 110%, as space is allocated for the size of the file, but it will take some time for the file to be completely transferred. So one of them will not be able to get its space, as we can't be over 100%, and it will therefore attempt to go directly to tape. Complicate this with many clients and many different size files (and probably a few fudge factors) and you will get these results. We have seen backups start to go to tape when our stg pool was about 90%+ full with a handful of clients sending 2GB files. If you are getting about 90% full at any time then you probably need to either: 1. Spread out the backups so not as many machines are going to the server at once. 2. Increase the disk space available to the disk storage pools that are being filled. David B. Longo, System Administrator, Health First, Inc., 3300 Fiske Blvd., Rockledge, FL 32955-4305, PH 321.434.5536
Re: MediaW problem
To confirm that they are waiting on tape, do a "q ses f=d". Regards, Alex.
FW: MediaW, Tape drive availability, Disk STGpool space and understanding what TSM is doing....
It happens when you don't have sufficient free space in your disk pool for a client to send its backup data. The client doesn't just wait: TSM automatically tries to switch the client to direct-to-tape operation rather than failing the backup. THEN the client will wait until a mount point (drive) becomes free (unless you specify MAXNUMMP=0 in the definition of the node when it is registered). That appears to be what is happening, since you have a migration for disk pool SERVER in progress and it is still 80% full. The pool doesn't have to be totally full to trigger this condition, either: just too full for the largest thing the client wants to send. So if there is still some free space in the pool, you may have some clients still backing up successfully to disk while others grab a tape. However, once a client is queued to wait for a tape (mount point), even if migration does clear some space in the disk pool, the client will not get switched BACK to the disk pool; it will still finish its backup direct to tape.

There is nothing really wrong with this; the data is getting where it is supposed to go, and your backups are working and not failing due to the disk pool filling up (that's what TSM is supposed to do for you, yes?). So you can just ignore it! Or, here are some things you can do if you need to PREVENT clients using the tape drives:

- Add more space to the disk pool, as you are planning to do.
- Force a migration of the disk pool down to 0 before most of these backups start, to make sure you have the maximum available amount of free space.
- Turn on compression on the client, so less data comes into the disk pool. (This has its own potential drawbacks: check the list archives at www.adsm.org for the pros and cons of client compression. But not having enough disk pool space is one of the reasons TO use client compression.)
- Reschedule your clients a bit so that your data arrival is spread out more.
- Set MIGPROCESS=2 on your disk pool, so that when a migration IS triggered, you get two output tapes mounted and clear the pool out twice as fast.

Those are just some of the things you can do. But again, you don't have to; your backups are getting done as is!
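Two of the suggestions in the reply above (forcing a pre-backup migration down to 0, and MIGPROCESS=2) could be applied to the SERVER disk pool from this thread roughly like this. This is a sketch: the 74/50 values restored at the end match the thresholds shown in the Q STG output, but the timing and values are illustrative, not prescribed by the thread.

```
/* Run two parallel migration processes whenever migration triggers */
UPDate STGpool SERVER MIGPRocess=2

/* Before the nightly backup window: flush the disk pool completely... */
UPDate STGpool SERVER HIghmig=0 LOwmig=0

/* ...then restore the normal thresholds once it has drained */
UPDate STGpool SERVER HIghmig=74 LOwmig=50
```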
MediaW, Tape drive availability, Disk STGpool space and understanding what TSM is doing....
This is more a TSM internal-logic question than anything else. I am seeing times when a TSM server has more tapes mounted than would be necessary for administrative tasks like migration and backup of storage pools. When and how does this happen? The details: Looking at system queries for this particular instance, I can see that there is one migration task with an output tape volume in use, and a backup stgpool task waiting for a mount point in devclass 3590-E1A. (Devclass 3590-E1A has a mount limit of DRIVES, of which we have four (4).) So I am thinking that three (3) client tasks are, in fact, utilizing physical tape drives. Notice also that there are twenty-three (23) client tasks with MediaW as the session state. We have not begun sending client data direct to tape because of the limited number of tape drives available. To date, this performance enhancement has not been an issue. What is TSM doing? How can I better understand and provide the best services with the resources I have? Are there TSM classes that deal with this type of concept? Environment is a TSM 3.7.2 server on a 3466-C00 (AIX 4.3.2) with a 3494 library containing four (4) 3590-E1A drives. (Soon to be increased by 2 more 3590-E1A drives and 144GB of SSA disk.) Here I include the results of four commands: Query STG, Q PRocesses, Q Mounts, Q SEssions F=D. Thanks in advance for reviewing this long post... Tivoli Storage Manager Command Line Administrative Interface - Version 4, Release 1, Level 2.0 (C) Copyright IBM Corporation, 1990, 1999, All Rights Reserved.
Session established with server FSPHNSM1: AIX-RS/6000
Server Version 3, Release 7, Level 2.0
Server date/time: 03/04/2001 01:00:24  Last access: 03/04/2001 00:30:01

Storage Pool   Device Class  Est. Capacity (MB)  Pct Util  Pct Migr  High Mig Pct  Low Mig Pct  Next Storage Pool
ARCHIVE        DISK              81,370.0          48.8      48.3        74            50       ARCHIVE_TAPE
ARCHIVE_COPY   3590-E1A      18,071,904.7          39.7
ARCHIVE_TAPE   3590-E1A      17,506,379.0          40.9      47.0        90            70
DIR            DISK               9,908.0          21.3      21.3        90            70       DIR_TAPE
DIR_COPY       3590-E1A         200,000.0           0.7
DIR_TAPE       3590-E1A               0.0           0.0       0.0        90            70
DISKPOOL       DISK                   0.0           0.0       0.0        90            70
SERVER         DISK             250,777.0          80.6      79.8        74            50       SERVER_TAPE
SERVER_COPY    3590-E1A      23,524,586.7          34.7
SERVER_TAPE    3590-E1A      24,022,339.9          34.0      57.0        90            70
WORKSTN        DISK               9,231.0          60.5      60.5        90            50       WORKSTN_TAPE
WORKSTN_TAPE   3590-E1A       1,290,919.3           2.2       4.0        90            70

Process 255 - Migration: Disk Storage Pool SERVER, Moved Files: 241, Moved Bytes: 141,957,177,344, Unreadable Files: 0, Unreadable Bytes: 0. Current Physical File (bytes): 4,570,263,552. Current output volume: K20181.
Process 257 - Backup Storage Pool: Primary Pool SERVER, Copy Pool SERVER_COPY, Files Backed Up: 0, Bytes Backed Up: 0, Unreadable Files: 0, Unreadable Bytes: 0. Current Physical File (bytes): 24,576. Waiting for mount point in device class 3590-E1A (13 seconds).

ANR8330I 3590 volume K20020 is mounted R/W in drive 3590DRIVE4 (/dev/rmt4), status: IN USE.
ANR8330I 3590 volume K20181 is mounted R/W in drive 3590DRIVE2 (/dev/rmt2), status: IN USE.
ANR8330I 3590 volume K20065 is mounted R/W in drive 3590DRIVE1 (/dev/rmt1), status: IN USE.
ANR8330I 3590 volume K20314 is mounted R/W in drive 3590DRIVE3 (/dev/rmt3), status: IN USE.
ANR8334I 4 volumes found.

Q SEssions F=D column headers (the session rows themselves were truncated in the archive): Sess Number, Comm. Method, Sess State, Wait Time, Bytes Sent, Bytes Recvd, Sess Type, Platform, Client Name, Media Access Status, User Name, Date/Time First Data Sent.
Re: MediaW, Tape drive availability, Disk STGpool space and understanding what TSM is doing....