Re: MediaW, Tape drive availability, Disk STGpool space and understanding what TSM is doing....

2001-03-04 Thread Othonas Xixis

Hi John,

First, let me comment on your first question/observation: "I am seeing times that a
TSM server has more tapes mounted than would be necessary for Administrative
tasks like migration and backup of storage pools." Answer: by itself (without
migration settings, client schedules, admin schedules, etc.), TSM does not
initiate any jobs that mount tapes and copy data; everything is
controlled by the configuration layout.
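A quick way to confirm this on your own server, using standard admin commands
(nothing specific to your setup assumed):

   query schedule * type=administrative
   query event * type=administrative begindate=today
   query process
   query mount

Matching the mounted volumes against the running processes, and those against
the schedule events that fired, usually explains every tape in a drive.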

You might have a few issues in your installation:
a) It looks like your environment lacks sufficient scheduling.

 1) You need to start from your requirements, then evaluate your
 existing resources (libraries, drives, tapes, network, etc.),
 2) then prioritize the activities,
 3) then create schedules designed for your environment,
 4) then run them for 1 to 3 weeks and monitor your server very
 closely,
 5) then go back and make any necessary adjustments to your scheduling
 strategy.
 (You might have to repeat steps 3 to 5 a couple of times until you
 get the best-fit settings.)

b) Do you control the migration processes with administrative schedules? It
looks like you don't...
What you can do is create admin schedules that set the migration thresholds to
higher numbers like 89, 90, etc. while the drives are busy, and another set of
admin schedules that set the thresholds to zero when your four tape drives are
not busy, which starts the migration processes.
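For example, a minimal sketch of such a pair of schedules, using your SERVER
disk pool (the schedule names and times are illustrative; adjust them to your
own windows):

   define schedule start_mig type=administrative cmd="update stgpool SERVER highmig=0 lowmig=0" active=yes starttime=02:00 period=1 perunits=days
   define schedule hold_mig type=administrative cmd="update stgpool SERVER highmig=90 lowmig=50" active=yes starttime=16:00 period=1 perunits=days

HIGHMIG=0 forces migration to start immediately and LOWMIG=0 drains the pool
completely; the second schedule raises the thresholds again before the nightly
backup window so migration stays out of the way while the drives are busy.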

c) Prioritization. You need to decide which activities have higher priority,
and use the schedule times in conjunction with TSM's priority levels to
accomplish your goal.

d) In a normal/standard TSM environment, operations need to occur in a
sequence that satisfies your customized requirements. Below is a basic
sequence of events that works...
1) Run your backups (Incr/Arch/TDPs/etc.). From 4:00-7:00 pm to something
like 1:00-3:00 am.
(You can also have TDP database archive logs and backups running
throughout the whole day, but if you can control and influence them... try to
set a recovery timeline or next-day cutover point, especially if you are
using DRM and have offsite requirements and DR tests... "the works"...)
2) Activate your migration processes and empty your disk pools. From 2:00-3:00
am to 4:00-5:00 am.
3) Make copies (backup) of your tape storage pools (onsite/offsite). From
4:00-6:00 am to 7:00-10:00 am.
4) Make a database backup (offsite copy). Around 10:00 am.
5) Run your Disaster Recovery stuff (DRM)... if applicable.
6) Eject the DRM offsite tapes. Around 11:00 am.
7) Make another database backup (snapshot) to keep on site... if applicable.
Around 12:00 pm.
8) Expire inventory. Around 1:00 pm.
9) Start your "controlled" reclamation processes. Around 2:00 pm. If you
have too many storage pools, and you think that 3 to 5 hours is not enough for
your whole reclamation process, you might want to group your storage pools and
spread them across the week (1 group on Mondays, 1 group on Tuesdays, etc.).

This last step brings you to the 4:00 to 6:00 pm time frame, where your
nightly backups start; go back to step 1.
The above times are imaginary, but they work fine in most of our TSM
installations; again, the scheduling times depend on your local customized
backup and restore requirements. A sketch of a few of these steps as admin
schedules follows.
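As a rough sketch, steps 3, 4, 8, and 9 above could be implemented as admin
schedules like the ones below (the pool and device class names are taken from
your Q STG output; the times follow the imaginary timeline, so treat them as
placeholders):

   define schedule ba_server type=administrative cmd="backup stgpool SERVER_TAPE SERVER_COPY maxprocess=1" active=yes starttime=04:00 period=1 perunits=days
   define schedule ba_db type=administrative cmd="backup db devclass=3590-E1A type=full" active=yes starttime=10:00 period=1 perunits=days
   define schedule expire type=administrative cmd="expire inventory" active=yes starttime=13:00 period=1 perunits=days
   define schedule recl_mon type=administrative cmd="update stgpool SERVER_TAPE reclaim=60" active=yes starttime=14:00 dayofweek=monday

Dropping RECLAIM to 60 starts reclamation on any volume that is at least 60%
reclaimable; a matching schedule that sets it back to 100 a few hours later
keeps new reclamation from kicking off outside the window.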

e) Limited resources. At the end of the day... you might realize that you don't
have enough resources to accommodate your requirements. Then you have two
options: 1) go back and modify or rethink your business requirements, or 2)
add some extra $$$ to next year's budget and add more tape drives to your
library. Since you are now getting 2 more tape drives... and will have a
total of 6, it is a good opportunity to take some time and redesign
your TSM internal scheduling scheme. I always like to have an odd number of
tape drives, like 5 or 7 or 9... because backup storage pool processes
occupy 2 drives per process, so if you run 2 or 3 processes they will occupy
all your drives (2x2=4, 3x2=6)... and you will not have any drives available
for any other TSM activities or restore requests... just a thought.
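Until the new drives arrive, one way to guarantee a free drive on the
four-drive library is to cap the parallelism explicitly, e.g. (using your
SERVER pools; the number is the knob to turn):

   backup stgpool SERVER_TAPE SERVER_COPY maxprocess=1

A tape-to-tape BACKUP STGPOOL process needs one input and one output drive, so
MAXPROCESS=1 ties up two drives and leaves two free for migration, restores,
and client sessions.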

f) Network. The network is another big variable in your scheduling
environment: even with the best TSM structure, a poor network will still give
you problems. I hope that you have dealt with the network issue.

g) TSM and operating system tuning. From what I see, some of the NT sessions
ran for a long time. It is hard to identify from the report below alone, but
you might have some tuning issues in your environment, provided that your
network is fast enough.

Finally, I don't think that there are classes that deal with customized
requirements and scheduling issues; however, you might find consulting firms
that can come in for a day or so and help you out.

Cheers.

Othonas


"Talafous, John G." wrote:

> This is mor

MediaW, Tape drive availability, Disk STGpool space and understanding what TSM is doing....

2001-03-04 Thread Talafous, John G.

This is more a TSM internal logic question than anything else. I am seeing
times that a TSM server has more tapes mounted than would be necessary for
Administrative tasks like migration and backup of storage pools. When and
how does this happen?

The details... Looking at system queries for this particular instance, I
can see that there is one migration task with an output tape volume in use
and a backup stgpool task waiting for a mount point in devclass 3590-E1A.
(Devclass 3590-E1A has a mount limit of DRIVES, of which we have four (4).)
So, I am thinking that three (3) client tasks are, in fact, utilizing physical
tape drives. Notice also that there are twenty-three (23) client tasks with
MediaW as the session state. We have not begun sending client data direct to
tape because of the limited number of tape drives available. To date, this
performance enhancement has not been an issue.

What is TSM doing? How can I better understand and provide the best services
with the resources I have? Are there TSM classes that deal with this type of
concept?

Environment is TSM 3.7.2 server on a 3466-C00 (AIX 4.3.2) with a 3494
library containing four (4) 3590-E1A drives. (Soon to be increased by 2 more
3590-E1A drives and 144GB of SSA disk.)

Here I include the results of four commands: Query STG, Q PRocess, Q
MOunt, and Q SEssion F=D.

Thanks in advance for reviewing this long post...


Tivoli Storage Manager
Command Line Administrative Interface - Version 4, Release 1, Level 2.0
(C) Copyright IBM Corporation, 1990, 1999, All Rights Reserved.

Session established with server FSPHNSM1: AIX-RS/6000
  Server Version 3, Release 7, Level 2.0
  Server date/time: 03/04/2001 01:00:24  Last access: 03/04/2001 00:30:01


Storage       Device       Estimated      Pct   Pct   High  Low   Next
Pool Name     Class Name   Capacity (MB)  Util  Migr  Mig   Mig   Storage Pool
                                                       Pct   Pct
------------  -----------  -------------  ----  ----  ----  ----  ------------
ARCHIVE       DISK              81,370.0  48.8  48.3    74    50  ARCHIVE_TAPE
ARCHIVE_COPY  3590-E1A      18,071,904.7  39.7
ARCHIVE_TAPE  3590-E1A      17,506,379.0  40.9  47.0    90    70
DIR           DISK               9,908.0  21.3  21.3    90    70  DIR_TAPE
DIR_COPY      3590-E1A         200,000.0   0.7
DIR_TAPE      3590-E1A               0.0   0.0   0.0    90    70
DISKPOOL      DISK                   0.0   0.0   0.0    90    70
SERVER        DISK             250,777.0  80.6  79.8    74    50  SERVER_TAPE
SERVER_COPY   3590-E1A      23,524,586.7  34.7
SERVER_TAPE   3590-E1A      24,022,339.9  34.0  57.0    90    70
WORKSTN       DISK               9,231.0  60.5  60.5    90    50  WORKSTN_TAPE
WORKSTN_TAPE  3590-E1A       1,290,919.3   2.2   4.0    90    70

Process  Process              Status
 Number  Description
-------  -------------------  ----------------------------------------------
    255  Migration            Disk Storage Pool SERVER, Moved Files: 241,
                              Moved Bytes: 141,957,177,344, Unreadable
                              Files: 0, Unreadable Bytes: 0. Current
                              Physical File (bytes): 4,570,263,552
                              Current output volume: K20181.

    257  Backup Storage Pool  Primary Pool SERVER, Copy Pool SERVER_COPY,
                              Files Backed Up: 0, Bytes Backed Up: 0,
                              Unreadable Files: 0, Unreadable Bytes: 0.
                              Current Physical File (bytes): 24,576
                              Waiting for mount point in device class
                              3590-E1A (13 seconds).

ANR8330I 3590 volume K20020 is mounted R/W in drive 3590DRIVE4 (/dev/rmt4),
status: IN USE.
ANR8330I 3590 volume K20181 is mounted R/W in drive 3590DRIVE2 (/dev/rmt2),
status: IN USE.
ANR8330I 3590 volume K20065 is mounted R/W in drive 3590DRIVE1 (/dev/rmt1),
status: IN USE.
ANR8330I 3590 volume K20314 is mounted R/W in drive 3590DRIVE3 (/dev/rmt3),
status: IN USE.
ANR8334I 4 volumes found.

  Sess  Comm.   Sess    Wait  Bytes  Bytes  Sess  Platform  Client Name
Number  Method  State   Time  Sent   Recvd  Type
        (detailed format also shows Media Access Status, User Name, and
        Date/Time First Data Sent)
------  ------  ------  ----  -----  -----  ----  --------  ------------
-- -- -- -- --- --- -