Re: Keeping an handle on client systems' large drives

2002-06-14 Thread Dan Foster

Hot Diggety! Seay, Paul was rumored to have written:
 Ask them where they were on 9-11-2001.  Are they totally brain dead?

Ahhh, so that's what you referred to in passing in the other post.

That's all right, and understandable.

I have a first-rate appreciation of this. If you'll allow me to indulge
briefly in a tangentially related (though not entirely off-topic) issue on
this list, just once...

I used to be a VMS admin. Best, most robust OS that I ever worked with -
probably also true of the IBM mainframes, but I didn't work much with them,
alas. (A little OS/400, DOS/VSE, and one or two other related OSes.)

Anyway, come post-9/11, a *lot* of financial firms were in a world of
hurt. The ones who had planned and re-tested, over and over again each year,
at an alternate site a good distance away from NYC were able to reopen
for business only a few days later. Many were based in NJ or about an
hour west/north of NYC... one was even based not too far from home, their
DR site being about 4-5 hours northwest of NYC.

Around this time, I heard that Compaq (the company that bought out DEC)
was making a lot of frantic calls all around the country seeking out high
end machines such as the AlphaServer 8400s and VAX 7000s... models that had
been discontinued perhaps 10 years before, because a lot of customers were
suddenly calling in for warranty replacements (under their expensive
support contracts) in NYC and DC -- you can guess what kind of customer
it was in DC. How desperate was Compaq? They were calling up even third
level resellers of used equipment that they would normally never ever think
of talking to.

Compaq was in a nasty hole, because they had run out of set-aside reserve
spares. The fab plants had *long* since shut down... they couldn't just take
the original plans and re-fab, since the engineers were no longer there...
I'm not sure how they eventually resolved that... probably offered newer
machines to customers and provided migration assistance at Compaq's cost, is
my guess.

But what the bean counters don't realize, unfortunately, is that it doesn't
take a catastrophic national event to hurt the business bottom line. It can
be all sorts of more 'mundane' (albeit not very common) events, such as the
train that burned in a Baltimore tunnel and closed part of downtown near
Oriole Park at Camden Yards. My company (which also used to own a telco) was
personally affected by a homeless man burning something in an abandoned
railroad tunnel; it melted fiber optics and took out an OC-12 to the area
for 12+ hours, with a nice number of servers based out of here.

It doesn't have to be a corporation, either, for a nasty disaster to mean
bad things for the bottom line. I am very well reminded of a colossal
failure at an academic institution almost a decade ago: a chain of events
ultimately resulting in the failure of a critical drive in a RAID-5 array,
and the tapes weren't really usable for recovery... which they found out
the hard way. An entire semester of classwork was effectively disrupted,
with much data lost, before they were finally able to convince DEC to send
out their very best people, who recovered about 80% of the RAID-5 array
through some custom work. So many classes, projects, research papers, etc.
were affected that it simply isn't funny. Same place where, if the IBM
mainframe ever went down, school was closed for the day. (That happened
only once ever, to the best of my knowledge.)

...and that is truly unfortunate: the people who are actually tasked to
make things happen, like us, understand and appreciate this, whereas others
higher up may not share the same view, knowledge, and experience.

In a D/R scenario, it also behooves you to know your power sources, how
they kick in, at what levels, how fast/when, evacuation plans, how to
configure PBXes, to have emergency equipment handy (e.g. flashlights), and a
million other details: hardware that can be quickly hooked up/activated, a
written step-by-step plan nearby, software CDs handy if needed, dry runs
done, backups/restores/app operation verified, and all of this tested once
or twice a year depending on level of need and impact, etc.

Still, I resolve to do my best to do whatever I can realistically do. :)

With that said, I now return you to the normal *SM discussions. ;)
(with the reason for copy stgpools driven home ;) )

-Dan Foster
IP Systems Engineering (IPSE)
Global Crossing Telecommunications



Re: Bad performance... again

2002-06-14 Thread Michael Benjamin

Thanks for that David,

To increase the cache-hit percentage you will need to shut down TSM.

Back up and edit BUFPOOLSIZE in dsmserv.opt, then restart the TSM server.
It's probably worth going through an unloaddb and reload of the database as
well, to improve performance. We're looking at doing this as a quarterly
procedure.

BUFPOOLSIZE refers to virtual memory; the default is probably 4096. There is
a table in the Admin Guide which recommends increases in BUFPOOLSIZE
according to system memory. I'd recommend being a bit conservative: grow it
a bit at a time, performing a q options and q db f=d to see what's going on
with BUFPOOLSIZE in relation to cache hits. You obviously don't want to use
up virtual memory at peak load times.
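As a minimal sketch of the back-up-and-edit step, run here against a
throwaway copy of dsmserv.opt (the path and both values are examples only;
on a live server you would edit the real dsmserv.opt while TSM is down):

```shell
# Demonstrate the backup-then-edit step on a scratch copy of dsmserv.opt.
OPT=/tmp/dsmserv.opt.demo
printf 'COMMMETHOD TCPIP\nBUFPOOLSIZE 4096\n' > "$OPT"

cp "$OPT" "$OPT.bak"                                # keep a backup of the file
sed -i 's/^BUFPOOLSIZE .*/BUFPOOLSIZE 8192/' "$OPT" # grow it one step at a time
grep '^BUFPOOLSIZE' "$OPT"
```

After restarting the server, q db f=d shows whether the cache hit percentage
improved; growing in modest steps avoids overcommitting virtual memory.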

Mike.

 -Original Message-
 From: David Longo [SMTP:[EMAIL PROTECTED]]
 Sent: Friday, June 14, 2002 9:15 AM
 To:   [EMAIL PROTECTED]
 Subject:  Re: Bad performance... again

 Well, I'll take a few shots.

 1.  Network - Have you tried some FTP or similar tests at OS level
 between TSM server and clients to see if Network performance is o.k.?

 2.  Your Atape driver is WAY behind.

 3.  Drive microcode on 3584 is behind (Not as far as Atape though).

 4. There were some noticeable performance problems on AIX (I think
 somewhere between ML08 and 09.  I think they were all fixed at 09).

 5.  On TSM server, what is the cache hit percent output of
 q db f=d?  If it is much less that 98%, the cache needs to be
 increased.  This can affect a lot of TSM server ops.

 5.  You didn't mention how long this server has been in operation
  - was it working fine at one point and went downhill?  Also what
 Disk you have on TSM server and how setup?

 David Longo

  [EMAIL PROTECTED] 06/12/02 10:50AM 
 Hi everybody,


I know this is a subject that comes up very often, and that various
 answers were already given, but, after searching through the list archives,
 I am still not totally sure of what I should do:

I have a TSM Server that does its backups rather slowly. First, I
 thought of a network problem, but it is a dedicated backup network running
 at 1Gb/s and I only get backups at 10GB/hour. And the internal server
 operations (reclamation, backup stgpool) are also slow. Right now, I am
 looking at the console at a backup stg process which has been running for
 almost 2 hours and has backed up only 38GB. It is a new pool, so there is
 no time spent searching for new files, and all the data came from one volume.

 My setup is:

 TSM Server 4.2.0.0  (Client wants to upgrade to 5.1)
 AIX 4.3.3 ML9 on an F80 with 2 CPUs
 ATAPE 5.4.2.0

 Storage is:

 IBM3584 with 6 IBM LTO drives. Microcode level is 16E0

 The site belongs to a customer who doesn't like applying patches very
 much. Should I try to convince him to upgrade TSM/ATAPE/Microcode? Or is
 there another issue?



 Thank you in advance for your attention

 Paul van Dongen



 MMS health-first.org made the following
  annotations on 06/13/02 21:31:29
 --
 
 This message is for the named person's use only.  It may contain
 confidential, proprietary, or legally privileged information.  No
 confidentiality or privilege is waived or lost by any mistransmission.  If
 you receive this message in error, please immediately delete it and all
 copies of it from your system, destroy any hard copies of it, and notify
 the sender.  You must not, directly or indirectly, use, disclose,
 distribute, print, or copy any part of this message if you are not the
 intended recipient.  Health First reserves the right to monitor all e-mail
 communications through its networks.  Any views or opinions expressed in
 this message are solely those of the individual sender, except (1) where
 the message states such views or opinions are on behalf of a particular
 entity;  and (2) the sender is authorized by the entity to give such views
 or opinions.

 ==
 

**
Bunnings Legal Disclaimer:

1)  This document is confidential and may contain legally privileged
information. If you are not the intended recipient, you must not   
 disclose or use the information contained in 
it. If you have
received this e-mail in error, please notify us immediately by
return email and delete the document.

2)  All e-mails sent to and sent from Bunnings Building Supplies are
scanned for content. Any material deemed to contain inappropriate
subject matter will be reported to the e-mail administrator of
all parties concerned.

**



Re: Keeping an handle on client systems' large drives

2002-06-14 Thread Seay, Paul

Actually, our company policy is if you do not put it on a LAN drive share,
it does not get saved, period.  A few of us are trying out the desktop
approach to see if it works for laptops.  So far, so good.

Paul D. Seay, Jr.
Technical Specialist
Naptheon, INC
757-688-8180


-Original Message-
From: Dan Foster [mailto:[EMAIL PROTECTED]]
Sent: Friday, June 14, 2002 12:27 AM
To: [EMAIL PROTECTED]
Subject: Re: Keeping an handle on client systems' large drives


Hot Diggety! Seay, Paul was rumored to have written:

 What you have to do is revisit what you are saving and put in
 exclude.dirs for all directories that contain software that can be
 rebuilt from a common desktop image (hard drive replacement).  Have
 your users save their documents in specific folders and back up only
 those.  Then they just have to customize their desktop, configure their
 node name in the dsm.opt, and restore the stuff that is backed up.

 This is the trade-off.

Makes sense. Basic education + cost saving vs expense from a brute force
approach. The trick is to have education that works well for a wide range of
users, with differing expertise, and to also clearly communicate
expectations (if you save anywhere else, you won't get it back!).

Now that sounds like I also have to train them not to just blindly click
whenever an application offers them a default directory (often within the
app area) to store documents in.

Perhaps a small data area carved out on the hard drive, say a 5 GB
partition for user documents as Z: or whatever, and similarly for other
platforms (/userdocs/user as a symlink from ~user/docs or whatever), to
provide a consistent and easy-to-use area for the end user, yet a
predictable area for mass-deployed *SM configurations to use.

I'm sure that the IT shop can help out significantly if they're able to
preconfigure these settings within each application before users get their
hands on the machine. The hard part is when not every place has that luxury,
especially at smaller places where end users may be configuring everything
on their own.

Anyway, the overall education/training approach is definitely cheaper than
having to back up everything on the HD, I do agree. ;)
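Paul's exclude.dir suggestion might look like the following in a client
dsm.opt. This writes to a scratch file for illustration; the directory names
are just common examples of rebuildable software areas, not a recommended
list:

```shell
# Sample exclude.dir rules, written to a scratch options file.
cat > /tmp/dsm.opt.demo <<'EOF'
exclude.dir "*:\Program Files"
exclude.dir *:\WINNT
exclude.dir *:\Temp
EOF
grep -c 'exclude.dir' /tmp/dsm.opt.demo
```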

-Dan Foster
IP Systems Engineering (IPSE)
Global Crossing Telecommunications



Re: TSM scheduler falling - urgent

2002-06-14 Thread Rick Harderwijk

Hi Tomas,

I just read your post, and it seems to me we are experiencing the same
problem here. We also run W2K server /SP2 and client 4.2.1.20 and are
experiencing the same problems with the scheduler service suddenly stopping
(I put it on auto-restart), messing up my backup.

Did you ever solve the problem, and how did you do it? I read something
about TCP/IP 100Mbit/10Mbit being the culprit, but I have my doubts about
that being the problem here, since the other clients do not experience it
(though the other clients have much less data on them...).

Kind regards,

Rick Harderwijk
Systems Administrator
Factotum Media BV
Oosterengweg 44
1212 CN Hilversum
P.O. Box 335
1200 AH Hilversum
The Netherlands
Tel: +31-35-6881166
Fax: +31-35-6881199
E: [EMAIL PROTECTED]




-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]] On Behalf Of Tomáš
Hrouda
Sent: 9 April 2002 8:08
To: [EMAIL PROTECTED]
Subject: TSM scheduler falling - urgent


Hi all,

W2K server, SP2, TSM client 4.2.1.20
TSM client scheduler crashes at various times during backup for unknown
reasons, sometimes after 20 minutes, sometimes after 10 hours.

Here is the last part of dsmsched.log:

08-04-2002 20:23:54 Normal File--   344
c$\WINNT\Tasks\ServerCheck.job [Sent]
08-04-2002 20:23:54 Normal File--   254
c$\WINNT\Tasks\settime.job [Sent]
08-04-2002 20:23:55 Preparing System Object - File Replication
ServiceNormal File--15,012 c$\adsm.sys\COMPDB\COMPDBFILE [Sent]

08-04-2002 20:23:55 Successful incremental backup of 'COM+ Database'

08-04-2002 20:24:22 Normal File-- 3,080,112
c$\adsm.sys\EventLog\Application.evt  ** Unsuccessful **
08-04-2002 20:24:22 ANS1809E Session is lost; initializing session reopen
procedure.
08-04-2002 20:24:23 Normal File--   259,904,299 c$\WINNT\Profiles\All
Users\Documents\DrWatson\user.dmp  ** Unsuccessful **
08-04-2002 20:24:23 ANS1809E Session is lost; initializing session reopen
procedure.
08-04-2002 20:24:38 ... successful
08-04-2002 20:24:39 ... successful
08-04-2002 20:25:14 Retry # 1  Normal File-- 3,080,112
c$\adsm.sys\EventLog\Application.evt [Sent]
08-04-2002 20:25:15 Normal File--   347,056
c$\adsm.sys\EventLog\Directory Service.evt [Sent]
08-04-2002 20:25:15 Normal File--   178,336
c$\adsm.sys\EventLog\File Replication Service.evt [Sent]
08-04-2002 20:25:59 Normal File--
3,080,188c$\adsm.sys\EventLog\Security.evt [Sent]
08-04-2002 20:27:10 Normal File-- 2,757,584
c$\adsm.sys\EventLog\System.evt [Sent]
 and all is over... the backup crashed after 27 minutes

Here are the contents of dsmerror.log:

08-04-2002 16:56:53 Trying port number 56582
08-04-2002 16:56:53 Obtained new port number on which to listen.
08-04-2002 20:09:53 ANS1228E Sending of object
'c$\IBMDIR\Director\data\esntevt.dat' failed
08-04-2002 20:09:53 ANS4987E Error processing 'c$\IBMDIR\Director\data
\esntevt.dat': the object is in use by another process
08-04-2002 20:11:05 ANS1809E Session is lost; initializing session reopen
procedure.
08-04-2002 20:11:21 ANS1810E TSM session has been reestablished.
08-04-2002 20:11:41 ANS1809E Session is lost; initializing session reopen
procedure.
08-04-2002 20:21:47 sessSendVerb: Error sending Verb, rc: -50
08-04-2002 20:21:47 ANS1809E Session is lost; initializing session reopen
procedure.
08-04-2002 20:21:48 ANS1809E Session is lost; initializing session reopen
procedure.
08-04-2002 20:22:03 ANS1810E TSM session has been reestablished.
08-04-2002 20:24:22 sessSendVerb: Error sending Verb, rc: -50
08-04-2002 20:24:22 sessSendVerb: Error sending Verb, rc: -50
08-04-2002 20:24:22 ANS1809E Session is lost; initializing session reopen
procedure.
08-04-2002 20:24:22 ANS1809E Session is lost; initializing session reopen
procedure.
08-04-2002 20:24:22 ANS1809E Session is lost; initializing session reopen
procedure.
08-04-2002 20:24:23 ANS1809E Session is lost; initializing session reopen
procedure.
08-04-2002 20:24:37 ANS1810E TSM session has been reestablished.
08-04-2002 20:24:38 ANS1810E TSM session has been reestablished.
 that's all

I also found these entries in Application and System Event Log:

APPLICATION EVENT LOG ***
Event Type: Information
Event Source:   AdsmClientService
Event Category: None
Event ID:   4097
Date:   8.4.2002
Time:   16:56:34
User:   NT AUTHORITY\SYSTEM
Computer:   NT1
Description:
The description for Event ID ( 4097 ) in Source ( AdsmClientService ) cannot
be found. The local computer may not have the necessary registry information
or message DLL files to display messages from a remote computer. The
following information is part of the event: TSM_SCHEDULER halted..

or
Event Type: Information
Event Source:   AdsmClientService
Event Category: None
Event ID:   4103
Date:   8.4.2002
Time:   16:56:37
User:   NT AUTHORITY\SYSTEM
Computer:   NT1
Description:
The description for Event 

Re: Technical question.

2002-06-14 Thread Gianluca Mariani1

this is from the TSM 4.2 Technical Guide redbook (SG24-6277-00); it should
cover your case, if you have W2K TSM servers. Otherwise refer to the library
sharing paragraph in the same redbook. The problem is, this way you can't do
Disaster Recovery; if you need two different physical environments, things
get much more complicated.


7.4 SCSI tape failover (W2K only)

Tivoli Storage Manager supports Microsoft Cluster Services (MSCS) in a
Microsoft
Datacenter Server operating environment. However, MSCS does not support the
failover of
tape devices. When correctly set up, TSM servers in a cluster can now
support SCSI tape
failover over a shared SCSI bus.
The server cluster uses and is limited to two computers. The computers must
be physically
connected to each other and must exclusively share a SCSI bus to which the
tape devices
are attached. When failover occurs, the remaining TSM server issues a SCSI
bus reset
during initialization, which allows the server to acquire the tape devices.
See Tivoli Storage
Manager for NT Administrator's Guide, GC35-0410-01.

and then it goes on.

Cordiali saluti
Gianluca Mariani
 Tech Support SSD
Via Sciangai 53, Roma
 phones : +39(0)659664598
   +393351270554 (mobile)
[EMAIL PROTECTED]



Re: 3590 pricing (used)

2002-06-14 Thread Zlatko Krastev/ACIT

-- It'll be weird when my 3494 is my low-latency, low-capacity storage
format.

Don't be afraid of this. Magstar is still not dead. Maybe you missed it
but last month's IBM news article was about 1 TB in 3590 not in LTO
cartridge :-)

Zlatko Krastev
IT Consultant



Please respond to ADSM: Dist Stor Manager [EMAIL PROTECTED]
Sent by:ADSM: Dist Stor Manager [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
cc:

Subject:Re: 3590 pricing (used)

= On Wed, 12 Jun 2002 17:06:10 -0400, Steve Schaub
[EMAIL PROTECTED] said:

 Has anyone purchased used 3590 equipment recently and would be willing
to
 share a reasonable ballpark dollar amount?  We are running out of room
in
 our 3494 and would like to start converting our S/390 over from 3490 to
 3590.  What would be a good price for an A60 controller and four 3590E1A
or
 B1a (escon) drives?

 Alternately, if I sacrificed my (4) 3590E1A drives from TSM to the
 mainframe and bought a separate library for TSM, what would it take to
 replace what I have (277-J, 218-K of which 119-K are offsite)?

Whatever you do, don't bother buying a new 3494.  Just expand what you've
got.  If you think about it, there's no way buying new can possibly save you
cash.

If you're willing to take the hit in seek time, then you could go for a
dumber
library: a 3584 with LTO drives will most definitely be fewer dollars per
TB,
fewer square feet of floor per TB, etc.  and the LTO physical standard has
absolutely tremendous upgrade paths.

But if you're sticking with the 3590s for a bit longer (which was our
call)
then just toss the new drives in the existing 3494.

The used market is pretty good.  Lots of folks who don't need the access
speed
are going LTO, so some 3590s are hitting the market.  It'll be weird when
my 3494 is my low-latency, low-capacity storage format. :)

- Allen S. Rout



Re: Execute OS or TSM command on completion of client schedule

2002-06-14 Thread David E Ehresman

I'm pretty sure you can do this with a Servergraph schedule.  Talk to
the good folks at Servergraph to find out for sure.
http://www.servergraph.com

David Ehresman
University of Louisville



dsmaccnt.log location

2002-06-14 Thread David E Ehresman

Environment:  TSM 5.1.0 running on AIX 5.1

In my /etc/profile, I have

  export DSMSERV_ACCOUNTING_DIR=/var/adm/tsm

which should cause dsmaccnt.log to be created in /var/adm/tsm. But it
continues to be created in the server install path,
/usr/tivoli/tsm/server/bin. Any ideas what else I need to do to get
dsmaccnt.log created in /var/adm/tsm?
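One guess, offered as an assumption rather than a confirmed diagnosis:
/etc/profile is only read by login shells, so a dsmserv launched from
inittab or an rc script never sees the variable. Exporting it in the start
script itself, immediately before launching the server, puts it in the
daemon's environment:

```shell
# In the script that actually starts the server (not in /etc/profile):
export DSMSERV_ACCOUNTING_DIR=/var/adm/tsm
echo "accounting dir: ${DSMSERV_ACCOUNTING_DIR}"
# exec /usr/tivoli/tsm/server/bin/dsmserv quiet   # then start dsmserv as usual
```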

David



Re: Keeping an handle on client systems' large drives

2002-06-14 Thread Mark Stapleton

From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]] On Behalf Of
Dan Foster
 Not every site is lucky enough to be able to convince the beancounters
 the merits of having a backup system that keeps up with the needs of
 the end users, even if it means one has to explain doomsday predictions
 on the business bottom line -- they invariably hear that then say Oh,
 pshaw, you're just exaggerating because you want money It sucks
 to be the one that's right ;) And the ones who warns well before a
 nasty event occurs may also be the first one to be fired out of spite
 after something happens and gets the blame for not having prevented it.

There is only one thing that will convince the beancounters that backup
resources must be kept to adequate levels:

one bad day

Put your objections in email, send that email to those who matter, and
*keep* *a* *copy*. Gently (but regularly) remind the powers-that-be that
your backup resources are inadequate.

In the meantime, aggressively filter what is being backed up. An
increasingly large amount of data is going to files with extensions like
.nrg, .wmf, .mp3, .rm, and .gho (my current unfavorite). Don't back 'em up.
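Mark's extension filtering can be expressed with client exclude options; a
sketch using a scratch file (TSM's *:\...\* wildcard matches a pattern on
any drive, in any directory):

```shell
# Sample exclude rules for the space-hungry extensions named above.
cat > /tmp/dsm.opt.excl.demo <<'EOF'
exclude *:\...\*.mp3
exclude *:\...\*.nrg
exclude *:\...\*.rm
exclude *:\...\*.gho
EOF
grep -c '^exclude' /tmp/dsm.opt.excl.demo
```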

--
Mark Stapleton ([EMAIL PROTECTED])
Certified TSM consultant
Certified AIX system engineer
MCSE



Re: ADSM 3.1 to 4.2.1 Migration

2002-06-14 Thread Zlatko Krastev/ACIT

Marc,

if you can afford a little bit more time I would recommend you to test it
first to ensure it will run smoothly:
1. Get another box capable of running AIX 4.2.1 (which can be hard these
days :-) and put ADSM 3.1 on it.
2. Restore your TSM DB to this new box. Just the DB; there is no need to
attach any storage devices to it. And do not forget to disable schedules
in dsmserv.opt - your test server could otherwise try to contact several
nodes in prompted mode.
3. Perform OS and TSM upgrade.
4. Backup upgraded DB and restore it on new production server.
If all this completes successfully you can be sure real upgrade+migration
would be fine. And you will not be in a hurry.
There are not two ways to upgrade - I am sure you cannot put AIX 4.2.1 on
your brand new server :-) Yes, you could put ADSM 3.1 on AIX 4.3 and then
upgrade to TSM, but it is better to perform the upgrade on the old box and
only restore the DB on the new server. If you are not confident in the
success of the migration, perform the upgrade the same way - do 1, 2, 3 and
4. Thus in case of problem/failure you can stay with your old server and try
again next day/week/month.
If this insurance is still not enough - sign a contract with a consultant
:)

Zlatko Krastev
IT Consultant




Please respond to ADSM: Dist Stor Manager [EMAIL PROTECTED]
Sent by:ADSM: Dist Stor Manager [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
cc:

Subject:ADSM 3.1 to 4.2.1 Migration

Hello Again,

Since we are getting a new library we are also getting new server hardware
and going to TSM 4.2.1 (5 will be later).  Therefore, we have a 2 way
upgrade (old software to new software, old server to new server).

I gave TSM support a call and asked what their recommended procedure was
and I found out that they really don't have any, but the tech support guy
who responded gave me a procedure that he says has worked for others.

In our case, we need to upgrade the OS on the old server (still at AIX
4.2.1) to 4.3.3, do an update install of TSM on top of what we have now, and
export the database, after it has been transformed by the update install, to
tape.

Take the export tape and import it into the new server (along with the old
tape libraries).  Then migrate the data from the old library to the new
library.

Is this what others have done?  I have seen snippets of other things being
done,  but usually it appears that server hardware remained the same.

TIA for all of your comments.

Marc

===
Marc D. Taylor
Research Programmer
Beckman Institute Systems Services
Room 1714
Office:  217-244-7373, Fax:  217-333-8206
[EMAIL PROTECTED]
http://biss.beckman.uiuc.edu



Re: Bad performance... again

2002-06-14 Thread David Longo

No, you don't need to shutdown TSM to change this.
It can be dynamically changed with the SETOPT command!
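For reference, the dynamic route looks roughly like this from an admin
command line (the credentials are placeholders and the value, in kilobytes,
is only an example; this is not runnable without a live server):

```shell
# From an administrative client session; SETOPT takes effect immediately.
dsmadmc -id=admin -password=secret "setopt bufpoolsize 32768"
```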

David Longo

 [EMAIL PROTECTED] 06/14/02 01:35AM 
Thanks for that David,

To increase the cache-hit percentage you will need to shutdown TSM.

Backup and edit BUFPOOLSIZE in dsmserv.opt and restart the TSM server.
It's probably worth going through an unloaddb and reload of the database
also to
improve performance. We're looking at doing this as a quarterly procedure.

BUFPOOLSIZE refers to virtual memory, default is probably 4096. There is a
table
in the Admin Guide which recommends increases in BUFPOOLSIZE according to
system memory. I'd recommend being a bit conservative and grow it a bit at a
time
performing a q options and q db f=d to see what's going on with
BUFPOOLSIZE in
relation to cache-hits. You obviously don't want to use up virtual-memory at
peak load
times.

Mike.

 -Original Message-
 From: David Longo [SMTP:[EMAIL PROTECTED]] 
 Sent: Friday, June 14, 2002 9:15 AM
 To:   [EMAIL PROTECTED] 
 Subject:  Re: Bad performance... again

 Well, I'll take a few shots.

 1.  Network - Have you tried some FTP or similar tests at OS level
 between TSM server and clients to see if Network performance is o.k.?

 2.  Your Atape driver is WAY behind.

 3.  Drive microcode on 3584 is behind (Not as far as Atape though).

 4. There were some noticeable performance problems on AIX (I think
 somewhere between ML08 and 09.  I think they were all fixed at 09).

 5.  On TSM server, what is the cache hit percent output of
 q db f=d?  If it is much less that 98%, the cache needs to be
 increased.  This can effect a lot of TSM server ops.

 5.  You didn't mention how long this server has been in operation
  - was it working fine at one point and went downhill?  Also what
 Disk you have on TSM server and how setup?

 David Longo

  [EMAIL PROTECTED] 06/12/02 10:50AM 
 Hi everybody,


I know this is a subject that comes very often, and that various
 answers
 were already give, but, after searching through the list archives, I am
 still not totally sure of what I should do:

I have a TSM Server that does his backups not quite fast. First, I
 thought of a network problem, but it is a dedicated backup network running
 at 1Gb/s and I only get backups at 10GB/hour. And, the internal server
 operations (reclamation, backup stgpool) are also slow. Right now, I am
 looking at the console a backup stg process which is running for almost 2
 hours and has backed up only 38GB. It is a new pool, so there is no time
 spent searching the new files, and all the data came from one volume.

 My setup is:

 TSM Server 4.2.0.0  (Client wants to upgrade to 5.1)
 AIX 4.3.3 ML9 on an F80 with 2 CPUs
 ATAPE 5.4.2.0

 Storage is:

 IBM3584 with 6 IBM LTO drives. Microcode level is 16E0

 The site belongs to a customer who doesn t like very much applying
 patches.
 Should I try to convince him to upgrade TSM/ATAPE/Microcode? Or is there
 anther issue?



 Thank you in advance for your attencion

 Paul van Dongen




Re: Keeping an handle on client systems' large drives

2002-06-14 Thread Kai Hintze

Microsoft Policy Editor.

I hate it personally, because I do know what I am doing, why, and where, but
it does force the default data directories for the great unwashed to be on
the data server. It takes a conscious (and annoying) effort to save
something on your local drive.

- Kai.

-Original Message-
From: Dan Foster
To: [EMAIL PROTECTED]
Sent: 6/13/02 9:24 PM
Subject: Keeping an handle on client systems' large drives

I've always been curious about something.

How do you keep a handle on the fact that commodity PC storage is
growing at a far faster rate than tape capacity is?

For example, if I had a small LAN of about 300 PCs -- let's say,
an academic or corporate departmental LAN environment... each
has at least a 40 GB HD, and probably a fair amount of apps and files
on them. In the stores, I see drives up to 160 GB, with even larger
ones on the way!

So let's say, an average of 25 GB utilization per system... a single
full backup would be about 7.5 TB, which is quite a few tapes ;)
Not everybody is using LTO or higher capacity.

So do those sites rely purely on incrementals to save them? Or on some
site-specific policy, such as tailoring backups to exclude (let's say)
C:\Program Files, or some such...? Just wondering.
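The arithmetic above, made explicit; the per-cartridge capacity is an
assumed LTO-1-era figure (about 100 GB native), purely for illustration:

```shell
# Reproduce the sizing estimate from the post.
PCS=300; AVG_GB=25; TAPE_GB=100         # 100 GB/cartridge is an assumption
TOTAL_GB=$((PCS * AVG_GB))              # 300 * 25 = 7500 GB, i.e. 7.5 TB
echo "full backup: ${TOTAL_GB} GB, cartridges: $((TOTAL_GB / TAPE_GB))"
```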

Not every site is lucky enough to be able to convince the beancounters of
the merits of having a backup system that keeps up with the needs of the
end users, even if it means one has to explain doomsday predictions about
the business bottom line -- they invariably hear that and then say Oh,
pshaw, you're just exaggerating because you want money. It sucks
to be the one that's right ;) And the ones who warn well before a
nasty event occurs may also be the first to be fired out of spite
after something happens, and get the blame for not having prevented it.

-Dan Foster
IP Systems Engineering (IPSE)
Global Crossing Telecommunications



DB backups

2002-06-14 Thread Jolliff, Dale

Is anyone doing db backups to disk using devclass type FILE?

I know it's not necessarily a good thing, but right now we have a db that
is large enough that the log can fill before a full db backup completes.

Before I set this up, I'm wondering if anyone else is doing this.
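For what it's worth, the setup being considered might be sketched as a
dsmadmc macro; the device class name, directory, and sizes below are
placeholders, untested against a live server:

```shell
# Write the two admin commands to a macro file and display it.
cat > /tmp/dbback.mac <<'EOF'
define devclass dbfile devtype=file directory=/tsmdb mountlimit=2 maxcapacity=2G
backup db devclass=dbfile type=full
EOF
cat /tmp/dbback.mac
```

It would then be run with dsmadmc's macro command from an admin session.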



Re: Bad performance... again

2002-06-14 Thread Mark Brown

Hello,

The BUFPOOLSIZE can only be 1/2 the real memory you are using. If you stop
the server and set too high a number for the buffer pool, then your server
will generate an error and not start. It's OK though; just make the needed
adjustment and start your server again.
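Mark's half-of-real-memory cap, worked through for a hypothetical 1 GB
machine (BUFPOOLSIZE is given in kilobytes; the memory size is just an
example):

```shell
# Rule of thumb: BUFPOOLSIZE should not exceed half of real memory.
REAL_MEM_KB=$((1024 * 1024))            # a 1 GB example machine
MAX_BUFPOOL_KB=$((REAL_MEM_KB / 2))
echo "cap BUFPOOLSIZE at ${MAX_BUFPOOL_KB} KB"
```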

Mark

David Longo wrote:

 No, you don't need to shutdown TSM to change this.
 It can be dynamically changed with the SETOPT command!

 David Longo

  [EMAIL PROTECTED] 06/14/02 01:35AM 
 Thanks for that David,

 To increase the cache-hit percentage you will need to shutdown TSM.

 Backup and edit BUFPOOLSIZE in dsmserv.opt and restart the TSM server.
 It's probably worth going through an unloaddb and reload of the database
 also to improve performance. We're looking at doing this as a quarterly
 procedure.

 BUFPOOLSIZE refers to virtual memory; the default is probably 4096. There
 is a table in the Admin Guide which recommends increases in BUFPOOLSIZE
 according to system memory. I'd recommend being a bit conservative: grow
 it a bit at a time, performing a q options and q db f=d to see what's
 going on with BUFPOOLSIZE in relation to cache-hits. You obviously don't
 want to use up virtual-memory at peak load times.

 Mike.

  -Original Message-
  From: David Longo [SMTP:[EMAIL PROTECTED]]
  Sent: Friday, June 14, 2002 9:15 AM
  To:   [EMAIL PROTECTED]
  Subject:  Re: Bad performance... again
 
  Well, I'll take a few shots.
 
  1.  Network - Have you tried some FTP or similar tests at OS level
  between TSM server and clients to see if Network performance is o.k.?
 
  2.  Your Atape driver is WAY behind.
 
  3.  Drive microcode on 3584 is behind (Not as far as Atape though).
 
  4. There were some noticeable performance problems on AIX (I think
  somewhere between ML08 and 09.  I think they were all fixed at 09).
 
  5.  On TSM server, what is the cache hit percent output of
  q db f=d?  If it is much less than 98%, the cache needs to be
  increased.  This can affect a lot of TSM server ops.
 
  6.  You didn't mention how long this server has been in operation
   - was it working fine at one point and went downhill?  Also what
  Disk you have on TSM server and how setup?
 
  David Longo
 
   [EMAIL PROTECTED] 06/12/02 10:50AM 
  Hi everybody,
 
 
 I know this is a subject that comes up very often, and that various
  answers were already given, but, after searching through the list
  archives, I am still not totally sure of what I should do:

 I have a TSM server whose backups are not very fast. First, I
  thought of a network problem, but it is a dedicated backup network running
  at 1Gb/s and I only get backups at 10GB/hour. Also, the internal server
  operations (reclamation, backup stgpool) are slow. Right now, I am
  looking at the console at a backup stg process which has been running for
  almost 2 hours and has backed up only 38GB. It is a new pool, so there is
  no time spent searching for new files, and all the data came from one
  volume.
 
  My setup is:
 
  TSM Server 4.2.0.0  (Client wants to upgrade to 5.1)
  AIX 4.3.3 ML9 on an F80 with 2 CPUs
  ATAPE 5.4.2.0
 
  Storage is:
 
  IBM3584 with 6 IBM LTO drives. Microcode level is 16E0
 
  The site belongs to a customer who doesn't like very much applying
  patches.
  Should I try to convince him to upgrade TSM/ATAPE/Microcode? Or is there
  another issue?



  Thank you in advance for your attention
 
  Paul van Dongen
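[Editorial aside: a quick back-of-the-envelope check, assuming raw wire speed with no protocol overhead, shows how far 10 GB/hour falls short of what a dedicated 1 Gb/s link can carry, which points the finger away from the network:]

```shell
#!/bin/sh
# Rough wire-speed arithmetic for a 1 Gb/s backup LAN (overhead ignored).
LINK_MBIT=1000                                  # link speed, megabits/second
GB_PER_HOUR=$(( LINK_MBIT / 8 * 3600 / 1000 ))  # MB/s * seconds/hour / 1000 = GB/hour
echo "theoretical wire capacity: ~${GB_PER_HOUR} GB/hour"
echo "observed backup rate:      ~10 GB/hour"
# 10 of ~450 GB/hour is roughly 2% utilisation, so the LAN itself is an
# unlikely bottleneck; look at server disk I/O, tape, and TSM tuning instead.
```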
 
 
 
  MMS health-first.org made the following
   annotations on 06/13/02 21:31:29
  --
  
  This message is for the named person's use only.  It may contain
  confidential, proprietary, or legally privileged information.  No
  confidentiality or privilege is waived or lost by any mistransmission.  If
  you receive this message in error, please immediately delete it and all
  copies of it from your system, destroy any hard copies of it, and notify
  the sender.  You must not, directly or indirectly, use, disclose,
  distribute, print, or copy any part of this message if you are not the
  intended recipient.  Health First reserves the right to monitor all e-mail
  communications through its networks.  Any views or opinions expressed in
  this message are solely those of the individual sender, except (1) where
  the message states such views or opinions are on behalf of a particular
  entity;  and (2) the sender is authorized by the entity to give such views
  or opinions.
 
  ==
  


How Do Exclude SystemObject? Thanks

2002-06-14 Thread Fenstermaker,Bob

We are running  Version 4, Release 2, Level 1.20 of ADSM.

Since we started running Version 4, Release 2, Level 1.20 of ADSM, by
default it is backing up D:\Program Files and D:\Winnt (System Files).

I do not want to back up System Files.

How do I exclude 'System and Boot Files'?
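[Editorial note: one common approach is a client option file change. The sketch below uses option names from memory, so verify them against the Backup-Archive client manual for your exact client level before relying on it:]

```
* dsm.opt sketch (Windows client) -- option support varies by level
* List drives explicitly so the system-object domain is not implied:
DOMAIN C: D:
* Skip directories that can be rebuilt from install media:
EXCLUDE.DIR "D:\Program Files"
EXCLUDE.DIR "D:\Winnt"
```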



Re: Keeping an handle on client systems' large drives

2002-06-14 Thread Prather, Wanda

Mark,

I know about mp3s and we do exclude them; what are :

.nrg, .wmf,  .rm, and .gho?

-Original Message-
From: Mark Stapleton [mailto:[EMAIL PROTECTED]]
Sent: Friday, June 14, 2002 8:24 AM
To: [EMAIL PROTECTED]
Subject: Re: Keeping an handle on client systems' large drives


From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]] On Behalf Of
Dan Foster
 Not every site is lucky enough to be able to convince the beancounters
 the merits of having a backup system that keeps up with the needs of
 the end users, even if it means one has to explain doomsday predictions
 on the business bottom line -- they invariably hear that then say Oh,
 pshaw, you're just exaggerating because you want money It sucks
 to be the one that's right ;) And the ones who warns well before a
 nasty event occurs may also be the first one to be fired out of spite
 after something happens and gets the blame for not having prevented it.

There is only one thing that will convince the beancounters that backup
resources must be kept to adequate levels:

one bad day

Put your objections in email, send that email to those who matter, and
*keep* *a* *copy*. Gently (but regularly) remind the powers-that-be that
your backup resources are inadequate.

In the meantime, aggressively filter what is being backed up. An
increasingly large amount of data is going to files with extensions like
.nrg, .wmf, .mp3, .rm, and .gho (my current unfavorite). Don't back 'em up.
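[Editorial note: as a sketch, such filtering might look like the following client option file fragment; check the wildcard syntax against the Backup-Archive client manual for your platform:]

```
* dsm.opt sketch: exclude common space-hog extensions on all drives
EXCLUDE *:\...\*.mp3
EXCLUDE *:\...\*.nrg
EXCLUDE *:\...\*.rm
EXCLUDE *:\...\*.gho
```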

--
Mark Stapleton ([EMAIL PROTECTED])
Certified TSM consultant
Certified AIX system engineer
MSCE



Re: Keeping an handle on client systems' large drives

2002-06-14 Thread Prather, Wanda

This is always a trade off between what is
practical-possible-available-affordable and the backup coverage you need.

But I would like to put in a word AGAINST the "if they don't save it in the
right place, they don't get it backed up" philosophy.
I'm not criticizing you guys specifically here, this is just MY point of
view on one backup philosophy issue that resurfaces continually here on
the list.

Partial backups + user rules have always been the accepted solution for IT
support, because backing up everything in the environment is hard, and
it's expensive.  So backing up just the network drives or just the x
directory is the accepted tradeoff, and you teach your users: if you don't
put it on the x directory, you won't get it back.

But, really, when that happens, who wins?

If a user spends two days working on a powerpoint presentation, and
accidentally trashes it, and it isn't backed up because they didn't save it
in the right place --who pays, in the long run?  Who are these people, and
why does your company/installation have them working in the first place?

My argument is that if you can afford to lose that person's time, you have
too many people working for you.  Most sites I deal with are trying to run
VERY lean on staff, and especially with engineering and software development
sites, the professionals are VERY EXPENSIVE PEOPLE.

Has anyone in your company ever really figured out what it costs when a
software developer/engineer has to recustomize a workstation with a bunch of
software development tools on it when the hard drive crashes?  Have you ever
tried to rebuild from scratch a workstation that is running multiple
versions of programmer development kits, when you only have backups of the
data files?  Do you know how many hours it takes and how much that person's
time is worth?  What it costs to miss a deadline?

Doesn't productivity matter?  Or are all the staff in your company useless
drudges whose time has no value? (think carefully before answering that one!
:)

HOW DOES IT MAKE ECONOMIC SENSE TO SCRIMP ON BACKUP/RECOVERY SUPPORT, AND
WASTE PEOPLE TIME INSTEAD?

My position is that instead of choosing to educate users to work around our
backup support limitations, we should be EDUCATING MANAGEMENT to actually
LOOK at how important their people time is to the company's welfare.

I do realize that we have come to this state because in too many companies,
IT infrastructure is considered an overhead expense instead of a critical
resource, and IT managers eventually get beaten down in the budget battles
and give up trying to keep up with organizational growth.

But keep repeating this over and over, to EVERYONE in your installation:

EVERY TIME you budget money to buy storage, YOU MUST INCLUDE the cost of
backing it up.  Period.


Thus endeth my soapbox speech for the day.  Time for lunch..
Wanda Prather



Hot Diggety! Seay, Paul was rumored to have written:

 What you have to do is revisit what you are saving and put in exclude.dirs
 for all directories that contain software that can be rebuilt from a common
 desktop image (hard drive replacement).  Have your users save their
 documents in specific folders and back up only those.  Then they just have
 to customize their desktop, configure their node name in the dsm.opt, and
 restore the stuff that is backed up.

 This is the trade-off.

Makes sense. Basic education + cost saving vs expense from a brute force
approach. The trick is to have education that works well for a wide range
of users, with differing expertise, and to also clearly communicate
expectations (if you save anywhere else, you won't get it back!).

Now that sounds like I also have to train them to not just blindly click
whenever an application offers them a default directory (often within app
area) to store documents in.

Perhaps a small data area carved out on the hard drive -- say, a 5 GB
partition for user documents as Z: or whatever, and similarly for other
platforms (/userdocs/user as a symlink from ~user/docs or whatever) -- to
provide a consistent and easy-to-use area for end users, yet a predictable
area for mass-deployed *SM configurations to use.



Re: DB backups

2002-06-14 Thread Mark Stapleton

From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of
Jolliff, Dale
 Is anyone doing db backups to disk using devclass type FILE?

Yes.

 I know, it's not necessarily a good thing, but right now we have a db that
 is large enough that the log can fill before a full db backup can run.

You can run an incremental backup to FILE, which won't take up that much
room.

Actually, the size of the database has nothing to do with the use of the
log. The log is filled with TSM transactions that are waiting to be
committed to the database.

Your best choices for solutions are

1) Increase the size of your log (if possible)
2) Run more than one backup of your database each day (incremental or full)
3) Change your logmode to normal (if it's currently in rollforward mode)
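[Editorial note: for option 2, a minimal sketch of a FILE device class plus an incremental database backup from the admin command line; the directory path and capacity are placeholders to adapt:]

```
def devclass dbfile devtype=file directory=/tsm/dbback maxcapacity=2048M
backup db devclass=dbfile type=incremental
```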

--
Mark Stapleton ([EMAIL PROTECTED])
Certified TSM consultant
Certified AIX system engineer
MSCE



Re: DB backups

2002-06-14 Thread Cook, Dwight E

I don't do it all the time but I've used it for a lot of things...
One was when we were moving an environment from one location to another 15
miles away.
We had two servers but didn't have two libraries (and we had big diskpools
to hold multiple day's backups)
Anyway, I backed the db up to a disk file, ftp'ed it over to the other
server, restored the db over there, brought up the environment, allowed the
clients to start accessing the server, then had IBM move the ATL.
Worked slick as a whistle !

PS: If you are going to back the DB up to disk, I'd make sure the disk was
protected by either RAID-5 or mirroring, and then I'd want another TSM
server to back that file up to (or at least another system to FTP it over
to).

Dwight

-Original Message-
From: Jolliff, Dale [mailto:[EMAIL PROTECTED]]
Sent: Friday, June 14, 2002 9:02 AM
To: [EMAIL PROTECTED]
Subject: DB backups


Is anyone doing db backups to disk using devclass type FILE?

I know, it's not necessarily a good thing, but right now we have a db that
is large enough that the log can fill before a full db backup can run.

Before I set this up, I'm wondering if anyone else is doing this.



Re: Licensing problems on TSM for Windows 5.1.1.0

2002-06-14 Thread Joerg Pohlmann/CanWest/IBM

I have upgraded from 5.1.0.2 to 5.1.1.0. The code is in the maintenance -
not the patches -  directory on the ftp server, eg. for the windows server
it's at
ftp://service.boulder.ibm.com/storage/tivoli-storage-management/maintenance/server/v5r1/WIN/LATEST/


Mark, you are absolutely correct: the mgsyslan.lic and oracle.lic
registrations with number=n work fine. Thanks for the tip. All the
nnwhatever.lic files are still in the product directory; I suspect it's
because I upgraded as opposed to doing a fresh install.

Dave Canan, you can close the PMR.

Joerg Pohlmann
604-535-0452






Mark Stapleton [EMAIL PROTECTED]
Sent by: ADSM: Dist Stor Manager [EMAIL PROTECTED]
2002-06-13 21:05
Please respond to ADSM: Dist Stor Manager


To: [EMAIL PROTECTED]
cc:
Subject:Re: Licensing problems on TSM for Windows 5.1.1.0



From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of
Dave Canan
  I checked with 3 customers today running 5.1.1. The first was
 running ITSM 5.1.1 for Solaris, the second was ITSM for W2K, and the
third
 was ITSM for AIX. All experienced the same behavior. Sounds like a bug
to
 me also. I will call this in to IBM support.

(5.1.0.2 is the latest released patch to TSM server. Are you referring to
5.1.0.1?)

That's interesting. I've done installs, both new and upgrades, for AIX and
Windows, and licensing has not been a problem. Remember that version 5
license files only come in single-license form. In other words, there is a
mgsyslan.lic file, but no 5mgsyslan.lic or 10mgsyslan.lic anymore. If you
want to install, say, 35 LAN-based client licenses, you run

reg lic file=mgsyslan.lic number=35

--
Mark Stapleton ([EMAIL PROTECTED])
Certified TSM consultant
Certified AIX system engineer
MSCE



Re: LTO Offerings (was Library Survey)

2002-06-14 Thread Bill Boyer

I installed a Dell 136T library with 3 HP LTO drives with the fibre
attachment. We had some configuration issues that Dell fixed, but for the
most part it has run well. It took a while to get an updated TSMSCSI driver
that recognized the drives. You need to configure the library as well as
the drives as TSM SCSI driver controlled, not native driver controlled.
They are seeing 1+ GB/min throughput on the drives.

Bill Boyer
DSS, Inc.

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of
Marc D. Taylor
Sent: Thursday, June 13, 2002 12:37 PM
To: [EMAIL PROTECTED]
Subject: LTO Offerings (was Library Survey)


Hello Again,

Before this thread gets away from me, let me say a few things.

1.  From the email postings to this group, it appears that the IBM LTO
drive is what most people use if they are using LTO.  I understand that IBM
makes a fine product, and it would give me warm fuzzies if I had a chance
to choose the IBM LTO drive.  Also from the postings, IBM has had its share
of teething pains in the earlier days of this technology.

2.  I guess I would really like to hear from the people on this list
(whoever you are) who chose a library with the Seagate or the HP LTO drives
and what their experiences have been, good or bad.  If no one on this list
has purchased tape libraries with Seagate or HP LTO drives, then that is
telling also.

Sorry if my previous post was not focused enough.

Marc Taylor



Re: DB backups

2002-06-14 Thread Prather, Wanda

I've done that before many times, it works fine.

I think it's a good solution for the situation you describe - if you have
problems with the log filling up, set up the DBBACKUP trigger to fire an
INCREMENTAL, instead of a full, to disk instead of tape.

Then write your FULL to tape when you have time.
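[Editorial note: a sketch of the trigger Wanda describes, assuming a FILE device class named DBFILE already exists; the log threshold and incremental count are placeholders to tune:]

```
def dbbackuptrigger devclass=dbfile logfullpct=75 numincremental=6
q dbbackuptrigger f=d
```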

-Original Message-
From: Jolliff, Dale [mailto:[EMAIL PROTECTED]]
Sent: Friday, June 14, 2002 10:02 AM
To: [EMAIL PROTECTED]
Subject: DB backups


Is anyone doing db backups to disk using devclass type FILE?

I know, it's not necessarily a good thing, but right now we have a db that
is large enough that the log can fill before a full db backup can run.

Before I set this up, I'm wondering if anyone else is doing this.



Re: Bad performance... again

2002-06-14 Thread Zlatko Krastev/ACIT

-- The site belongs to a customer who doesn't like very much applying
patches.

You can apply *maintenance*, not a patch, by installing 4.2.2.0. You can
point out to the customer that he/she does not stay at AIX 4.3.0 but is
using 4.3.3 to get *improvements*.

-- 4. There were some noticeable performance problems on AIX (I think 
somewhere between ML08 and 09.  I think they were all fixed at 09).

Actually somewhere between ML9 and ML10 :-) There was a nice memory-leak
problem in ML9, plus some others *directly* affecting TSM performance. Thus
you have to upgrade at least:
bos.mp
bos.rte.libc
bos.rte.libpthreads
bos.net.tcp.client
bos.net.tcp.server
ML10 seems good up to now.
Look at the post made by Thomas Rupp on 10.04.2002 in the thread Big
Performance Problem after upgrading from AIX TSM Client 4.1 to 4.2. I
learned this the hard way, but not with TSM :-)

-- To increase the cache-hit percentage you will need to shutdown TSM.

WRONG! Just use 'setopt bufpoolsize new value'. This also *appends* an
option line in dsmserv.opt but does not remove the old one, so you can
clean up a bit afterwards. It is better also to issue 'reset bufpool' to
clear the stats and get a correct DB cache hit %.
If we talk about LOGPOOLSIZE, then yes, you have to change the option and
restart the TSM server.
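[Editorial note: the dynamic route, sketched as an admin command sequence; the value is in KB and purely an example -- size it from the Admin Guide table for your system memory:]

```
setopt bufpoolsize 131072
reset bufpool
q db f=d
```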


Zlatko Krastev
IT Consultant





Please respond to ADSM: Dist Stor Manager [EMAIL PROTECTED]
Sent by:ADSM: Dist Stor Manager [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
cc: 

Subject:Re: Bad performance... again

Thanks for that David,

To increase the cache-hit percentage you will need to shutdown TSM.

Backup and edit BUFPOOLSIZE in dsmserv.opt and restart the TSM server.
It's probably worth going through an unloaddb and reload of the database
also to improve performance. We're looking at doing this as a quarterly
procedure.

BUFPOOLSIZE refers to virtual memory, default is probably 4096. There is a
table in the Admin Guide which recommends increases in BUFPOOLSIZE
according to system memory. I'd recommend being a bit conservative and grow
it a bit at a time, performing a q options and q db f=d to see what's going
on with BUFPOOLSIZE in relation to cache-hits. You obviously don't want to
use up virtual-memory at peak load times.

Mike.

 -Original Message-
 From: David Longo [SMTP:[EMAIL PROTECTED]]
 Sent: Friday, June 14, 2002 9:15 AM
 To:   [EMAIL PROTECTED]
 Subject:  Re: Bad performance... again

 Well, I'll take a few shots.

 1.  Network - Have you tried some FTP or similar tests at OS level
 between TSM server and clients to see if Network performance is o.k.?

 2.  Your Atape driver is WAY behind.

 3.  Drive microcode on 3584 is behind (Not as far as Atape though).

 4. There were some noticeable performance problems on AIX (I think
 somewhere between ML08 and 09.  I think they were all fixed at 09).

 5.  On TSM server, what is the cache hit percent output of
 q db f=d?  If it is much less than 98%, the cache needs to be
 increased.  This can affect a lot of TSM server ops.

 6.  You didn't mention how long this server has been in operation
  - was it working fine at one point and went downhill?  Also what
 Disk you have on TSM server and how setup?

 David Longo

  [EMAIL PROTECTED] 06/12/02 10:50AM 
 Hi everybody,


I know this is a subject that comes up very often, and that various
 answers were already given, but, after searching through the list
 archives, I am still not totally sure of what I should do:

I have a TSM Server that does his backups not quite fast. First, I
 thought of a network problem, but it is a dedicated backup network running
 at 1Gb/s and I only get backups at 10GB/hour. And, the internal server
 operations (reclamation, backup stgpool) are also slow. Right now, I am
 looking at the console a backup stg process which is running for almost 2
 hours and has backed up only 38GB. It is a new pool, so there is no time
 spent searching the new files, and all the data came from one volume.

 My setup is:

 TSM Server 4.2.0.0  (Client wants to upgrade to 5.1)
 AIX 4.3.3 ML9 on an F80 with 2 CPUs
 ATAPE 5.4.2.0

 Storage is:

 IBM3584 with 6 IBM LTO drives. Microcode level is 16E0

 The site belongs to a customer who doesn't like very much applying
 patches.
 Should I try to convince him to upgrade TSM/ATAPE/Microcode? Or is there
 another issue?



 Thank you in advance for your attention

 Paul van Dongen




Re: DB backups

2002-06-14 Thread Jolliff, Dale

I have suggested this a couple of times, but have been told they need the
ability to restore the TSM server up to the minute that rollforward
provides.


-Original Message-
From: William Rosette [mailto:[EMAIL PROTECTED]]
Sent: Friday, June 14, 2002 9:49 AM
To: [EMAIL PROTECTED]
Subject: Re: DB backups


We changed our log mode from rollforward to NORMAL and have seen very few
crashes due to a full recovery log (once in a year).  This may be an option
instead of the type FILE.




From: Jolliff, Dale [EMAIL PROTECTED]
Sent by: ADSM: Dist Stor Manager [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Date: 06/14/02 10:01 AM
Please respond to ADSM: Dist Stor Manager
Subject: DB backups

Is anyone doing db backups to disk using devclass type FILE?

I know, it's not necessarily a good thing, but right now we have a db that
is large enough that the log can fill before a full db backup can run.

Before I set this up, I'm wondering if anyone else is doing this.



Re: Keeping an handle on client systems' large drives

2002-06-14 Thread Joshua S. Bassi

.wmf is the Windows Metafile (graphics) format; the big media files are
more likely .wma/.wmv, the Windows Media audio/video formats.


--
Joshua S. Bassi
Sr. Solutions Architect @ rs-unix.com
IBM Certified - AIX 5L, SAN, Shark
eServer Systems Expert -pSeries HACMP
Tivoli Certified Consultant- ADSM/TSM
Cell (415) 215-0326

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]] On Behalf Of
Prather, Wanda
Sent: Friday, June 14, 2002 8:13 AM
To: [EMAIL PROTECTED]
Subject: Re: Keeping an handle on client systems' large drives

Mark,

I know about mp3s and we do exclude them; what are :

.nrg, .wmf,  .rm, and .gho?

-Original Message-
From: Mark Stapleton [mailto:[EMAIL PROTECTED]]
Sent: Friday, June 14, 2002 8:24 AM
To: [EMAIL PROTECTED]
Subject: Re: Keeping an handle on client systems' large drives


From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]] On Behalf Of
Dan Foster
 Not every site is lucky enough to be able to convince the beancounters
 the merits of having a backup system that keeps up with the needs of
 the end users, even if it means one has to explain doomsday
predictions
 on the business bottom line -- they invariably hear that then say Oh,
 pshaw, you're just exaggerating because you want money It sucks
 to be the one that's right ;) And the ones who warns well before a
 nasty event occurs may also be the first one to be fired out of spite
 after something happens and gets the blame for not having prevented
it.

There is only one thing that will convince the beancounters that backup
resources must be kept to adequate levels:

one bad day

Put your objections in email, send that email to those who matter, and
*keep* *a* *copy*. Gently (but regularly) remind the powers-that-be that
your backup resources are inadequate.

In the meantime, aggressively filter what is being backed up. An
increasingly large amount of data is going to files with extensions like
.nrg, .wmf, .mp3, .rm, and .gho (my current unfavorite). Don't back 'em
up.

--
Mark Stapleton ([EMAIL PROTECTED])
Certified TSM consultant
Certified AIX system engineer
MSCE



Re: Slow Reclamation - And a need for speed...

2002-06-14 Thread Bill Boyer

Make sure that none of the 'primary' pools for the data on the offsite
volumes are DISK storage pools -- especially the target of the DIRMC
mgmtclass, if you then copy that data into the offsite pool with the rest
of the data. It is a 'feature', 'working as designed', of the reclamation
process that if the primary copy of the data is in a DISK random-access
storage pool, then the files are processed one at a time and each file is a
transaction -- none of the MOVEBATCHSIZE stuff applies. Do you maybe have
CACHE=YES for the disk storage pool(s)? Even though the data has been
migrated to the onsite tape pool, the primary copy of the data could still
reside in the disk storage pool. For the whole MOVEBATCHSIZE machinery to
work, the primary copy needs to be in a sequential storage pool.

Also, what are you using to control the offsite volume movement? Offsite
volumes won't go back to SCRATCH until you update their access from OFFSITE.
Until then they stay PENDING.

Another misconception is that setting the RECLAMATION threshold back high
actually stops the currently running reclamation process -- it does not.
For onsite pools, when the current reclamation task ends, no new
reclamation tasks should start back up, especially if you put it back to
100%. For offsite pools, since the reclamation task works through all
volumes that were identified when the process started, it will run to
completion... whenever that is. If you want to actually stop reclamation,
set RECLAIM= back to 100% (or just higher than any volume's reclamation %)
and then cancel the process.

Bill Boyer
DSS, Inc.


-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of
Miles Purdy
Sent: Thursday, June 13, 2002 9:37 AM
To: [EMAIL PROTECTED]
Subject: Re: Slow Reclamation - And a need for speed...


Also make sure that:
1. you are NOT using 'collocation' on your COPYPOOL.
2. check the  'Delay Period for Volume Reuse'
3. run expire inventory
4. Define two admin schedules to control reclamation. One to start, one to
stop. But set the 'stop' to 99%. I'd set the 'start' to at least 75% - ie. 4
to 1.

ex:
tsm: UNXRq stg offsite f=d

   Storage Pool Name: OFFSITE
   Storage Pool Type: Copy
   Device Class Name: 3580
 Estimated Capacity (MB): 92 287 026.6
Pct Util: 2.8
Pct Migr:
 Pct Logical: 98.5
High Mig Pct:
 Low Mig Pct:
 Migration Delay:
  Migration Continue:
 Migration Processes:
   Next Storage Pool:
Reclaim Storage Pool:
  Maximum Size Threshold:
  Access: Read/Write
 Description: Nisa Offsite Copy Storage Pool
   Overflow Location:
   Cache Migrated Files?:
   Collocate: No
   Reclamation Threshold: 99
 Maximum Scratch Volumes Allowed: 500
   Delay Period for Volume Reuse: 1 Day(s)
  Migration in Progress?:
Amount Migrated (MB):
Elapsed Migration Time (seconds):
Reclamation in Progress?: No
 Volume Being Migrated/Reclaimed:
  Last Update by (administrator): PURDYM
   Last Update Date/Time: 06/12/02   14:00:23
Storage Pool Data Format: Native

tsm: UNXRq sched EXPIRE_INVENTORY t=a f=d

 Schedule Name: EXPIRE_INVENTORY
   Description: Expire server objects which are older than
retention period
   Command: expire inventory du=100
  Priority: 2
   Start Date/Time: 03/06/02   10:15:00
  Duration: 5 Minute(s)
Period: 1 Day(s)
   Day of Week: Weekday
Expiration:
   Active?: Yes
Last Update by (administrator): PURDYM
 Last Update Date/Time: 03/08/02   11:06:43
  Managing profile:


tsm: UNXRq sched START_RECLAIM_OFFSITE t=a f=d

 Schedule Name: START_RECLAIM_OFFSITE
   Description: Reclaim tape space that expire inventory has
created
   Command: upd stg offsite reclaim=89
  Priority: 4
   Start Date/Time: 03/06/02   12:00:00
  Duration: 60 Minute(s)
Period: 1 Day(s)
   Day of Week: Weekday
Expiration:
   Active?: Yes
Last Update by (administrator): PURDYM
 Last Update Date/Time: 06/10/02   11:53:34
  Managing profile:


tsm: UNXRq sched RESET_RECLAIM_OFFSITE t=a f=d

 Schedule Name: RESET_RECLAIM_OFFSITE
   Description: Reset reclaim value
   Command: upd stg offsite reclaim=99
  Priority: 3
   Start Date/Time: 01/04/00   14:00:00
  Duration: 5 Minute(s)
  

Mountablenotinlib problem

2002-06-14 Thread Kelly J. Lipp

TSM V4.2.2.4 on Win2K.  I have a bunch of volumes that are empty, in the
library, readwrite, etc.  A q media command shows them as mountable in
library.  However, if I try to delete these volumes I get a "no thanks:
volume is in mountablenotinlibrary state".  A perusal of the past suggests
that a move media volname stg=stgname wherestate=mountablenotinlib will do
it, but that returns "no volumes in that state".
Obviously something is whacked.  Ideas?

Thanks,

Kelly J. Lipp
Storage Solutions Specialists, Inc.
PO Box 51313
Colorado Springs, CO 80949
[EMAIL PROTECTED] or [EMAIL PROTECTED]
www.storsol.com or www.storserver.com
(719)531-5926
Fax: (240)539-7175



creating scripts running outside of TSM - password issue

2002-06-14 Thread Chuck Lam

Hi,

I have TSM 4.1.4 running on AIX 4.3.3.

Whenever I create scripts that run outside of TSM, I
need to hardcode my admin account and its password
within the TSM command to get it to run.  Although
it is not a problem right now, because no one else
has access to this TSM server, it will be a security
issue eventually.  How do you folks get around
this problem?  Are there other ways that I do not
know of to run such scripts without a hardcoded
admin password?

TIA




Re: TSM 4.2/AIX setup questions

2002-06-14 Thread Zlatko Krastev/ACIT

All other questions were answered so I would grab only #5 :) and add one
more
5. Firmware on the IBM SSG site is packaged separately for AIX and Windows.
Simply download the AIX package and follow the readme; no need to introduce
Windoze.

#8: plan carefully your stgpool chain(s).
You may consider some nodes to go to non-collocated pools while others to
use collocated or even filespace-collocated. MOVE NODEDATA is implemented
in v5.1 and is not available in v4.2.x. Thus you can easily merge pools
but cannot separate some node(s) to another pool.

-- If you will be backing up a file larger than the stgpool it may be more
efficient to send it right to tape, this is the general rule.

Actually, if a file is larger than the *available space* in the stgpool, it
goes directly to the next pool. If the next one also does not have enough
available space, the file goes further down the chain. And if the file is
bigger than the whole pool, you will never find enough available space :-)
Look at Wanda's explanation of the maxsize parameter: if a big file hits
your disk pool and fills it near to the top, a later, smaller file that
does not find enough space will go to the next pool instead of the big one.
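[Editorial note: this is why setting maxsize explicitly on the disk pool can help -- oversized files bypass the pool instead of flushing it. A sketch with placeholder pool names and size:]

```
upd stgpool diskpool maxsize=500M nextstgpool=tapepool
q stgpool diskpool f=d
```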

Zlatko Krastev
IT Consultant




Please respond to ADSM: Dist Stor Manager [EMAIL PROTECTED]
Sent by:ADSM: Dist Stor Manager [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
cc:

Subject:TSM 4.2/AIX setup questions

Environment: TSM 4.2.x on an AIX 4.3.3 ML10 server (660-6H1) with a
3584-L32
and a 3584-D32 expansion frame.

In the process of setting it up, which is a luxury that I'll have only
once
to get it right. :) So, some questions (since I am still coming up to
speed
on *SM).

1) Is there any particular reason to set a max file size for a disk
stgpool?
   (Assuming a setup where disk stgpool will migrate to tape stgpool)

2) Should the TSM server have its own stgpool for backing up itself?

3) I've heard mixed things about 358x firmware version 22UD... I think we
   have 18N2 (but not near it right now to confirm), although what I've
   heard about 22UD is generally (but not 100% in agreement) positive.
   Stable?

4) Who is supposed/allowed to upgrade firmware? IBM CE only?

5) The only docs for the firmware upgrade reference an NT/2000 box and the
   NTUtil application, whereas I'm in an all-UNIX (AIX and Solaris, although
   I do have a laptop with Linux and Windows XP if need be) environment, so
   I wonder how to upgrade the firmware without Windows, if that's even
   possible.

6) To *SM, all backups are incrementals (except for the first backup of a
   new client), is my general understanding. Is there a way to force a full
   backup of a particular client as a one-time operation? I'm guessing
   maybe not, but thought I might try asking, anyway. :)

7) The biggest single question... I don't have a real good understanding of
   the purpose of copy stgpools. I've read a lot of documentation --
   hundreds of pages of multiple docs, re-read, read old adsm-l mail,
   Google searches, etc... but still just don't quite 'get it'. I can set
   up HACMP clusters, debug really obscure things, but this eludes me. ;)

   What I want to do is:

   client -> TSM server -> disk stgpool -> (automatically migrate to tape
   based on space utilization of disk stgpool) -> tape stgpool

   That's the general concept of what I want to achieve. Is a copy stgpool
   really needed, to be attached to either one of the primary stgpools?

   I was under the impression that a copy stgpool was something you wanted
   when you wanted to copy a primary stgpool so that you could send it to
   another stgpool when ready (based on whatever trigger...space, date),
   such as in a disaster recovery scenario?

-Dan Foster
IP Systems Engineering (IPSE)
Global Crossing Telecommunications



Re: creating scripts running outside of TSM - password issue

2002-06-14 Thread Hunley, Ike

I created a TSM ID with operations authority called TSMRPT, password TSMRPT.
It works for me...
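For reference, setting up such a limited-authority ID is a couple of dsmadmc commands (a sketch using Ike's example id/password; pick a real password and the narrowest privilege class that covers your reports):

```
register admin tsmrpt tsmrpt
grant authority tsmrpt classes=operator
```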

-Original Message-
From: Chuck Lam [mailto:[EMAIL PROTECTED]]
Sent: Friday, June 14, 2002 2:44 PM
To: [EMAIL PROTECTED]
Subject: creating scripts running outside of TSM - password issue


Hi,

I have TSM 4.1.4 running on AIX 4.3.3.

Whenever I create scripts that run outside of TSM, I
need to hardcode my admin account and its password
within the TSM command to get it to run.  Although
it is not a problem right now, because no one else
has access to this TSM server at this point, it will
be a security issue eventually.  How do you folks get
around this problem?  Are there any other ways that I
do not know of to get it to run without my hardcoded
admin account's password?

TIA

__
Do You Yahoo!?
Yahoo! - Official partner of 2002 FIFA World Cup
http://fifaworldcup.yahoo.com



Blue Cross Blue Shield of Florida, Inc., and its subsidiary and
affiliate companies are not responsible for errors or omissions in this e-mail 
message. Any personal comments made in this e-mail do not reflect the views of Blue 
Cross Blue Shield of Florida, Inc.



db backups to disk files...

2002-06-14 Thread Cook, Dwight E

Just noticed,  if you back up your db to a disk file, at least on AIX 4.3.3
with TSM server 4.2.2.0 the reported file name and the actual file name are
different ! ! !

tsm: TSMUTL01> q volhist t=dbb

   Date/Time: 06/14/2002 11:46:20
 Volume Type: BACKUPFULL
   Backup Series: 8
Backup Operation: 0
  Volume Seq: 1
Device Class: FLATFILE
 Volume Name: /usr/adsm/24073180.DBB
 Volume Location:
 Command:


tsm: TSMUTL01>

root@tsmutl01/usr/adsm > ls -l
total 165016
-rw---   1 root system   84424780 Jun 14 11:46 24073180.dbb

The difference is in the case: DBB vs. dbb.
I've checked, and del volhist t=dbb tod=blah will still properly clear
the old files from disk, but why oh why have they done this to us!?!

just checked as reported by the actual process

 Process  Process Description  Status
  Number
 -------  -------------------  ---------------------------------------------
       2  Database Backup      Full backup: 0 pages of 20571 backed up.
                               Current output volume: /usr/adsm/24077489.DBB.

same thing, upper in one place and lower in the other
root@tsmutl01/usr/adsm > ls -l
total 329928
-rw---   1 root system   84424780 Jun 14 11:46 24073180.dbb
-rw---   1 root system   84432479 Jun 14 12:58 24077489.dbb

just FYI...
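If a cleanup or verification script needs to match the volhist name against what is actually on disk, folding both names to one case sidesteps the mismatch. A minimal sketch (the path is from the example above; the helper name is made up):

```shell
#!/bin/sh
# TSM volhist reports the extension as .DBB while the file on disk is .dbb;
# lower-case both names before comparing them.
lower() { printf '%s\n' "$1" | tr 'A-Z' 'a-z'; }

volhist_name="/usr/adsm/24073180.DBB"   # as shown by 'q volhist t=dbb'
disk_name="/usr/adsm/24073180.dbb"      # as created on the filesystem

if [ "$(lower "$volhist_name")" = "$(lower "$disk_name")" ]; then
    echo "same volume"                  # prints: same volume
fi
```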


Dwight E. Cook
Software Application Engineer III
Science Applications International Corporation
509 S. Boston Ave.  Suite 220
Tulsa, Oklahoma 74103-4606
Office (918) 732-7109



Re: Keeping an handle on client systems' large drives

2002-06-14 Thread Coats, Jack

Wanda,

I agree, but backing up everything just because it is there is a very
expensive proposition.

In most companies, IT in one form or another is 'charged back' to a
client department, but to keep down spending on IT, the decision is made,
either by IT or by client department management, not to back up
everything.

TSM is the only backup solution I have run across that does not re-backup
things that have not changed, and this is why I support it in preference
to other backup solutions whenever possible.

As I have told clients I consult for, and both management and internal
clients I have worked for: we can back up everything, you just have to
pay for it.  And when the price tag comes out, they don't like the answer.

The compromise is not normally between IT and the end clients; it is
between the client and the budget.  IT just gets stuck in the middle.

Other options are:
1. Use a central terminal server and thin-client desktops that do not
have or need disks on the desktop. (Sun has/had a caching file system that
worked great, Wyse sells Winterm terminals that use a central terminal
server, and there is also the Linux Terminal Server Project to do this
with open source software.  It is possible, it works, but lots of folks
don't like it for their own reasons.)

2. I have had some clients whose answer to backups is not to back up.
They put everything on RAID5 systems or mirrors, automate the checking on
these systems, and don't worry about it.  (I do not condone this behavior;
I have just observed it.)

3. Use an operating system that keeps data backed up.  I only know of
one, and it is not considered an option by most companies.  Check out the
ATT/Lucent/Bell Labs OS called Plan9 (http://cm.bell-labs.com/plan9dist/).
It has an interesting method where, when you change a file, it is
effectively backed up.  And to retrieve it, just do a change directory to
last Tuesday at 3PM if you want to.  Interesting in concept, but probably
not practical for most of our institutions.  BTW, they migrated, like HSM,
off to optical media.  I guess we could emulate this if we had a big HSM
system that was used instead of large disk farms.  But Plan9 was DESIGNED
with backup as part of its architecture, from what I can tell.  It was not
retrofitted like every other OS I have seen (including NT, IBM's VM, *NIX
(in all its flavors), MVS, MVT, IBM's mainframe DOS, CRONOS, and others).


A Backup Story:

Once upon a time I worked for a large oil company, supporting their
exploration department as a unix desktop admin.  We purchased some 9G disk
drives (huge in their day) for a few high-dollar exploration
geophysicists.  We also installed a tape drive on each of these people's
desktops, with instructions on how to use it, and who to call if there was
a problem, or if they needed tapes, or handholding, etc.

The drives were pretty reliable.  But we still suggested users put a tape
in at the end of the day, and we (as the admins) would provide a script to
run on their Sun workstations to tar their files off to tape.

As is normal, users ignore their admins (as we ignore our doctors' advice
about eating and drinking sensibly), and a disk died.  We replaced the
disk, then asked the user for his backup tapes, as we were glad to restore
the data for him.  The most recent backup was 6 months old.  This
$100K/year user almost got fired over this, as the disk contained ALL of
his previous 6 months of effort.

We did get back about 60% of his data, with a $20K recovery fee from a
data recovery company.

WHY this story?  He saved the company millions of dollars in data.  And it
scared the rest of our user community into asking 'how do I back up my
data?' or 'do you have a tape I could use to back up my data?' etc.  It
was just an expensive way to get there.

We did do central backups of all computer-room-based data, databases, etc.
But not the desktops.  Why?  There was no maintenance window we could get
agreement on to back up the desktops, and even if we had, there was
insufficient bandwidth to back them up centrally.

Another story of 'it could be done' -- but we were told it was not worth
the money.

... Time to go change a tape ... Jack

-Original Message-
From: Prather, Wanda [mailto:[EMAIL PROTECTED]]
Sent: Friday, June 14, 2002 10:11 AM
To: [EMAIL PROTECTED]
Subject: Re: Keeping an handle on client systems' large drives


This is always a trade off between what is
practical-possible-available-affordable and the backup coverage you need.

But I would like to put in a word AGAINST the "if they don't save it in
the right place, they don't get it backed up" philosophy.
I'm not criticizing you guys specifically here; this is just MY point of
view on one backup philosophy issue that resurfaces continually here on
the list.

Partial backups + user rules has always been the accepted solution for IT
support, because backing up everything in the environment is 

Re: TDP Domino

2002-06-14 Thread Del Hoobler

Geoff,

They should not be related events at all.
I think you may need to look a little closer at
the Domino server log to find the culprit.

For TDP for Domino...
If the databases are logged databases, they will not be
backed up by TDP for Domino incremental unless the
database instance ID has changed. Things like
compacts and/or other maintenance may change the
database instance ID.
If the databases are not logged, they will not be
backed up by a TDP for Domino incremental unless
the metadata date or the data date changes.
(These dates are stored inside the .NSF file itself and
accessed via Domino API calls.)

From the info you provided, the same number of databases
are being examined each time you run the
INCREMENTAL command... and so the EXCLUDE statements
in the baclient DSM.OPT are not having an effect.

If you really don't trust what's going on, you will
need to examine the Domino server logs before and
after the change in behavior.

Thanks,

Del



Del Hoobler
IBM Corporation
[EMAIL PROTECTED]

- Live your life as an exclamation, not an explanation.
- Strive for excellence, not perfection.



Re: creating scripts running outside of TSM - password issue

2002-06-14 Thread Kauffman, Tom

I've been using a perl module originally put together by Owen Crow back in
the TSM 2.x days. It works with a command-line userid/password, or it can
get an ID and password from a .dsmrc file in the user's home directory. Take
a look at http://www.io.com/~ocrow/ADSM/Adsm.html -- there may be a few
minor tweaks needed, I think Owen got away from ADSM before it became TSM.

Tom Kauffman
NIBCO, Inc

 -Original Message-
 From: Chuck Lam [mailto:[EMAIL PROTECTED]]
 Sent: Friday, June 14, 2002 1:44 PM
 To: [EMAIL PROTECTED]
 Subject: creating scripts running outside of TSM - password issue


 Hi,

 I have TSM 4.1.4 running on AIX 4.3.3.

 Whenever I created scripts running outside of TSM, I
 needed to hardcode my admin account and its password
 in within the TSM command to get it to run.  Although
 it is not a problem, because no one else has access to
 this TSM server at this point.  It will be a security
 issue eventually.  How do you folks getting around
 this problem?  Are there any other ways that I do not
 know of to get it to run without my hardcoded admin
 account's password?

 TIA





Re: Keeping an handle on client systems' large drives

2002-06-14 Thread Tyree, David

*.gho files are disk image files produced by the Ghost program from Symantec.
I think *.nrg files are images from a CD burning program, something like an
*.iso file.
*.rm is an audio/video (RealMedia) file from RealNetworks.

-Original Message-
From: Prather, Wanda [mailto:[EMAIL PROTECTED]]
Sent: Friday, June 14, 2002 11:13 AM
To: [EMAIL PROTECTED]
Subject: Re: Keeping an handle on client systems' large drives

Mark,

I know about mp3s and we do exclude them; what are :

.nrg, .wmf,  .rm, and .gho?

-Original Message-
From: Mark Stapleton [mailto:[EMAIL PROTECTED]]
Sent: Friday, June 14, 2002 8:24 AM
To: [EMAIL PROTECTED]
Subject: Re: Keeping an handle on client systems' large drives


From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]] On Behalf Of
Dan Foster
 Not every site is lucky enough to be able to convince the beancounters of
 the merits of having a backup system that keeps up with the needs of
 the end users, even if it means one has to explain doomsday predictions
 for the business bottom line -- they invariably hear that and then say
 "Oh, pshaw, you're just exaggerating because you want money."  It sucks
 to be the one that's right ;) And the one who warns well before a
 nasty event occurs may also be the first one to be fired out of spite
 after something happens, and gets the blame for not having prevented it.

There is only one thing that will convince the beancounters that backup
resources must be kept to adequate levels:

one bad day

Put your objections in email, send that email to those who matter, and
*keep* *a* *copy*. Gently (but regularly) remind the powers-that-be that
your backup resources are inadequate.

In the meantime, aggressively filter what is being backed up. An
increasingly large amount of data is going to files with extensions like
.nrg, .wmf, .mp3, .rm, and .gho (my current unfavorite). Don't back 'em up.
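For a Windows baclient, the filtering Mark suggests would look something like the following dsm.opt fragment (illustrative only; `...` matches any number of directory levels in TSM include-exclude syntax, and the extension list is just the ones named above):

```
* dsm.opt -- keep space-eating media files out of the backups
exclude *:\...\*.mp3
exclude *:\...\*.nrg
exclude *:\...\*.wmf
exclude *:\...\*.rm
exclude *:\...\*.gho
```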

--
Mark Stapleton ([EMAIL PROTECTED])
Certified TSM consultant
Certified AIX system engineer
MSCE



Re: good readings

2002-06-14 Thread Jon Adams

I would recommend Getting Started with Tivoli Storage Manager:
Implementation Guide. This is an IBM Redbook, id# SG24-5416-01, I believe.
I still use it frequently for reference on a variety of levels.  Good luck.

Jon Adams
Systems Engineer
Premera Blue Cross
Mountlake Terrace, WA
425-670-5770

-Original Message-
From: Zlatko Krastev/ACIT [mailto:[EMAIL PROTECTED]]
Sent: Friday, June 14, 2002 2:14 PM
To: [EMAIL PROTECTED]
Subject: Re: good readings


Rick,

if you are only new to the list, that is OK. But if you are also new to
TSM, I would highly recommend starting with SG24-4877 Tivoli Storage
Management Concepts. Read all the chapters relevant to what you have:
chapters 1-8 are a must, 9-13 and 30 optional, plus any related to your
TDP (if any, chapters 15-24).
After becoming familiar with the terms and functionality of TSM, have a
look at SG24-6277 Tivoli Storage Manager v4.2: Technical Guide and (as
Joel suggested) SG24-5416 Getting Started with TSM: Implementation Guide.
It also might be worth the effort to look at Richard Sims's ADSM/TSM facts
page (http://people.bu.edu/rbs/ADSM.QuickFacts).

Zlatko Krastev
IT Consultant




Please respond to [EMAIL PROTECTED]
Sent by:ADSM: Dist Stor Manager [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
cc:

Subject:good readings

Hey all,
I am a bit new to the list and am in a position to implement TSM 4.2 and
would like to get some good reading material so that I can better
understand TSM.  Would anyone have some good recommendations?  I have a
couple of the redbooks and have been going through them, but if anyone has
some further recommendations (websites, books, etc.) I would appreciate
them.

R.



Re: creating scripts running outside of TSM - password issue AN ANSWER

2002-06-14 Thread Seay, Paul

The way I do it is to create a script with rwx------ permissions.  This way
only root and I can execute or read it.  This is the Windows example:

@echo off
set key=%1
set parmin=%~f2
set rc=99
pushd \program files\tivoli\tsm\baclient\
dsmadmc -id=userid -password=password -displaymode=table %1 %parmin%
set rc=%errorlevel%
popd
echo Return Code from dsmadmc %rc%
exit %rc%

This is the UNIX ksh example:

#!/usr/bin/ksh
key=$1
parmin=$2
rtc=99
dsmadmc -id=userid -password=password -displaymode=table $key $parmin
rtc=$?
echo Return Code from dsmadmc $rtc
exit $rtc

I also have a template version and a perl script that will randomly generate
a new password and issue a change password for itself and update the script
on a regular basis.  The userid is a special userid not the one that I use
on a daily basis.

This is the template:

#!/usr/bin/ksh
# This is the TSM Perl Macros Interface Script
key=$1
parmin=$2
rtc=99
dsmadmc -id=controlm -password=$$temppass -displaymode=table $key $parmin
rtc=$?
echo Return Code from dsmadmc $rtc
exit $rtc

This is the perl script to change the password:

#!/usr/bin/perl
#
# Random Password Generator and Change Facility for TSM Control-M Userid
#
# The purpose of this script is to allow the automation of password changes
# to a dsmadmc batch invocation script and the TSM Server.  The process
# uses a template file exactly like the current file to build the temporary
# file.  A random password is generated with the NGNN format.
#
# As the template is copied to the temporary file the string $$temppass
# is changed to the new 8 character password.
#
# Once everything is staged, an update of the TSM server administrator
# password is issued and the files are cascade renamed.  The current
# production file is renamed to a .old file and the temporary
# file is renamed to be the new production file.
#
# The file can be any type of ascii text file.  However, execution rights
# are not set by this script and must be done externally in the production
# job that executes this script.
#
# Invocation:  tsmadminpw.pl [input template file]
#[current production file]
#[userid of TSM administrator]
#
# Input Arguments:
#
#  [input template file]
#   This is a template file used to build the new production
#   file.  Typically, it is an identical copy of the current
#   production file except for a specification of $$temppass
#   where password substitutions are to be made.
#
#  [current production file]
#   This is the current production file to be replaced by the
#   updated template file.  The previous version of this file
#   is renamed to .old.  The current production file must
#   exist and must be a script file to be executed to issue
#   the UPDATE ADMIN command.  Typically, this is the
#   dsmadmc.bat script.
#
#  [userid TSM administrator]
#   This is the userid of the TSM administrator in the current
#   production file.  It is used to issue the UPDATE ADMIN
#   command.
#
# Fetch the arguments into a list
#
@argin = @ARGV;
$numargs = scalar(@argin);
if ($numargs != 3)
   {print ("Input File, Output File, and Userid are Required\n");
exit 99;
}
else
   {$infile = @argin[0];
$outfile = @argin[1];
$userid = @argin[2];
print ("Template: ", $infile, "\n");
print ("Output:   ", $outfile, "\n");
}
if (!-e $infile)
   {print ("Template does not exist.\n");
exit 99;
}
if (!-e $outfile)
   {print ("Output File does not exist.\n");
exit 99;
}
#
# Setup the pattern arrays
#
@lista = ('B'..'D','F'..'H','J'..'N','P'..'T','V'..'Z');
#
# Build an all consonants 8 character password
#
$x=0;
do {$pw[$x] = @lista[int(rand (21))];
} until $x++ == 7;
#
# Read the template script and write the run script
#
#  1)  Make sure the template script can be read and updated
#  2)  Make sure the output script can be openned in/out
#  3)  Execute the current script with a password update
#  4)  Write the new updated template to the output area
#
# Open the template file
#
if (!open (infile, '<'.$infile))
   {print ("Template could not be opened");
exit 99;
}
#
# Open the temporary output file
#
if (!open (outfile, '>'.$outfile.'.tmp'))
   {print ("Temporary output file could not be opened: ", $outfile, ".tmp");
close infile;
exit 99;
}
#
# Copy the records of the Template to the temporary output file
# Change the $$temppass to the new password
#
while (<infile>)
   {$infile_rec = $_;
$outfile_rec = $infile_rec;
$pws = join('',@pw[0..7]);
$outfile_rec =~ s/\$\$temppass/$pws/;
print outfile ($outfile_rec);
}
close infile;
close outfile;
#
# Build an UPDATE ADMIN command to change the password
#
$command = $outfile.'