Re: by-hand reclamation question...

2005-02-24 Thread William F. Colwell
The old query content command can provide information which could give the
answer, if properly massaged.  I can get the information, but I am weak on the
massaging.

This macro will generate another macro to issue query content commands with
count=1 and count=-1 parameters, which say to report on the first and last
file on the tape.

set sqldisplaymode w
commit
select cast('q con '||volume_name||' count=1' as char(80)) as cmd -
 from volumes -
  where stgpool_name='TP1_COPY2' and status in ('FULL', 'FILLING') > qcon2
select cast('q con '||volume_name||' count=-1' as char(80)) as cmd -
 from volumes -
  where stgpool_name='TP1_COPY2' and status in ('FULL', 'FILLING') >> qcon2

Then edit qcon2 to remove the headers and run it as a macro, redirecting to a
third file -

tsm: LIBRARY_MANAGER> macro qcon2 > qcon3
Output of command redirected to file 'QCON3'

Massage qcon3 and sort it to find duplicate names, which indicate a tape chain.
The massaging needed is to put the volume name on each output line and to
unwrap long filenames.
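
A rough, untested sketch of that massaging in shell (the admin id/password are
placeholders, and the sort key depends on where the file name lands in your
q content output):

dsmadmc -id=ADMIN -password=XXX \
  "select volume_name from volumes where stgpool_name='TP1_COPY2' \
   and status in ('FULL','FILLING')" |
grep -E '^[A-Z0-9]+ *$' |            # crude filter: keep only the volser lines
while read vol; do
  for n in 1 -1; do                  # first and last file on each tape
    dsmadmc -id=ADMIN -password=XXX "q content $vol count=$n" |
      sed "s/^/$vol /"               # put the volume name on every output line
  done
done | sort -k3 > qcon3.sorted       # duplicate file names mark a tape chain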

Hope this helps,

Bill Colwell

At 03:28 PM 2/23/2005, you wrote:
On Wed, 23 Feb 2005 [EMAIL PROTECTED] wrote:

 In order to reclaim a given volume, you really need three different ones: The
 target volume, and the volumes adjacent to the target: The one with the
 other half of the aggregate which comes first on the target volume, and the
 one with the other half of the aggregate that comes last.

 I don't think I've found a way to determine which volumes these are
 without actually running the expiration or move data and failing the mount.
 Is there a good way to find this information out?

If I understand your scenario correctly, then I think I've run into this.

My not-so-elegant solution was to start an 'audit volume' on the tape that
I intend to run a 'move data' against.  If it's the only tape needed, then
the audit will start up okay and can be cancelled; if there's at least one
other tape needed, audit will complain that a required volume is offsite,
specify the needed volume, then terminate.  Unfortunately, audit only
specifies *one* required offsite volume, so you have to do another audit
after the needed volume is marked onsite in order to see if a 2nd offsite
volume is needed.

I thought about trying to script this to the point where you could at
least know all of the volumes needed before ever inserting any tapes; that
way you could make only one 'offsite trip'.  I think that would involve
marking a volume onsite before inserting it, attempting the audit,
capturing the needed volume name if the audit fails, cancelling the audit
if it doesn't (and maybe dealing with a failed mount?), etc.  I never
actually spent any time coding/testing this, though.
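
For what it's worth, an untested sketch of that probing loop, scraping the
ANR2456E message a failing audit prints (admin id/password and the volume
name are placeholders):

vol=AB1234                           # tape you want to MOVE DATA from
dsmadmc -id=ADMIN -password=XXX "update volume $vol access=readwrite"
while :; do
  out=`dsmadmc -id=ADMIN -password=XXX "audit volume $vol fix=no"`
  need=`echo "$out" | grep ANR2456E | sed 's/.*associated volume \([^ ]*\).*/\1/'`
  [ -z "$need" ] && break            # audit actually started: cancel it, done
  echo "also need offsite volume: $need"
  dsmadmc -id=ADMIN -password=XXX "update volume $need access=readwrite"
done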

This is admittedly very ugly; I tried getting IBM to give me some SQL that
could *quickly* determine the needed volumes, but was told it's not
possible.  I'm not sure why...presumably it has to do with the "it's not
really a relational db, and the SQL interface is only a convenience good
for some things" business; all I know is that audit knows *immediately*
which other volume is needed.

Fortunately for me at least, I no longer have a need to do this sort of
thing.

Regards,
Bill

Bill Kelly
Auburn University

--
Bill Colwell
C. S. Draper Lab
Cambridge Ma.


Re: Understanding Management class values

2005-02-03 Thread William F. Colwell
Zoltan,

Retonly=nolimit will keep the last version of a deleted file forever.
If the filenames in the directories are static, then it doesn't matter; but if
files are being deleted, retonly=0 will give the fastest expiration from the
server.  I suggest a little grace period, say retonly=7.
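
For the 14-day Oracle case quoted below, the whole backup copy group might
look like this (a sketch only - the domain, policy set, and class names are
hypothetical):

update copygroup ORADOM STANDARD ORACLE14 standard type=backup -
  verexists=14 retextra=nolimit verdeleted=1 retonly=7
validate policyset ORADOM STANDARD
activate policyset ORADOM STANDARD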

Hope this helps,

Bill Colwell

At 01:08 PM 2/2/2005, you wrote:
We have been analyzing one of our node's backups, to trim it down.

It was decided that certain directories (image backups of the Oracle
instance) only need to retain 14 days' worth of daily backups...nothing
more.  The Oracle backups are taken daily, reusing the same files/names.

The following MC values have been suggested.  Would this do the trick ?

verexists     14
retextra      Nolimit
verdeleted    1
retonly       Nolimit

--
Bill Colwell
C. S. Draper Lab
Cambridge Ma.


Re: Need to save permanent copy of all files currently being stored

2004-12-01 Thread William F. Colwell
ITSM has announced a special version of tsm to meet regulatory needs.
I forget the exact details, I think there is an extra charge.

A query license output includes these 2 lines -

  Is Tivoli Storage Manager for Data Retention in use ?: No
Is Tivoli Storage Manager for Data Retention licensed ?: No

Hope this helps,

Bill Colwell

At 11:50 AM 12/1/2004, you wrote:
On Dec 1, 2004, at 11:13 AM, Prather, Wanda wrote:

Mark's right, there is no GOOD way to do this.
You can't create an archive with data that is already on TSM tapes,
except as a backupset, which can be a pain to track and can't be
browsed. ...

I'll also add that the lack of a simple way to preserve data, as when
a sudden regulatory or emergency requirement arises, is an, um,
opportunity for a product feature (rather than a shortcoming :-).

A further complication in the poster's need is the ability to capture
Inactive files, which a Backupset can't do.

A partial, but perhaps satisfactory, measure might be to temporarily
create a preservation file system (name appropriately), perform a
-latest restoral of the subject directory into it, and perform a TSM
Archive on that, whereafter the file system can be disposed of.  This
will at least capture all of the object names which are and have been
in that directory, in their most recent versions.
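
A rough sketch of that sequence with the command-line client (paths and the
management class are hypothetical):

mkdir /preserve
dsmc restore -subdir=yes -latest "/subject/dir/*" /preserve/
dsmc archive -subdir=yes -archmc=LONGKEEP \
  -description="preservation snapshot 2004-12-01" "/preserve/*"
# /preserve can then be disposed of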

  Richard Sims http://people.bu.edu/rbs/ADSM.QuickFacts

  Think different.   -- Apple

--
Bill Colwell
C. S. Draper Lab
Cambridge Ma.


Re: Need to save permanent copy of all files currently being stored

2004-12-01 Thread William F. Colwell
Hi Richard,

Yes, you can suspend deletion - a bullet on the page says so.
But you can't add this feature onto an existing server, so it won't
help the original poster.  See the redbook at
http://www-1.ibm.com/support/docview.wss?uid=rdb2sg24709800
for an architectural overview.  It only takes data from the api and
needs a content manager type product to front-end it.

- bill




At 01:49 PM 12/1/2004, you wrote:
On Dec 1, 2004, at 12:18 PM, William F. Colwell wrote:

ITSM has announced a special version of tsm to meet regulatory needs.
I forget the exact details, I think there is an extra charge.

A query license output includes these 2 lines -

  Is Tivoli Storage Manager for Data Retention in use ?: No
Is Tivoli Storage Manager for Data Retention licensed ?: No

Hi, Bill - The web page is
 http://www.ibm.com/software/tivoli/products/storage-mgr-data-reten/
My reading of it is that it principally just guarantees that the data
won't be deleteable from TSM storage until it expires per retention
policies - but I don't perceive any capability for freezing data, per
se.
At expiration time, the data goes away, as usual.

  Richard Sims

--
Bill Colwell
C. S. Draper Lab
Cambridge Ma.


Re: Oracle TDP and RMAN

2004-11-22 Thread William F. Colwell
Charles,

In addition to all the other good answers to this post, you can add this to
your argument with the DBAs.  TSM is NOT a tape management system in the
classic sense of that type of product.  Classic tape management systems return
a tape to scratch after some policy-driven number of days.  TSM only returns a
tape to scratch when it is logically empty.

Hope this helps,

Bill Colwell

At 05:02 PM 11/19/2004, you wrote:
In working with our Oracle DBAs, they feel TSM should manage the RMAN Oracle
retentions etc...   Is there a preferred method?  If so, why?


Here's one of our DBAs' response when I told them my understanding is that
RMAN usually manages the backup retentions etc.

Note on page 85 of the RMAN Backup and Recovery Handbook it says:

If you are using a tape management system, it may have its own retention
policy.  If the tape management system's retention policy is in conflict with
the backup retention policy you have defined in RMAN, the tape management
system's retention policy will take precedence and your ability to recover a
backup will be in jeopardy.

That is why I was leaving it up to the Tivoli retention policy to be used 
instead of RMAN retention policies. This seems to be in conflict with the 
comment about TSM having a dumb repository.


Thanks for your thoughts...

Regards,

Charles

--
Bill Colwell
C. S. Draper Lab
Cambridge Ma.


Re: TSM and SATA Disk Pools

2004-11-12 Thread William F. Colwell
Charles,

See http://www.redbooks.ibm.com/abstracts/tips0458.html?Open

This was in a recent IBM redbooks newsletter.  It discusses SATA performance
and to me it says that the tsm backup diskpool is not a good use for SATA.
Sequential volumes on SATA may be ok.

Hope this helps,

Bill
At 10:21 AM 11/12/2004, you wrote:
Been asking lots of questions lately.  ;-)


We recently have put our TSM disk backup pools on CLARiiON SATA.
The TSM server is being presented with 600GB SATA chunks; our AIX admin has
put a raw logical volume over two 600GB chunks to create a 1.2TB raw logical
volume.

Right now we are seeing tape migrations at about 4GB in 6 hours, where before
on EMC Symmetrix disk we saw 29-40GB per hour.  If anyone would like to share
their TSM SATA diskpool layout and/or tips, we would much appreciate it!!!


TSM Env
AIX 5.2
TSM 5.2.4 (64bit)
p630 4x4
8x3592 FC Drives

Regards,

Charles

--
Bill Colwell
C. S. Draper Lab
Cambridge Ma.


Re: strange ANR1405W - I have 22 scratch tapes and MAXSCRATCH=500

2004-11-04 Thread William F. Colwell
Hi Pavel,

I just went thru this too.  The support center pointed me in the right
direction; here is their response.

This is xxx x with IBM TSM Support.  I am responding to you regarding PMR
n.xxx.  I saw what you were referring to w/the number of scratch tapes that
show up in the q libvol output.  I understand that this error appears to be
related to the change back to standard time, but research of the ANR1405W
error in the actlog took me to the following tech doc -

DCF Document ID: 1188134 - IBM Tivoli Storage Manager: After upgrade to TSM
Server 5.2.3.0 the message ANR1405W appears

Problem Desc:   When using an LTO2 library and a device class configured with
format=ULTRIUM2C, after an upgrade to 5.2.3.0 the following error message
appears: ANR1405W Scratch volume mount request denied - no scratch volume
available - even if there are scratch volumes available.

Solution: Use format=drives in the device class and the error message does not
appear again.


Please update your server to reflect this change and try it to verify the fix.

If you have servers that are clients of the library-managing server, do the
update quickly in all servers or you will get hundreds of messages complaining
about a mismatch.
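
Concretely, the fix is a one-line device class change (a sketch; LTOCLASS1 is
the device class named in Pavel's BACKUP DB log below):

update devclass LTOCLASS1 format=drives

Run the same update on the library manager and on every library client
server, per the note above.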

Hope this helps,

Bill Colwell

At 05:26 AM 11/4/2004, you wrote:
Hi all,

I have updated TSM Server 5.2.2 to 5.2.3.2 on Windows 2000 Server SP4, with an
IBM ULTRIUM ULT3582 library with 2 LTO 3580 drives.  The connection between
the TSM server and the library is over fiber.
My problem seems insoluble to me..!?

I've labeled and checked in my new Ultrium2 tapes into my new library one by
one from the entry/exit (bulk) port:

label libvolume lb2.1.0.3 checkin=scratch search=bulk overwrite=no labelsource=barcode

after that I have 22 scratch tapes in my library:

tsm: HEB-BKR-TSM-01_SERVER1> q libv f=d

Library Name  Volume Name  Status   Owner  Last Use  Home Element  Device Type  Cleanings Left  Media Type
------------  -----------  -------  -----  --------  ------------  -----------  --------------  ----------
LB2.1.0.3 FBZ016L1 Scratch 4,107 LTO 387
LB2.1.0.3 FBZ017L1 Scratch 4,106 LTO 387
LB2.1.0.3 FBZ018L1 Scratch 4,103 LTO 387
LB2.1.0.3 FBZ019L1 Scratch 4,104 LTO 387
LB2.1.0.3 FBZ020L1 Scratch 4,105 LTO 387
LB2.1.0.3 FBZ021L1 Scratch 4,096 LTO 387
LB2.1.0.3 FBZ022L1 Scratch 4,099 LTO 387
LB2.1.0.3 FBZ023L1 Scratch 4,098 LTO 387
LB2.1.0.3 FBZ024L1 Scratch 4,097 LTO 387
LB2.1.0.3 FBZ025L1 Scratch 4,100 LTO 387
LB2.1.0.3 FBZ026L1 Scratch 4,101 LTO 387
LB2.1.0.3 FBZ027L1 Scratch 4,102 LTO 387
LB2.1.0.3 FBZ028L1 Scratch 4,108 LTO 387
LB2.1.0.3 FBZ029L1 Scratch 4,112 LTO 387
LB2.1.0.3 FBZ030L1 Scratch 4,113 LTO 387
LB2.1.0.3 FBZ031L1 Scratch 4,114 LTO 387
LB2.1.0.3 FBZ032L1 Scratch 4,111 LTO 387
LB2.1.0.3 FBZ033L1 Scratch 4,110 LTO 387
LB2.1.0.3 FBZ034L1 Scratch 4,115 LTO 387
LB2.1.0.3 FBZ035L1 Scratch 4,116 LTO 387
LB2.1.0.3 FBZ036L1 Scratch 4,117 LTO 387


my LTOPOOL1 has MAXSCRATCH (maximum scratch volumes allowed) set to 500

The big problem occurred when I tried to make a DB backup, or when I tried to
migrate my primary storage pool to LTOPOOL ?!?
No scratch ?!  But why ?!?:

11/01/2004 15:39:32 ANR2017I Administrator ADMIN issued command: BACKUP DB
devclass=ltoclass1 type=full (SESSION: 1)
11/01/2004 15:39:32 ANR0984I Process 10 for DATABASE BACKUP started in the
BACKGROUND at 15:39:32. (SESSION: 1, PROCESS: 10)
11/01/2004 15:39:32 ANR2280I Full database backup started as process 10.
(SESSION: 1, PROCESS: 10)
11/01/2004 15:39:33 ANR1405W Scratch volume mount request denied - no scratch
volume available. (SESSION: 1, PROCESS: 10)
11/01/2004 15:39:33 ANR4578E Database backup/restore terminated - required
volume was not mounted. (SESSION: 1, PROCESS: 10)
11/01/2004 15:39:33 ANR0985I Process 10 for DATABASE BACKUP running in the
BACKGROUND completed with completion state FAILURE at
15:39:33. (SESSION: 1, PROCESS: 10)



thank you in advance for your help
best regards
pavel



--
Bill Colwell
C. S. Draper Lab
Cambridge Ma.


Re: Hey Big Blue....Why not MOVE DATA across copy pools?!?

2004-06-10 Thread William F. Colwell
At 07:56 AM 6/10/2004, you wrote:
I already performed the move data from the primary pool to another primary
pool.  I guess TSM can't figure out that the primary copies are in a
different primary storage pool than where they were originally backed up
from.

A   DELETE FILES node filespace STG=stgpool  sure would be nice...or a
DELETE NODEDATA STG=stgpool.   (sigh)

No!  It would not be nice - unless you like shooting yourself in the foot.
Anything that lets you alter a backup destroys the integrity of the backup.
Think of your auditors - if they understood how good tsm is, this command
would destroy all their confidence.

If you have gone to the trouble of collocating the primary pool with move
nodedata commands, you can collocate the offsite pool by defining a new
copypool and backing up to it.  After the new pool is all offsite, delete the
volumes in the old copypool with 'discarddata=yes'.
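
A sketch of that sequence (pool, device class, and volser names are
hypothetical):

define stgpool TP1_COPY3 LTOCLASS1 pooltype=copy maxscratch=500
backup stgpool TAPE_PRIMARY TP1_COPY3 maxprocess=2
/* once the new pool's volumes are offsite, for each old copypool volume: */
delete volume AB1234 discarddata=yes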

I really hope this helps,

Bill Colwell




Dave Nicholson
Whirlpool Corporation





Stapleton, Mark [EMAIL PROTECTED]
Sent by: ADSM: Dist Stor Manager [EMAIL PROTECTED]
06/09/2004 10:44 PM
Please respond to ADSM: Dist Stor Manager


To: [EMAIL PROTECTED]
cc:
Subject: Re: Hey Big Blue....Why not MOVE DATA across copy pools?!?


From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On
Behalf Of David Nicholson
I tried a MOVE DATA for offsite data...it did not go after the
data in the primary pool...it wanted to mount the volume in the copy
pool...which was offsite.
Am I missing something?

Something is wrong. MOVE DATA will only call for offsite volumes if the
required files are not available in the primary pool. Do you have
volumes in unavailable status?

--
Mark Stapleton

--
Bill Colwell
C. S. Draper Lab
Cambridge Ma.


Re: [Fwd: v5.2.2.4 patch on Solaris]

2004-04-14 Thread William F. Colwell
Gretchen,

Hello!

I just tried installing using the -R option and it worked fine.

I install tsm to the default directory and run the library manager from it.
Then I install to other directories and run server in those; I currently have 4
alternate installations.

The command I used is -

pkgadd -a /opt/tsmmisc/adminfile -R /opt/tsminst/1a -d $IN

where $IN is the directory with the packages in it.  I get prompted
to enter a base directory.

Hope this helps,

Bill Colwell
617 258 1550



At 01:30 PM 4/14/2004, you wrote:
To shed more light on this situation, it looks like it depends on
whether or not you do a pkgadd with the '-R' option (which we do).
The license check is only looking for the license file (status.dat)
in the BASEDIR (default is /opt) directory. Can't change either
the checkinstall file or the pkginfo file because the file sizes
and file checksums are checked as well. Changing the installation
from /var/local/opt to /opt isn't an option.

I would then expect this patch to install successfully for
default installation paths (/opt) and fail for anything else.
So my question has changed somewhat, has anyone been able to
install the Solaris v5.2.2.4 server patch in a directory other
than the default?

 Original Message 
Subject: v5.2.2.4 patch on Solaris
Date: Wed, 14 Apr 2004 13:12:45 -0400
From: Gretchen L. Thiele [EMAIL PROTECTED]
To: ADSM: Dist Stor Manager [EMAIL PROTECTED]

Has anybody successfully installed this patch yet (yup, I realize that
it just came out last night)? I was upgrading from v5.2.2.3 to v5.2.2.4,
when it gave me the "license hasn't been accepted yet" message. It
further instructs to remove the partial install, install v5.2.2.0
(which I did ages ago), accept the license and then apply the patch.
No go, even going back to v5.2.2.0 and then forward to v5.2.2.4.
Bad package assembly? The README says that it should install over a
v5.2.2.x base where the v5.2.2.0 license has been accepted. By the
way, this is Solaris 9.

Gretchen Thiele
Princeton University

--
Bill Colwell
C. S. Draper Lab
Cambridge Ma.


Re: [Fwd: v5.2.2.4 patch on Solaris]

2004-04-14 Thread William F. Colwell
Gretchen,

Here is the adminfile, which is not the original but an altered copy  -

/opt/tsmmisc> more adminfile
#ident  @(#)default1.4 92/12/23 SMI   /* SVr4.0  1.5.2.1  */
mail=
instance=overwrite
partial=ask
runlevel=ask
idepend=ask
rdepend=ask
space=ask
setuid=ask
conflict=nocheck
action=ask
basedir=ask

In my 'a' instance, where I just installed 5224, for status.dat I have -

/opt/tsm1a> find . -name status.dat
./tivoli/tsm/server/license/status.dat
/opt/tsm1a> more tivoli/tsm/server/license/status.dat
#Wed Feb 25 12:01:35 EST 2004
Status=9
/opt/tsm1a>

In the default instance, I have -

/opt/tivoli> find . -name status.dat
./tsm/server/license/status.dat
/opt/tivoli> more tsm/server/license/status.dat
#Thu Mar 04 10:26:21 EST 2004
Status=9
/opt/tivoli>

My guess (duh) is that Status=9 is the key thing.


- bill


At 05:45 PM 4/14/2004, you wrote:
Hi Bill,

Where is your status.dat file? I suspect that since you used the
default install first, it is in /opt/tivoli/tsm/server/license - since
that
is where the preinstall file looks for it. I use pretty much the
same command, although without the -a option. What do you
have in your admin file?

Regards,
Gretchen

On Apr 14, 2004, at 4:58 PM, William F. Colwell wrote:

I install tsm to the default directory and run the library manager
from it.
Then I install to other directories and run server in those; I
currently have 4
alternate installations.

The command I used is -

pkgadd -a /opt/tsmmisc/adminfile -R /opt/tsminst/1a -d $IN

--
Bill Colwell
C. S. Draper Lab
Cambridge Ma.


Re: TSM license audit not equal number of nodes

2004-03-10 Thread William F. Colwell
David,

IBM still does charge by node if you say the node is not a server.

It is up to us, the customers, to characterize the machines to be backed up
and then buy the proper number and kind of licenses.

If you say a machine is a single-user desktop, even if it has 2 processors and
has more raw power than some of your servers, you need just 1 desktop license,
which is about a fifth of the price of a server license.

The TSM pricing model is now function based.  The license to back up a server
is more expensive because the value to you is greater than the value of an
end-user desktop backup.

Hope this helps,

Bill Colwell


At 08:04 AM 3/10/2004, you wrote:
Nancy,

As far as I can tell it no longer matters what the TSM software thinks
you have as far as number of nodes; just tell it you have enough to make
it shut up.

The reason it no longer matters is that IBM no longer charges you by
node but by processor, and the TSM software auditing component does not
know about processors.  TSM software auditing is a relic of a simpler
day.

David

 [EMAIL PROTECTED] 3/8/2004 5:30:52 PM 
We are in the process of ordering a TSM upgrade (from 4.1 to 5.1 or
5.2)
and are having pricing issues. I ran a license audit to see what we
are
using now and it showed 108 Managed System for LAN licenses in use
(of
200), but we only have 78 nodes registered. I do not understand the
discrepancy. Can anyone explain it to me?


Nancy Reeves
Technical Support, Wichita State University
[EMAIL PROTECTED]  316-978-3860

--
Bill Colwell
C. S. Draper Lab
Cambridge Ma.


Re: TSM license audit not equal number of nodes

2004-03-10 Thread William F. Colwell
David, a little of my history with tsm is in order.

I have been with it since v1 r1.  Back then there wasn't any audit license command or
any other tool to tell me what the usage was.  I asked the salesman how to figure it
out and he had no clue either.

I developed my own reports using db2 (tsm didn't have sql until v3) to get a count
of nodes in use and to license accordingly.

Consequently, I have never relied on the audit license table; I set the
mansyslan count high enough to keep it from whining at me.  I have always
relied on outside info to order licenses.

Basically I agree with you; the audit license command does not give meaningful
information, especially since it can't tell which of the server's clients are
themselves servers or how many processors are in each.

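If the goal is just to stop the whining, something like this is all it takes
(a sketch - mgsyslan.lic is the 5.x managed-system-for-LAN license file, and
the number should come from your outside info, not from tsm):

register license file=mgsyslan.lic number=200
audit licenses
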
- bill

At 12:25 PM 3/10/2004, you wrote:
And none of what you writes makes the TSM software audit of licenses
meaningful.  You have to know OUTSIDE of what TSM can tell who which
nodes are desktops and which are servers and how many processors are in
the server machines.  Then tell TSM uopir are licensed for enough nodes
to make it shut up because there is no longer a meaninful mapping
between what the software considers a license and what IBM marketing
considers a license.

david

 [EMAIL PROTECTED] 3/10/2004 10:23:01 AM 
David,

IBM still does charge by node if you say the node is not a server.

It is up to us, the customers, to characterize the machines to be
backed up
and then buy the proper number and kind of licenses.

If you say a machine is a single-user desktop, even if it has 2
processors and has more raw power than some of your servers, you need
just 1 desktop license, which is about a fifth of the price of a server
license.

The TSM pricing model is now function based.  The license to back up a
server is more expensive because the value to you is greater than the
value of an end-user desktop backup.

Hope this helps,

Bill Colwell


At 08:04 AM 3/10/2004, you wrote:
Nancy,

As far as I can tell it no longer matters what the TSM software
thinks
you have as far as number of nodes; just tell it you have enough to
make
it shut up.

The reason it no longer matters is that IBM no longer charges you by
node but by processor, and the TSM software auditing component does not
know about processors.  TSM software auditing is a relic of a simpler
day.

David

 [EMAIL PROTECTED] 3/8/2004 5:30:52 PM 
We are in the process of ordering a TSM upgrade (from 4.1 to 5.1 or
5.2)
and are having pricing issues. I ran a license audit to see what we
are
using now and it showed 108 Managed System for LAN licenses in use
(of
200), but we only have 78 nodes registered. I do not understand the
discrepancy. Can anyone explain it to me?


Nancy Reeves
Technical Support, Wichita State University
[EMAIL PROTECTED]  316-978-3860

--
Bill Colwell
C. S. Draper Lab
Cambridge Ma.

--
Bill Colwell
C. S. Draper Lab
Cambridge Ma.


Re: TSM 5.2.2.0 and nodes

2004-03-03 Thread William F. Colwell
Robert,

did you change the tsm server name during the upgrade?  This will cause
password prompting because the password is stored in the windows registry
in a key with the name of the server.

In regedit, go to hkey_local_machine\software\ibm and drill down to
see what I mean.

Bill

At 08:17 AM 3/3/2004, you wrote:
Hi to all

This week I upgraded my TSM server version from 5.1.8.0 to 5.2.2.0 and my OS
from Windows NT4 to Windows 2003.  Everything went OK, but...

Each node I connected to the server asks for the password (despite
passwordaccess generate in my dsm.opt files), just the first time.

Of course, before the upgrade there was no such behavior...

Next week I will upgrade my second TSM server from 5.1.8 to 5.2.2.0 on my AIX
machine, version 4.3.3.0; there I have more than 100 client nodes, and I must
figure out how to bypass this problem.

Any ideas?

Regards, Robert Ouzen
E-mail:  [EMAIL PROTECTED]

--
Bill Colwell
C. S. Draper Lab
Cambridge Ma.


Getting to 5221 requires installing 5220 first.

2004-02-25 Thread William F. Colwell
Hello,

I have a sun server at 5202.  I tried to install 5221, but the install script
said I had to accept the license in 5220.  So it appears that, unlike most
other upgrades where I can jump from almost any older version to the latest
version, you can't do this for 5221 and later.  You must go to 5220 first.  I
think this has something to do with the new data retention variant of tsm.

When I stopped complaining and did what the 5221 install said, that is,
installed 5220, I could then remove 5220 and install 5221.

I can't see anything in the install readme about this.  Here is the 5221
readme; it says I can install it over 5200.

Tivoli Storage Manager/Sun Server
Licensed Materials - Property of IBM
(C) Copyright IBM Corporation 1990, 2002. All rights reserved.

Service Level
This is a pkgadd installable PATCH level that will install as
level 5.2.2.1 for packages:
 TIVsmS (base server package)
 TIVsmSdev (TSM device drivers)
The tar file TSMSRV5221_SUN.tar.Z contains:
 TSMSRV5221_SUN.README.INSTALL   This file
 TSMSRV5221_SUN.README   The Server README
 TSMSRV5221_SUN.README.DEV   The TSM Device Driver README
 TIVsmS/ Server package
 TIVsmSdev/  TSM device drivers
|--------------------------------- ATTENTION --------------------------------|
| This is a service LEVEL that can be installed over the following versions: |
| 5.2.0.0+.                                                                  |
|----------------------------------------------------------------------------|



--
Bill Colwell
C. S. Draper Lab
Cambridge Ma.


Re: No status updates for long-running processes?

2004-02-20 Thread William F. Colwell
Allen,

This is a bug, the report title is --

IC39306: DELETE FILESPACE COMMAND DOES NOT PROPERLY SHOW HOW MANY BACKED UP OBJECTS 
ARE BEING DELETED.


- bill

At 02:45 PM 2/19/2004, you wrote:
I'm experimenting with TSM 5.2.2.1 on AIX 5.2; I find that for many
long-running processes, the Q PROC output fails to reflect progress.  For
example, I'm deleting a filespace at the moment, and it's staying at 0 files
deleted, and (if it goes like the others I've done) will contiune to display
that right up until it is finished.

Common problem?  Some new setting I've failed to activate?

- Allen S. Rout

--
Bill Colwell
C. S. Draper Lab
Cambridge Ma.


Re: TSM Operational Report Question

2004-02-20 Thread William F. Colwell
Bruce,

The TSM OR (operational reporter) automatically provides a 24-hour time range
in the SQL it generates and sends to the server, so you will automatically get
a report for the 24 hours prior to the start time.

To experiment, define your report, right-click on it, and select 'refresh with
the scheduled time'.

Hope this helps,

Bill Colwell

At 11:18 AM 2/20/2004, you wrote:
Trying to setup a new report.  What I can't figure out how to do is specify
in the query to use the time I have requested.  I have it set to time start
5:30AM Hours covered 24.


Thanks,
-
Bruce Kamp
Senior Midrange Systems Analyst
Memorial Healthcare System
E-Mail: [EMAIL PROTECTED] mailto:[EMAIL PROTECTED]
Phone: (954) 987-2020 x4597
Pager: (954) 286-9441
Alphapage: [EMAIL PROTECTED]
Fax: (954) 985-1404
-

--
Bill Colwell
C. S. Draper Lab
Cambridge Ma.


Re: Migration of TSM server from OS390 or Z-OS to a UNIX or Windows platform

2004-02-12 Thread William F. Colwell
Hi Ryan,

I am at the tail end of a conversion from OS/390 to solaris.  I don't have
a document to share with you.  One nice thing that helped a lot is to do an
export server filedata=none to a disk file on OS/390, then FTP it in binary
mode to the UNIX box.  On the UNIX box, define a FILE device class to see the
directory holding the FTPed file, then do an import server.  You will get all
the node definitions, domains, schedules, and copygroups.  This was a big help
for me.

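A sketch of the command sequence (the device class, directory, and volume
names are hypothetical):

On OS/390 -
  export server filedata=none devclass=EXPFILE volumenames=EXPVOL1

after the binary FTP, on the UNIX box -
  define devclass EXPFILE devtype=file directory=/tsm/export maxcapacity=2g
  import server devclass=EXPFILE volumenames=EXPVOL1
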
After figuring out how to define a library and tape drives, the conversion process is
a tedious matter of getting each client to connect to the new server and to do a new 
backup.

Hope this helps,

Bill Colwell

At 11:35 AM 2/12/2004, you wrote:
I know I have seen other messages about this before, so I hope I can find some info.  
We are looking into migrating our TSM servers off of Z-OS and I would like to know if 
any one else has done this and if they have useful tips and maybe even a high level 
document of the process?

Thanks!

Ryan Miller
Tivoli Certified Consultant
Tivoli Storage Manager V4.1

[EMAIL PROTECTED]

IT Systems Associate-Lead
Principal Financial
515-235-5665
measure twice..cut once!

   http://www.principal.com/   http://www.deskflag.com/




--
Bill Colwell
C. S. Draper Lab
Cambridge Ma.


Re: Migration of TSM server from OS390 or Z-OS to a UNIX or Windows platform

2004-02-12 Thread William F. Colwell
We didn't move any backups; we did fresh backups.  But I will move archives
using the server-to-server export feature in 5.??.

The old server had just 20T of data.

Our tapes, 9840's, were not managed by ca1 or any other tape mgmt software.  I
defined tapes explicitly to tsm.  We are a small organization, which is why
the mainframe is on the way out, not only for tsm but for everything.

You may be able to reuse the media, but you can't transport the backup data
from server to server.  You will need to do fresh backups or exports and
imports.

The library on mvs is an stk silo; on solaris it is an stk l700e managed by
tsm.

Bill


At 04:08 PM 2/12/2004, you wrote:
Were your libraries and tape drives managed by CA1 on the mainframe?  If so,
could the same volumes be recognized by the new media manager on UNIX?  My
fear is having to convert or move 270 TB of data already on tape.  Or did you
not move any data from the OS/390 side and just start with fresh backups?

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] Behalf Of
William F. Colwell
Sent: Thursday, February 12, 2004 2:42 PM
To: [EMAIL PROTECTED]
Subject: Re: Migration of TSM server from OS390 or Z-OS to a UNIX or
Windows platform


Hi Ryan,

I am at the tail end of a conversion from OS/390 to solaris.  I don't have
a document to share with you.  One nice thing that helped a lot is to do an
export server filed=none to a disk file on OS/390, FTP it using bin to the UNIX box.
On the UNIX box define a devicetype to see the directory of the FTP file, then do an
import server.  You will get all the node definitions, domains, schedules, copygroups.
This was a big help for me.

After figuring out how to define a library and tape drives, the conversion process is
a tedious matter of getting each client to connect to the new server and to do a new 
backup.

Hope this helps,

Bill Colwell

At 11:35 AM 2/12/2004, you wrote:
I know I have seen other messages about this before, so I hope I can find some info. 
 We are looking into migrating our TSM servers off of Z-OS and I would like to know 
if any one else has done this and if they have useful tips and maybe even a high 
level document of the process?

Thanks!

Ryan Miller
Tivoli Certified Consultant
Tivoli Storage Manager V4.1

[EMAIL PROTECTED]

IT Systems Associate-Lead
Principal Financial
515-235-5665
measure twice..cut once!

   http://www.principal.com/   http://www.deskflag.com/




--
Bill Colwell
C. S. Draper Lab
Cambridge Ma.

--
Bill Colwell
C. S. Draper Lab
Cambridge Ma.


Re: Migration of TSM server from OS390 or Z-OS to a UNIX or Windows platform

2004-02-12 Thread William F. Colwell
Ryan,

IBM does have incremental server-to-server export at the 5202 server level.
It came in below that, but I am not sure where.  Of course you can only use it
after you have exported the current 270T, which is a long time from now.

Just because you do backups to a new server, you don't have to lose access to
the backups/archives on the old server.  Each client can make stanzas or
shortcuts that override the default serveraddress and/or port number.

The TDP products may be more difficult to do than plain UNIX or windows
clients.

After the retextra and retonly times have passed, you can delete the backups
from the old server.  You then need to export the archives.

Good luck,

Bill

At 04:48 PM 2/12/2004, you wrote:
Bill,
Thanks for the info.  It looks like we may have an interesting time ahead of
us.  If ex/import is the only way to go, I'm not sure if even the incremental
ex/import feature, if IBM ever comes out with it, will help us.  Leaving the
data behind is not feasible; we need to have 24-hour access to it at all
times.  We currently have 3 3494 ATLs with a combined total of around 14,000
media 3 and 4 tapes.  That would all be converted to the UNIX platform; it
would be nice if the new media manager could see the tapes and recognize them
as the same.  I may have to see if IBM could look into making that work

Thanks!

Ryan

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] Behalf Of
William F. Colwell
Sent: Thursday, February 12, 2004 3:32 PM
To: [EMAIL PROTECTED]
Subject: Re: Migration of TSM server from OS390 or Z-OS to a UNIX or
Windows platform


We didn't move any backups, we did fresh backups
but I will move archives using the server to server export
feature in 5.??.

The old server had just 20T of data.

Our tapes, 9840's, were not managed by ca1 or any other tape mgmt software.  I 
defined tapes
explicitly to tsm.  We are a small organization, which is why the mainframe is on the 
way out
not only for tsm but for everything.

You may be able to reuse the media, but you can't transport the backup data from 
server to
server.  You will need to do fresh backups or exports and imports.

The library on mvs is an stk silo, on solaris it is and stk l700e managed by tsm.

Bill


At 04:08 PM 2/12/2004, you wrote:
Were your libraries and tape drives managed by CA1 on the mainframe, if so, could 
the same volumes be recognized by the new media manager on UNIX.  My fear is having 
to convert or move 270 TB of data already on tape.  Or did you not move any data 
from the OS/390 side and just start with fresh backups?

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] Behalf Of
William F. Colwell
Sent: Thursday, February 12, 2004 2:42 PM
To: [EMAIL PROTECTED]
Subject: Re: Migration of TSM server from OS390 or Z-OS to a UNIX or
Windows platform


Hi Ryan,

I am at the tail end of a conversion from OS/390 to solaris.  I don't have
a document to share with you.  One nice thing that helped a lot is to do an
export server filed=none to a disk file on OS/390, FTP it using bin to the UNIX 
box.
On the UNIX box define a devicetype to see the directory of the FTP file, then do an
import server.  You will get all the node definitions, domains, schedules, 
copygroups.
This was a big help for me.

After figuring out how to define a library and tape drives, the conversion process is
a tedious matter of getting each client to connect to the new server and to do a new 
backup.

Hope this helps,

Bill Colwell

At 11:35 AM 2/12/2004, you wrote:
I know I have seen other messages about this before, so I hope I can find some 
info.  We are looking into migrating our TSM servers off of Z-OS and I would like 
to know if any one else has done this and if they have useful tips and maybe even a 
high level document of the process?

Thanks!

Ryan Miller
Tivoli Certified Consultant
Tivoli Storage Manager V4.1

[EMAIL PROTECTED]

IT Systems Associate-Lead
Principal Financial
515-235-5665
measure twice..cut once!

   http://www.principal.com/   http://www.deskflag.com/




--
Bill Colwell
C. S. Draper Lab
Cambridge Ma.

--
Bill Colwell
C. S. Draper Lab
Cambridge Ma.

--
Bill Colwell
C. S. Draper Lab
Cambridge Ma.


Re: OUTLOOK 2003, .PST open, will it backup while open?, TSM 5.2?

2004-02-11 Thread William F. Colwell
Matt,

Can you tell me the Microsoft patch number?  I searched the ms website
and can't find it.

Thanks,

Bill Colwell


At 01:11 PM 2/11/2004, you wrote:
Hello all,
We had previously been using an MS-supplied patch that would close the
MS OUTLOOK .pst file after 2 minutes of inactivity.  That allowed us to
usually get a valid backup of a PC's .pst file during the day.  Well, we
just upgraded to OUTLOOK 2003 and it appears it no longer works.  We now
get the improper message ANS4007E saying that the permissions are wrong
for the file or directory.  I am currently running TSM 5.1.8.1 on the
server and 5.1.5.2 on the clients.  In 2 days the clients will have
5.2.0.6 on them.  I remember reading that with TSM 5.2 there is an
improvement in backing up OPEN files.
   Does anyone know if the TSM 5.2 (server/client) combination is able
to successfully back up open .pst files from OUTLOOK 2003?

Thanks in Advance
Matt

--
Bill Colwell
C. S. Draper Lab
Cambridge Ma.


Re: ANU2508E wrong write state

2004-01-12 Thread William F. Colwell
Joe - I have seen the same thing and reported it to IBM,
but so far they have just given me a list of things to check.
I encourage you to call it in too, if they see another account having
the same problem, it might prompt them to work harder on it.

Everything on the server and client checks out fine, and since
we run ~30 instances of oracle on the same machine, and the same
tdpo backs up 29 just fine, well it is a mystery.

The one odd thing about the one instance that gets these errors is
that it only happens on a level 2 backup; level 0 backups (full) run
fine.

Hope this helps,

Bill Colwell
At 12:23 PM 1/10/2004, you wrote:
Need help...

I'm getting this message prior to the TDP for Oracle backup failure.  It
transmits appx 6G over 3 channels, but then generates the following errors in
the tdpoerror.log.  I'm not that concerned with the ANU2602E or the ANS4994S,
but I am concerned with the ANU2508E.  Also, there is very little
documentation on the error.
We (tsm admins) did not change any configuration on our side, and the DBAs
state they made no changes on their side.  There is nothing in the actlog to
indicate there is a problem.  My initial question is, what is in the wrong
write state?  And my next question is, how do I resolve it?  Before I call
support, does anyone have any insight?

OS platform SunOS 5.8
Oracle 8.1.7
TDPv2.2.1
TSM Server V5.2.0.0 running on zOS

01/10/04   09:26:59 ANU2508E Wrong write state
01/10/04   09:28:30 ANU2602E The object /adsmorc//arch.BFRMDM.9720.515063712 was not 
found on the TSM Server
01/10/04   09:28:30 ANS4994S TDP Oracle SUN ANU0599 ANU2602E The object 
/adsmorc//arch.BFRMDM.9720.515063712 was not fou
nd on the TSM Server
01/10/04   09:28:31 ANU2602E The object /adsmorc//arch.BFRMDM.9719.515063712 was not 
found on the TSM Server
01/10/04   09:28:31 ANS4994S TDP Oracle SUN ANU0599 ANU2602E The object 
/adsmorc//arch.BFRMDM.9719.515063712 was not fou
nd on the TSM Server

Regards, Joe

--
Bill Colwell
C. S. Draper Lab
Cambridge Ma.


Re: OS/390 server memory dumps

2004-01-02 Thread William F. Colwell
Tom,

the command you are looking for is 'slip'.  You can code the command,
described in painful detail in the MVS System Commands manual, and enter it on
the console, or you can add it to sys1.parmlib(ieaslp00) and issue the 'set
slip=00' command.

I see that the requester is OAM, not adsm even though adsm took the dump.
Maybe there is something in OAM to fix.

Sorry, I can't code the slip command for you; it is usually an iterative
process to get it right.  But as a sample, here is my slip command to make tsm
take a dump when it abends - the reverse of what you want -

SLIP SET,ERRTYP=ABEND,JOBNAME=ADSM3SRV,ID=TSM1,A=SVCD,END

Hope this helps,

Bill Colwell


At 11:45 AM 1/2/2004, you wrote:
We have a TSM 5.1.7.2 server running under OS/390 2.9 at the moment,
and expected to be running under OS/390 2.10 within the next few days.
We occasionally have a process or session cancelled while waiting for
a tape mount. This triggers a completely useless memory dump from TSM.
The OS/390 console message describing the dump is as follows:

IEA794I SVC DUMP HAS CAPTURED: 463
DUMPID=002 REQUESTED BY JOB (ADSM)
DUMP TITLE=COMPON=OAM LIBRARY AUTO COMM SERVICE,COMPID=5695-DF1
   80,ISSUER=CBRLLACS

Is there any way to suppress these dumps, either by changing TSM
settings or by using OS/390 PARMLIB members or user exits?

--
Bill Colwell
C. S. Draper Lab
Cambridge Ma.


Re: Remove disk pool volume, it won't die!!

2003-10-29 Thread William F. Colwell
Mike -

I had this happen many years and releases ago when I was moving the
diskpool from one device to another.

I got rid of the problem volumes by doing a restore volume on them;
of course this assumes you are using copypools.

Do 'restore v volume_name preview=yes' and get the tapes back from offsite.
Mark the tapes onsite and do the restore without preview=yes.  When the restore
is done tsm will delete the volume.  At least that is how it worked for me.
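
A sketch of the sequence (the copypool volser is hypothetical):

restore volume /tsmdata4/data1 preview=yes
/* bring back the copypool tapes the preview lists, then for each: */
update volume AB1234 access=readwrite
restore volume /tsmdata4/data1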

Bill Colwell

At 03:29 PM 10/22/2003, you wrote:
I run TSM 4.2 on Solaris 8.  I was in the process of converting all of the
diskpool volumes from OS-mounted files to raw partitions, to fix some
performance issues we have been experiencing with backups, when I hit a little
problem.  I disabled sessions and allowed migration to run to completion.  All
volumes in the diskpool were listed at 0% util.  I was able to delete all
volumes except 1; it claimed there was still data in it.  It still showed 0%
for the volume doing a q vol, but when I ran an audit on the volume, it said
there was no data in it.  I restarted the server; it still claimed there was
data in there.  I tried del vol /tsmdata1/data1 discardd=yes - no effect; I
guess that option does not work for devices of type DISK?
Now comes the point where I did something I probably shouldn't have.  I
stopped TSM again, deleted the file at the OS level, unmounted the partition,
restarted TSM, and defined this last volume using the raw partition (others I
had already done).  This all worked fine, but I still have this dangling
volume that it of course can't mount, so it's offline.  I'm still getting
ANS8001I Return code 13 when I try to delete the volume.  Log says:

10/22/03   20:26:51  ANR2406E DELETE VOLUME: Volume /tsmdata4/data1 still
  contains data.

Any suggestions on how I can nuke this volume once and for all?  I realize I
might have lost a few files if this volume did in fact contain some data, but
I am not that concerned about that right now.  Thanks!

Michael French
Savvis Communications
IDS01 Santa Clara, CA
(408)450-7812 -- desk
(408)239-9913 -- mobile


--
Bill Colwell
C. S. Draper Lab
Cambridge Ma.


Re: SQL to determine what offsite volumes are needed for move data

2003-10-28 Thread William F. Colwell
Hi Bill -

I am way behind on the list, but I think I can help you.

First there isn't any sql I know of that will do what you want.

But, the old query content command will give you the answer, you just need
to post process the output.

The steps are

1.  make a macro file of 'q con' commands for all the tapes you are interested in;
You can use sql to do this. For example -

q content   02  count=-1 f=d
q content   02  count=1 f=d
q content   06  count=-1 f=d
q content   06  count=1 f=d

2. Next run dsmadmc and capture the output, for example

dsmadmc -id=your id -password=your pw  macro C:\tsmmacs\qcon.mac qcon.lst

3. process qcon.lst and sort it by filename (a rough sketch follows).  You
should be able to see all the start and end tape sequences.
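
For step 3, something along these lines may do (untested; dsmadmc echoes each
macro command, so the volser can be scraped from the echoed 'q content' line
and prefixed onto the detail rows that follow it - exact field positions vary
by server level):

awk '/^q content/ {vol=$3; next}      # remember the volser from the echoed command
     /\// {print vol, $0}' qcon.lst | # tag rows containing a file path
  sort -k2 > qcon.sorted
# a file name that appears twice in qcon.sorted marks a start/end tape pair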

Hope this helps,

Bill Colwell
At 09:54 AM 10/16/2003, you wrote:
Hi,

We're running TSM server 5.1.6.2 on z/OS.

One of the (many) resources we're short on is tape drives.  Consequently,
I'm always looking to streamline processes that use these drives.  Here's
the problem I'm currently looking at:

Our tape copy pools are kept offsite.  Reclamation for these pools is
handled by leaving the TSM reclamation threshold at 100% so that he never
does reclaims on his own.  On a regular basis, we run a job that queries
the server for copy pool tapes with a reclamation threshold greater than
'n' percent.  This list of tapes is used by the operators to go and fetch
the tapes to be brought onsite.  They then run a job that updates those
tapes to access=readwrite, and issues a 'move data' command for each tape.

Now the problem.  Some of these 'move data' processes treat the volume
that is being emptied as 'offsite', even though the volume has been loaded
into the library and its access updated to readwrite.  I'm pretty sure the
reason for this is that the volumes in question have files at the
beginning and/or end that span to other copy pool volumes which are
themselves still offsite.

If the volume being emptied is treated as 'onsite', then the move data
runs pretty quickly - the copy pool volume is mounted for input, the data
is copied, and the volume goes pending.  However, if the volume being
emptied is treated as 'offsite', TSM will perform the move data by
mounting for input *all* of the primary pool tapes that have data on this
copy pool tape.  Since our primary tape pools are collocated, while our
copy pools are not, this results in dozens of tape mounts for primary pool
volumes to use as input.  The move data process can take hours in this
case, tying up tape drives much longer than necessary.

For the moment, I'll ignore the question of whether TSM should be smart
enough to mount only the one or two primary pool volumes that contain the
spanned files, and use the single copy pool volume that's being emptied
for all the other data.

The way I've been handling this is rather cumbersome.  These are the steps
I take:

- after the move data's have started, issue a 'q proc' command
- cancel all of the move data's that are treating their input volume as
  'offsite'
- issue an 'audit v' command for each of the copy pool volumes being
  treated as 'offsite'
- each audit volume process fails with the following message:
  ANR2456E AUDIT VOLUME: Unable to access associated volume NN -
  access mode is set to offsite.
- this tells me which additional copy pool volume needs to be brought
  onsite in order to make a move data process treat the original volume to
  be emptied as an onsite volume
- go get the additional offsite volumes needed, load them into the
  library, update their access to readwrite, and issue move data commands
  for the original volumes being treated as offsite by TSM

Then, of course, the entire process has to be repeated because the 'audit
volume' command will only tell me *one* offsite volume that might be
needed; if a volume has files that span from both the beginning and end of
the tape, I won't know that until the 2nd round of 'move data' commands is
issued.

As you can see, this is a laborious, difficult-to-automate process.
Things would be greatly simplified if we could tell right up front which
copy pool volumes were going to be treated as offsite, and which
additional copy pool volumes would be needed to be brought onsite in order
to make the move data's all run as 'onsite'.  Having this information up
front would allow me to build a list of *all* tapes needing to be brought
onsite, requiring only one trip to the offsite location, saving all the
hassle of canceling/auditing/etc.

I *know* that the information about which offsite volumes are needed must
be easy/quick to retrieve, because the 'audit volume' commands fail
instantly with the message telling me what offsite volume is needed.

So, here's my question (finally!): can anyone provide SQL that could be
used to tell me, given a copy pool volume, are there files on that volume
that span to other copy pool volumes, and, if so, what are those other
copy pool volumes?

Or 

Re: not preempted by higher priority operation

2003-09-03 Thread William F. Colwell
I am running tsm on mvs. If this happened to me, I would
take an svc dump and open a pmr with IBM.  TSM keeps
running, the dump process doesn't crash it.

But I am in the process of rehosting to solaris.  I have 2 questions,
first how would I get the same type of dump on solaris,  second,
has anyone with this problem gotten a dump and reported it to IBM?

It sounds like a bug to me, and IBM won't fix it if they don't
know about it.

My $.02

Bill

At 11:10 AM 9/3/2003, you wrote:
Yep.
See it on my busiest server, which is still at 4.2.1.15.
Doesn't happen all the time, just sometimes.

In my case it's usually when reclamation is running (because I'm ALWAYS
running reclamation).

I have always suspected it has to do with a lock issue somewhere, but have
never figured it out.

Have to kill the reclamation to get it going again.


-Original Message-
From: Debi Randolph [mailto:[EMAIL PROTECTED]
Sent: Tuesday, September 02, 2003 5:11 PM
To: [EMAIL PROTECTED]
Subject: not preempted by higher priority operation


Hi List ...

Has this ever happened to you?

I have only two tape drives, so they pretty much run non-stop.
But when someone goes in to do a restore, whatever is running gets
preempted and the restore gets completed as a priority.
But sometimes...for some reason...TSM does not preempt whatever is
running and the restore just sits there and never runs.

It happened to us this morning.   Both tape drives were migrating data, so
nothing with a seriously high priority.Neither drive was writing out
any seriously large file or seriously large client, we co-locate and were
mounting tapes left and right, we were down to the last few clients.

The restore sat there for more than an hour in a Tape Wait state.

The customer finally killed the restore.   When I restarted it later, it
ran without issue, at that time we were doing reclamation with both drives.

Server is Windows 2000
Server is SP3
TSM Server 5.1.7.0
TSM Client 5.1.5.0 (Windows 2000)
IBM 3494 library
IBM 3590 E1A drives

This isn't the first time this has happened, and it doesn't happen all the
time.  I usually get impatient and kill whatever is running so that a tape
drive becomes available.   Whenever this happens, we don't get the
preempted message in the activity log.

ANR1440I All drives in use.  Process 237 being preempted
   by higher priority operation.


Can anyone explain why this would happen?


Thanks,
Deb Randolph




***N O T I C E***
The information contained in this e-mail, and in any accompanying
documents, may constitute confidential and/or legally privileged
information.  The information is intended only for use by the
designated recipient.  If you are not the intended recipient (or
responsible for the delivery of the message to the intended
recipient), you are hereby notified that any dissemination,
distribution, copying, or other use of, or taking of any action in
reliance on this e-mail is strictly prohibited.  If you have received
this e-mail communication in error, please notify the sender
immediately and delete the message from your system.
***

--
Bill Colwell
C. S. Draper Lab
Cambridge Ma.


Re: DSMFMT takes forever ( 15 hours). 100 gb

2003-09-03 Thread William F. Colwell
Duane,

I don't have an answer for your problem, but I can tell you the same thing
happens on solaris.

My main servers are using raw partitions as recommended, and they scream.
But to put up a test server I use space on filesystems.  For a server to run
decently, the filesystem has to be mounted with the 'forcedirectio,nologging'
mount parameters.  But strangely, the reverse is true for formatting.  When I
set up a test instance I have to unmount and remount the filesystem without
forcedirectio,nologging.  After the formatting is done I go back to
forcedirectio,nologging to run the server.

The other day I formatted 37G in about 10 minutes.

Maybe aix has similar options.
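
For reference, the dance on Solaris looks roughly like this (an untested
sketch - device, mount point, and volume size are hypothetical):

umount /tsmdisk
mount -F ufs /dev/dsk/c1t0d0s4 /tsmdisk        # plain mount: dsmfmt runs fast
dsmfmt -m -data /tsmdisk/vol1.dsm 37000        # format a ~37G volume
umount /tsmdisk
mount -F ufs -o forcedirectio,nologging /dev/dsk/c1t0d0s4 /tsmdisk
                                               # remount this way to run the server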

Hope this helps,

Bill
At 12:22 PM 9/3/2003, you wrote:
Does anyone know of or had problems with formatting additional space for
diskpools ?

TSM server : AIX 5.1 ML3 64-bit enabled. 2GB memory 2 cpu H80

Disk type : 6 member raid set about 700 GB

JFS2

Trying to format 100gb diskpool. Each time I have tried it brings the TSM
server to almost a stand still. TSM is up on the box but nothing is running
during 8 hours of the format. Should be more than enough time to format
100gb of space.

Any ideas ?

 Duane Ochs
 Enterprise Computing

 Quad/Graphics

 Sussex, Wisconsin
 414-566-2375 phone
 414-917-0736 beeper
 [EMAIL PROTECTED]
 www.QG.com



--
Bill Colwell
C. S. Draper Lab
Cambridge Ma.


Re: Collocation groups

2003-07-08 Thread William F. Colwell
I know it isn't in 5.2.0.  I don't know when it will be delivered.

Bill

At 01:50 PM 7/8/2003, you wrote:
Is the collocation groups feature in TSM 5.2.0 or is it further out?  If
further out, any word on how far out?

David

--
Bill Colwell
C. S. Draper Lab
Cambridge Ma.


Re: Script Help with Versions Data Exists

2003-07-07 Thread William F. Colwell
Ricky,

The current tsm sql engine doesn't have enough columns to give you the
answer you are looking for.  I suggest this alternate procedure:

1. Do a 'q occ nodename' and add up the files and space-used values for the
primary pools.
2. Do an export command with preview=yes - 'exp nodename filed=backupa p=y'.
This will tell you how many active files there are and the space they take up.
3. Do the math between #1 and #2.
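
To make the math concrete (numbers invented for illustration): if q occ shows
140,000 files and 900 GB in the primary pools, and the export preview reports
60,000 active files and 400 GB, then the extra versions amount to 80,000 files
taking up roughly 500 GB.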

Hope this helps,

Bill

At 03:20 AM 7/4/2003, you wrote:
Does anyone have a script that would tell me how many extra versions (we
currently keep up to 14) exist on a node and how much space they are taking
up?

The library is beginning to creak at the seams a little (STK L180); buying a
new one isn't an option, so I'm trying to find ways to cut back what's stored.
I'm hoping that with the above info I can convince the business to lower the
number of copies held per file.

Thanks in advance

..Rikk



Equitas Limited, 33 St Mary Axe, London EC3A 8LL, UK
NOTICE: This message is intended only for use by the named addressee
and may contain privileged and/or confidential information.  If you are
not the named addressee you should not disseminate, copy or take any
action in reliance on it.  If you have received this message in error
please notify [EMAIL PROTECTED] and delete the message and any
attachments accompanying it immediately.

Equitas reserve the right to monitor and/or record emails, (including the
contents thereof) sent and received via its network for any lawful business
purpose to the extent permitted by applicable law

Registered in England: Registered no. 3173352 Registered address above


--
Bill Colwell
C. S. Draper Lab
Cambridge Ma.


Re: Apparent memory leak

2002-12-24 Thread William F. Colwell
Tom, I have seen the server get sluggish after running a long time,
and I suspect a memory leak was the cause.  I can't remember if 4.2.1.9
had the problem.  I am at 4.2.3.2 and it is running well.

Bill

At 11:37 AM 12/24/2002, you wrote:
We are running a 4.2.1.9 TSM server under OS/390. We started seeing
crippling performance problems when the database got up around 15
gigabytes. Increases in bufpoolsize, region size, and available real
memory (by shifting memory from another logical partition on the
same processor complex) had only a temporary effect. It now appears
that each time we stop and restart the server we get about one day
of pretty good performance, followed by a day of marginal performance,
followed by one or more days of catastrophic performance. The buffer
pool hit rate seems to decrease steadily after each restart. This
pattern of behavior suggests a memory leak. Does the 4.2.1.9 server
have known problems in this regard?

--
Bill Colwell
C. S. Draper Lab
Cambridge Ma.



Re: Mangled administrator commands

2002-12-17 Thread William F. Colwell
Tom, I have the same setup as you except the server is at 4.2.3.1.
I saw the same thing, mangled commands;  I am glad someone else
saw it.  I thought of opening a pmr, but didn't - it is too random and
not really a big deal.

Bill

At 02:44 PM 12/16/2002, you wrote:
I recently installed 5.1.5.0 client code, including the administrative
command line, on a Windows 2000 system. The system connects to a 4.2.1.9
server running under OS/390. When I use the administrative command line
I will occasionally see a command fail with an ANR2000E message showing
something completely different than the command I just typed. When this
happens the alleged command reported in the error message isn't even
composed of ASCII characters; most of the characters displayed seem to
come from some kind of line drawing and graphics set. The problem seems
to occur more often when the server is heavily loaded. Is this a known
problem? If not, does anyone have any ideas for pinning down which piece
of software is at fault?

--
Bill Colwell
C. S. Draper Lab
Cambridge Ma.



Re: HSM

2002-12-13 Thread William F. Colwell
Mark, are you referring to an application like 
IBM® Content Manager OnDemand for Multiplatforms?
(see http://www-3.ibm.com/software/data/ondemand/mp/ )

The last time I looked at this, I saw that it did use TSM as a 2nd
level store, but it doesn't use the HSM client.  Instead it uses archive
and the product database keeps track of what documents are in what
archives.

Bill


At 05:33 PM 12/11/2002, you wrote:
Gerald Wichmann wrote:

Anyone have a particularly large HSM environment and been using it a while?
I'm curious on any experiences with the product, good or bad. I would
imagine there must be some people out there who need long term storage of
infrequently accessed files and have considered it as a solution for that.

Gerald Wichmann
Senior Systems Development Engineer
Zantaz, Inc.
925.598.3099 (w)




Hi,

I have one customer that may fall into the large HSM category.  They
have a document management program that is based around HSM.  The last
time I did an update to the server they had about 6 Million documents
each of which contains a few to many different files.  They have about
500GB of local storage for the stub files and local caching and are
using several 3995 Optical libraries to migrate the data to.

I also work with a different customer that has a similar environment but
it is not as large.  They are only managing about 2.5 to 3 million files
and require only one 3995 library with just over 1 TB of data.

Both of these systems work really well.  Performance is very good.  In
fact one of the customers has a requirement to be able to produce any one
of the millions of documents on the screen in under 3 minutes!  That
time includes searching the application database to determine the needed
files, issuing the recall to the ITSM server, loading the optical platter,
transmitting the data, assembling it, and displaying it on the screen!
I was quite impressed with how well it handles that.

Like any product it has its quirks but for the most part once it is set
up it just works!

If you have any specific questions or need help with your environment
please feel free to contact me.

--
Regards,
Mark D. Rodriguez
President MDR Consulting, Inc.

===
MDR Consulting
The very best in Technical Training and Consulting.
IBM Advanced Business Partner
SAIR Linux and GNU Authorized Center for Education
IBM Certified Advanced Technical Expert, CATE
AIX Support and Performance Tuning, RS6000 SP, TSM/ADSM and Linux
Red Hat Certified Engineer, RHCE
===

--
Bill Colwell
C. S. Draper Lab
Cambridge Ma.



Re: 4.2.3 Upgradedb slow

2002-12-09 Thread William F. Colwell
Sam,

Even though the ptf coverletter says to do an upgradedb, it isn't
necessary coming from 4.2.2.*.  I called in a pmr on this and
level 2 confirmed it.

So, what you are seeing is your server in normal operation.

Bill

At 04:54 PM 12/5/2002, you wrote:
I have just installed the 4.2.3 server upgrade and have been running the
UPGRADEDB process for almost 3 hours now.   Has anyone else experienced
this and, if so, how long will this take?  In the past, it has only run
a couple of minutes.

This is an upgrade from 4.2.2 on OS/390 R10.  DB is about 11GB 85% used.

Thanks
Sam Sheppard
San Diego Data Processing Corp.
(858)-581-9668

--
Bill Colwell
C. S. Draper Lab
Cambridge Ma.



Re: Cleanup backupgroups, system objects

2002-11-21 Thread William F. Colwell
Matt - I recently did the same server upgrade and got the same
result doing cleanup backupgroups -- 0 anything deleted.

I have never been clear on the history of the problem, that is
what server levels caused it.  So I have to wonder if the cleanup
command still has a bug in it for the OS/390 server, in that it didn't
find any problems.  I am thinking of calling in a problem to IBM/tiv.

My server level history is -

4.2.1.9 (for quite a while)
4.2.2.10 (a month?)
4.2.2.12 (a month?)
4.2.3.1 (starting 11/16)

Just before going to 4.2.2.12 I suppressed all system object backups
because of severe performance problems.

Bill


At 01:41 PM 11/21/2002, you wrote:
We assumed we had the dreaded system objects problem because we
were running 4.2.2.0, we have a bunch of Windows clients, and our
database was growing at a rate that we didn't like.

So a couple of days ago, we upgraded to 4.2.3, and ran CLEANUP BACKUPGROUPS.
It didn't seem to accomplish anything.  It finshed fairly quickly,
compared to some of the reports I've read here, and finished with the
message:

ANR4730I CLEANUP BACKUPGROUPS evaluated 28729 groups and deleted 0 orphan
groups with 0 group members deleted with completion state 'FINISHED'.


Somebody on this list suggested using the command
QUERY OCC * SYSTEM OBJECT

to get an idea of how bad the problem was.  I tried that, before and
after the upgrade, and got the same result: zip

tsm: UKCCSERVER1> QUERY OCC * SYSTEM OBJECT
ANR2034E QUERY OCCUPANCY: No match found using this criteria.
ANS8001I Return code 11.


This doesn't seem to make sense to me, because if I pick a client at
random and issue a query occ for that client, I see some system
object filespaces.

tsm: UKCCSERVER1> q occ grabthar *

Node Name  Type  Filespace Name  FSID  Storage Pool   Number of  Physical     Logical
                                       Name           Files      Space        Space
                                                                 Occupied     Occupied
                                                                 (MB)         (MB)
---------  ----  --------------  ----  -------------  ---------  -----------  -----------
GRABTHAR   Bkup  \\grabthar\f$   1     BACKUPOFFSITE  1          0.03         0.03
GRABTHAR   Bkup  \\grabthar\f$   1     BACKUPONSITE   1          0.03         0.03
GRABTHAR   Bkup  \\grabthar\e$   2     BACKUPOFFSITE  698,512    115,597.08   115,411.15
GRABTHAR   Bkup  \\grabthar\e$   2     BACKUPONSITE   698,512    115,987.18   115,411.19
GRABTHAR   Bkup  \\grabthar\c$   3     BACKUPOFFSITE  20,558     4,651.16     4,570.71
GRABTHAR   Bkup  \\grabthar\c$   3     BACKUPONSITE   20,558     4,642.22     4,570.71
GRABTHAR   Bkup  SYSTEM OBJECT   4     BACKUPOFFSITE  106,465    13,598.76    13,598.76
GRABTHAR   Bkup  SYSTEM OBJECT   4     BACKUPONSITE   106,465    13,598.76    13,598.76


The pieces just don't seem to be fitting together. What am I missing?
--


Matt Simpson --  OS/390 Support
219 McVey Hall  -- (859) 257-2900 x300
University Of Kentucky, Lexington, KY 40506
mailto:[EMAIL PROTECTED]
mainframe --   An obsolete device still used by thousands of obsolete
companies serving billions of obsolete customers and making huge obsolete
profits for their obsolete shareholders.  And this year's run twice as fast
as last year's.

--
Bill Colwell
C. S. Draper Lab
Cambridge Ma.



Re: Server AIX vs NT

2002-11-11 Thread William F. Colwell
Ricardo,

I will be rehosting soon from OS/390 to either W2k or sun.
What precisely was your capacity issue?  We are leaning to W2k
right now, if our requirements are below your problem we may be
OK with W2k.

TIA,

Bill

At 03:39 PM 11/11/2002, you wrote:
Not to mention the uptime difference that you will gain from an AIX box.
We had an NT TSM server that we had to replace with an industrial strength
box due to capacity and down time issues running on NT.
Hopefully this will help.



From: Whitlow, Don, Don.Whitlow@QG.COM
Sent by: ADSM: Dist Stor Manager, [EMAIL PROTECTED]
Date: 11/11/2002 11:23 AM
To: [EMAIL PROTECTED]
Subject: Re: Server AIX vs NT
(Please respond to ADSM: Dist Stor Manager)






 From what we've seen, the difference between Wintel and RS6K/pSeries
hardware has been the I/O thruput. In our experience, Wintel
hardware/software just cannot push I/O through like our RS6K hardware
running AIX. That may be in large part due to the fact that Win32 seems to
be the limiting factor, so Linux on Intel may be ok, but we see generally
at
least 2x the I/O thruput on the AIX boxes vs. NT.

Just my 2 cents...

Don Whitlow
Quad/Graphics, Inc.
Manager - Enterprise Computing
[EMAIL PROTECTED]


-Original Message-
From: Ray Baughman [mailto:rbaughman@NATIONALMACHINERY.COM]
Sent: Monday, November 11, 2002 10:46 AM
To: [EMAIL PROTECTED]
Subject: Server AIX vs NT


Hello,

We are looking to replace our TSM server hardware, we are currently running
the TSM server on an IBM H50.  The bean counters are saying that an NT
server would be a lot cheaper than a UNIX server.  They have decided it
needs to be either an IBM UNIX server or an NT server.  Has anyone had any
experience with both NT and AIX servers, and if so what information do you
have regarding performance, stability etc. with one over the other.
Basically I've been told to either cost justify AIX or I'll end up on NT.

Any help would be appreciated.

Ray Baughman
Engineering Systems Administrator
TSM Administrator
National Machinery LLC
Phone 419-443-2257
Fax 419-443-2376
Email [EMAIL PROTECTED]

--
Bill Colwell
C. S. Draper Lab
Cambridge Ma.



Re: default directory for the macro command

2002-10-18 Thread William F. Colwell
Joel, I made a .bat file to start dsmadmc in the macs directory,
then I made a shortcut to execute the .bat file.  Here is the file -

rem change into the folder that holds the admin macros
cd macs
rem point dsmadmc at the client directory and options file
set dsm_dir=c:\program files\tivoli\tsm\baclient
set dsm_config=c:\program files\tivoli\tsm\baclient\dsm.opt
rem quote the path - it contains spaces
"c:\program files\tivoli\tsm\baclient\dsmadmc.exe"


The shortcut has the 'start in' field set to C:\tsmadmin.  Therefore the
macros are in c:\tsmadmin\macs.

Hope this helps,

Bill
At 05:46 PM 10/17/2002, you wrote:
Is there a way to set the default directory for the macro command to
something other than the installation directory?

--
Bill Colwell
C. S. Draper Lab
Cambridge Ma.



Re: tsm client is down-level with this server version

2002-08-29 Thread William F. Colwell

Roger, when I had to untangle the unicode cooties, I contacted
level 1 and then level 2, and got the unsupported commands to do it.
But they also told me that an export, delete node, import process
would do the trick too, because the one nasty bit wasn't exported
or wasn't imported, or something to that effect.

If the nodes aren't too big, this may be a simple way to solve
the problem without interacting with the users.
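Roughly, the command sequence would be something like this - the device
class, volume, and node names are made up, so check the admin reference
before trying it:

export node nodea filedata=all devclass=tapeclass
delete filespace nodea *
remove node nodea
import node nodea filedata=all devclass=tapeclass volumenames=vol001,vol002

The export picks scratch volumes; note their names from the activity log
so you can feed them to the volumenames parameter on the import.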

Hope this helps,

Bill

At 12:03 PM 8/29/2002, you wrote:
We've got half a dozen clients stuck like this right now. The first case
I had was an important, tenured, and extremely impatient professor who
had converted from Win 98 to Win XP, decided that XP stinks, and
wanted to format his hard drive and restore his comfortable old Win 98
system from ITSM. (This is what backup is for, right?) Because this was
basically a point-in-time restore, deleting the node was not a possible
strategy. He was incredulous that it took me several days of
communication with IBM support to straighten it out and peppered me with
emails demanding that I work faster on it the whole time I was
exchanging special commands and their outputs.

The second was an aggressively confused user who was backing up two
computers using one node, and who thereby un-did the fix mere hours
after I had spent several hours with IBM Support fixing it. I refused to
fix it again for this user and made their node restore-only until I
communicated with their supervisor about our one computer per node
policies.

I have not even begun to figure out the rest, because I know it is a
huge black hole for my time. Before I even call Support, I've got to
reach each end-user and figure out how they caused this, to ensure that
they don't inadvertently un-do it after we (me and Support) spend
several hours fixing it.

Those who would want to try the solution on their own are misguided; it
cannot be done - Andy Raibeck is absolutely correct in this regard. You
must call Tivoli Support, but while they will be able to help, it will
be a lengthy and complicated process. Budget several hours per case.
And type carefully!!!

I really wish the existence of a correcting APAR was made more public.
I'm going to install it soon (today!), but I'll still have those half
dozen clients who appear to have unicode cooties to untangle.

Would it be possible to provide a safe and tested script, or an APAR
that introduces a new secret command, that can untangle this mess in
some automated way? The current method of fixing it is way too costly.

Roger Deschner  University of Illinois at Chicago [EMAIL PROTECTED]

--
Bill Colwell
C. S. Draper Lab
Cambridge Ma.



Re: Rationalise policy domains

2002-08-29 Thread William F. Colwell

John, if you set the query schedule period to a low number,
say 4 hours, you will be able to change the domain and schedules
without the client scheduler clinging to a cached copy of a schedule
you have since deleted.  In the admin ref, see the set command for
query schedule period, and try to find some guidance on it
in the admin guide.  I haven't done this but I think it will work.
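A minimal sketch of the sequence, with made-up names (newdom, daily_inc,
and node nodea):

set queryschedperiod 4
update node nodea domain=newdom
define association newdom daily_inc nodea

Once all the clients have picked up the new schedule, you can set the
query schedule period back to whatever it was.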

Bill

At 11:30 AM 8/27/2002, you wrote:
Historically I have a number of policy domains based on client os.
Well it seemed a good idea at the time.
My question is I now want to rationalise all the policy domains into say two,
but with the least disruption or work at client level(none if possible)
My main concern is that I will lose all the schedules and their associations,
and that I will need on netware for example to reload the scheduler on the
client to pick
up the new schedule information.
I think I may also need to stop/ start the scheduler for nt clients
Am I correct in these beliefs?
I know that I will need to ensure that the new domains contain the same
management classes, to avoid rebinding, but are there any other things I need to
be aware of.
Thanks
John





--
Bill Colwell
C. S. Draper Lab
Cambridge Ma.



Re: Backup of Win2K Fileserver

2002-08-01 Thread William F. Colwell

Guillaume,

Set the serialization in the backup copygroup to dynamic and changingretries to
0.  Your txnbytelimit allows transactions as large as 250 meg; you don't
want to let anything cause a termination and rollback of such a large transaction,
which is why, I assume, you have compressalways = yes.

On subsequent incremental backups you can use a different serialization.
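In commands, that first-backup setup might look like this; the domain,
policyset, and class names are made up, and changingretries is a client
option, not a server one:

upd copygroup mydom myset mymc standard ser=dynamic
activate policyset mydom myset

and in the client's dsm.opt:

CHAngingretries 0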

Hope this helps,

Bill

At 04:30 PM 7/31/2002, you wrote:
Hello

I have a BIG performance problem with the backup of a win2k fileserver. It used to be 
pretty long before but it was manageable. But now the sysadmins put it on a Compaq
storageworks SAN. By doing that they of course changed the drive letter. Now it has 
to do a full backup of that drive. The old drive had  1,173,414 files and 120 GB  of
data according to q occ. We compress at the client. We have backup retention set to 
2-1-NL-30. The backup had been running for 2 weeks!!! when we cancelled it to try to
tweak certain options in dsm.opt. The client is at 4.2.1.21 and the server is at 
4.1.3 (4.2.2.7 in a few weeks). Network is 100 mb. I know that journal backups will 
help
but as long as I don't get a full incremental in, it doesn't do me any good. Some of
the settings in dsm.opt :

TCPWindowsize 63
TxnByteLimit 256000
TCPWindowsize 63
compressalways yes
RESOURceutilization 10
CHAngingretries 2

The network card is set to full duplex. I wonder if an FTP test will show some
Gremlins in the network...?? Will try it..

I'm certain the server is ok. It's an F80 with 4 processors and 1.5 GB of RAM, though
I can't seem to get the cache hit % above 98. My bufpoolsize is 524288. DB is 22 GB
73% utilized.

I'm really stumped and I would appreciate any help

Thanks

Guillaume Gilbert

--
Bill Colwell
C. S. Draper Lab
Cambridge Ma.



Re: Send Me Latest Favorite SQL Statement for the TSM SQL Session at Share in SF

2002-07-24 Thread William F. Colwell

Paul,

here is SQL I run frequently as a macro from the command line.
It makes a reasonable size list of some or all
the backups for a node that can be viewed with notepad .
The parameters are -
%1 - the nodename quoted in caps.
%2 - either = or like, unquoted not caps
%3 - if %2 is =, then one filespace name quoted and in the correct case
 or if %2 is like, then some quoted string ending in '%', it can be just '%'
%4 - a date to find only backups after a certain date; quoted and in the
form 'mm/dd/yyyy'.

Here is the macro now, I hope you can use it in your presentation.
Unfortunately I can't make SHARE this time.

- - - cut after - - -
/*  */
/* macro file to select files for a user*/
/*  */
set sqldatetimeformat i
set sqldisplaymode w
set sqlmathmode r
commit
select cast(rtrim(filespace_name) as char(16)) as filespace, -
   cast(substr(char(type),1,1)||'-'||substr(char(state),1,1) as char(3)) as state, -
   cast(backup_date as char(19)) as bac_date_time, -
   cast(deactivate_date as char(19)) as del_date_time, -
   cast(rtrim(hl_name)||trim(ll_name) as char(128)) as path_and_name -
 from adsm.backups -
 where node_name = %1 -
   and filespace_name %2 %3 -
   and date(backup_date) >= %4 -
  > c:\tsmadmin\sqllists\backups4.txt
commit

- - - cut before - - -
At 02:24 PM 7/20/2002, you wrote:
I am doing this presentation again in San Francisco.  Would appreciate your
favorite SQL statements to include.

Thanks.

Paul D. Seay, Jr.
Technical Specialist
Naptheon, INC
757-688-8180

--
Bill Colwell
C. S. Draper Lab
Cambridge Ma.



Re: Server partitioning using TSM library sharing

2002-07-23 Thread William F. Colwell

Scott, I haven't done this but I have thought about it before.
I suggest some additional steps for safety.

1. empty out the diskpool and delete the volumes.  Make sure
the copypool is a complete image.  Then take the dbbackup and restore
to server b.  Add half of the disk volumes to the diskpool in a, the other half
to the diskpool in b.
2. Assume the single server had a tape pool called POOL.  Mark it
read only in both servers and create new tapepools - poola and poolb.
Update POOL in both servers to set reusedelay to a high value.
3. Now do the cross deletes.  Then reclaim POOL to poola in server a
and to poolb in server b.
4. When POOL is empty in both servers, delete the volumes from server a
in a way that returns them to scratch, and from server b in a way that doesn't.
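As admin commands on server a, a rough sketch - the pool names, device
class, scratch limit, and thresholds are all made up, and server b is the
mirror image with poolb:

upd stgpool pool access=readonly reusedelay=9999
define stgpool poola tapeclass maxscratch=200
upd stgpool pool reclaimstgpool=poola reclaim=50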

You will need to invest in some extra media to do this, but it will be the
safest way.

If you go ahead, let us know how it works out.

Good luck!

Bill

At 04:26 PM 7/19/2002, you wrote:
Hello fellow TSMers,

We have a fairly large TSM server that we are planning to partition into 2
servers. In other words, add a second server and split the client nodes
across the two.

The hardware environment is two AIX TSM servers, both fibre attached to a
10 drive 3494 ATL.

What I am proposing to do (and looking for input) is as follows:

1) Restore a copy of TSM server-A's database to the new TSM server-B
2) Reconfigure the library and drive configuration on TSM server-B to be a
library client to the shared library manager server-A.
3) Identify the nodes that are to be associated with each server.
4) Identify the volumes associated with the nodes for each server (Storage
pools are currently fully collocated by node).
5) Run update libvol to change the owner on the appropriate tapes to be
owned by server-B.
6) Delete the filespace and node definitions for the nodes NOT associated
with each server.
7) Reconfigure half the nodes to point to the new server-B.
8) Pat myself on the back and take the rest of the day off :-)

It seems too simple... Has anyone done this before? (hard to type with my
fingers crossed)

The big question of course is: what happens to volume ABC123 on server-A
when the filespaces are deleted and the volume empties, gets deleted from
the storage pool, and returned to scratch? Server-B still thinks there is
data on that tape and this proposed solution would only work if he was
allowed to request a mount and read that volume.

Any and all feedback is appreciated!

Scott
Scott McCambly
AIX/NetView/ADSM Specialist - Unopsys Inc.  Ottawa, Ontario, Canada
(613)799-9269

--
Bill Colwell
C. S. Draper Lab
Cambridge Ma.



Re: Region size for TSM

2002-07-22 Thread William F. Colwell

Matt, I had a similar problem and called support; level 2 explained
why region=0M is bad.  TSM takes the JCL region size and getmains
a small fraction of that to use for a cellpool.  The object of the cellpool
is to avoid doing getmains and freemains.  When it sees 0M, a
too-small default cellpool gets built.  I changed my JCL from 0M to
1280M and the problem went away.

When you see your server take all the CPU and stay there, it has
probably run out of cellpool space and started doing getmains.
I suspect there is a bug since cancelling processes doesn't always
make the server return to its normal low CPU state, but I haven't pressed
for a fix since changing the JCL avoids the problem.

Hope this helps,

Bill

At 07:57 AM 7/17/2002, you wrote:
Hello all,
I am looking at the TSM 5.1 Quick Start book for Os/390 + z/OS.  In
chapter 2 there is a section  'REGION SIZE FOR YOUR TIVOLI STORAGE MANAGER
SERVER.  The section goes through a process for determining the best region
size to assign to TSM.  The last sentence says,  'Specifying 0M will result
in poor server performance.'WHY?
I am dealing with a long outstanding, not solved problem.  I have my
region size up at 1000M.  (I am using a 9672-x57 that has about 7GB of
memory on the lparand a TSM server with a 25GB DB and 230 clients).   The
problem is TSM will 1 or 2 times a week  start using all of 1 processor to
do some simple work, like run 5 migration processes.   I will take TSM down
and up.  It then continues on getting work done faster and using 1/4 of one
processor.   Does TSM do something with memory assignments internally that
too large of a REGION causes a problem?
Matt

--
Bill Colwell
C. S. Draper Lab
Cambridge Ma.



Re: expiration

2002-07-11 Thread William F. Colwell

Geoff,

Inactive versions will be expired from the server according to the
RETEXTRA and RETONLY parameters.  Eventually, only the
active set will remain.

If you want to keep all the inactive versions, you can copy the domain
to a new domain, update the copygroups to have all 9's, activate
the policy set, and then update the node to be in the new domain.

Hope this helps,

Bill
At 12:23 PM 7/11/2002, you wrote:
The situation is we have a node that is going to have it's daily backups to
TSM stopped. They don't want to delete the filespaces at this time. The
question is, do all the versions remain available since nothing has
triggered the system to think files were deleted? Or will older versions roll
off at some point and leave only the latest version of every file on the
server? Remember the server has only been removed from a daily backup
schedule.

Thanks for the help.

Geoff Gill
TSM Administrator
NT Systems Support Engineer
SAIC
E-Mail:mailto:[EMAIL PROTECTED] [EMAIL PROTECTED]
Phone:  (858) 826-4062
Pager:   (877) 905-7154

--
Bill Colwell
C. S. Draper Lab
Cambridge Ma.



Re: Expire data on a server that doesn't exist any longer.

2002-07-11 Thread William F. Colwell

To trim out the extra versions, I would copy the domain the
node is now in to a new domain.  Update the copygroups to these values -

verexists 1
verdeleted 1
retextra 0
retonly nolimit

Activate the policyset and then update the node to be in the new domain.
The next expiration will trim all the inactive versions except for the last 
version of a deleted file.
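Spelled out as commands, with newdom and node xyz as stand-ins, and
assuming the policy set and management class are both named standard
(substitute whatever q copygroup shows for your domain):

copy domain olddom newdom
upd copygroup newdom standard standard standard -
  verexists=1 verdeleted=1 retextra=0 retonly=nolimit
activate policyset newdom standard
upd node xyz domain=newdom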

Hope this helps,

Bill

At 10:17 AM 7/11/2002, you wrote:
 If I have backup of a server which is out of production. What can I do and
 how, if I want to expire all extra backup copies (versions data exist)
 except the last backup (versions data deleted)?
 I can change mgmtclass but I still have to do a backup to rebind my files.
 If I use a client and log on with 'dsmc -virtualnodename=xyz' and exclude
E:\...\* , what happens then? Will it expire xyz's E:\ files or the client
 node E:\ files?

A dsmc command as such won't affect file retentions at all. You will need
to run an incremental backup.

A 'dsmc inc' with the exclude statement and virtualnodename option shown
would affect files from xyz. However, that approach would make me very
nervous about the possibility of an incremental backup with the exclude
statement but not the option. I would be inclined to copy the options
file, put the special exclude statement and a 'nodename xyz' statement in
the copy, and run dsmc with an 'optfile' option telling it to use the
copy.

It is easier to describe the effect of a backup that excludes everything
on the E drive in terms of what happens to backups of a file than in terms
of what happens to backups of an entire population of files. Consider an
active backup of a file that was on xyz's E drive. A backup with the
exclude statement described above would cause this backup to switch to
inactive status. If the number of versions of the file was greater than
the verdeleted parameter of the relevant copy group the next 'expire
inventory' on the TSM server would delete enough of the oldest versions
to get the number of versions down to the verdeleted parameter value.
If there was still more than one version of the file subsequent 'expire
inventory' operations would remove each extra version after it reached
the age limit set by the retextra parameter. Eventually, 'expire
inventory' would remove the last backup version when it passed the age
limit set by the retonly parameter. 

--
Bill Colwell
C. S. Draper Lab
Cambridge Ma.



Re: Offsite Storage of Copy Stgpool Tapes

2002-06-27 Thread William F. Colwell

Hi Chet,

I don't have drm either; here is what I do.

Run an update volume command; it lists every volume it updates.
Capture the output to create eject commands.  You can add 'preview=yes'
to the command to test it.

The update volume command is --

upd v * acc=offsite wherestg=copypool_name whereacc=readw wherest=ful,fil
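To turn that into ejects, one way - the library name lib1 and the file
names are made up - is to run the update redirected to a file,

upd v * acc=offsite wherestg=copypool_name whereacc=readw wherest=ful,fil > upd.out

then edit upd.out so each volume it lists becomes a line like

checkout libvolume lib1 ABC123 remove=yes checklabel=no

and run the result as a macro.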

Hope this helps,

Bill Colwell

P.S. My son goes to RPI, but I don't think he has ever backed up!  (gr)
And he lost his hard drive in January! (double g)



At 03:36 PM 6/27/2002, you wrote:
Hi,

We're in the process of creating off-site copies of our primary stgpools.
I've copied primary stgpools to copy stgpools, and have events scheduled to
keep the copy stgpools up to date.

Now, I need to figure out the mechanics of moving the copy tapes offsite,
and keeping track of them.

We don't have Disaster Recovery Manager licensed, so that limits our
options somewhat.

Any guidance would be appreciated.

Chet Osborn
Staff Systems Programmer
Rensselaer Polytechnic Institute

--
Bill Colwell
C. S. Draper Lab
Cambridge Ma.



Re: unknown API-Error 425

2002-06-07 Thread William F. Colwell

Wolf-C,

Here is a part of the dsmrc.h file that documents the tsm api return codes.

/*---*/
/* 400-430  for options  */
/*---*/
#define DSM_RC_INVALID_OPT  400 /* invalid option   */
#define DSM_RC_NO_HOST_ADDR 405 /* Not enuf info to connect server  */
#define DSM_RC_NO_OPT_FILE  406 /*No default user configuration file*/
#define DSM_RC_MACHINE_SAME 408 /* -MACHINENAME same as real name   */
#define DSM_RC_INVALID_SERVER   409 /* Invalid server name from client  */
#define DSM_RC_INVALID_KEYWORD  410 /* Invalid option keyword   */
#define DSM_RC_PATTERN_TOO_COMPLEX  411 /* Can't match Include/Exclude entry*/
#define DSM_RC_NO_CLOSING_BRACKET   412 /* Missing closing bracket inc/excl */
#define DSM_RC_OPT_CLIENT_NOT_ACCEPTING 417/* Client doesn't accept this option*/
   /* from the server   */
#define DSM_RC_OPT_CLIENT_DOES_NOT_WANT 418/* Client doesn't want this value*/
   /* from the server   */
#define DSM_RC_OPT_NO_INCLEXCL_FILE 419   /* inclexcl file not found*/
#define DSM_RC_OPT_OPEN_FAILURE 420   /* can't open file*/
#define DSM_RC_OPT_INV_NODENAME 421/* used for Windows if nodename=local
  machine when CLUSTERNODE=YES   */
#define DSM_RC_CLUSTER_NOT_ENABLED  422/* cluster server is not running or   */
   /* installed when CLUSTERNODE=YES */
#define DSM_RC_OPT_NODENAME_INVALID 423/* generic invalid nodename   */

As you can see, 425 isn't here, but then, that is what the message is saying, isn't it?
I think you should contact Tivoli support.

Hope this helps,

Bill


At 09:12 AM 6/7/2002, you wrote:
 Dear specalist and customers,
 
 I need some help for the TDP 2.2 Product of our Exchange Servers.
 We back up our information stores over a GB/Ethernet tcp/ip connection
 with very bad performance and long durations.
 After some tuning we got an unknown API error code 425.
 
 But first some hardware information:
 We use Compaq servers with four 450 MHz processors, 4 GB RAM,
 a GB/Ethernet card, Exchange 5.5 incl. SP4, Win NT 4.0 incl. SP6a,
 TSM B/A Client 4.2.1.16, TDP 2.2, and in the backend a Comparex Tetragon
 disk subsystem.
 
 To tune the performance we opened a problem record with IBM and changed
 some parameters (buffersize from 1024 to 8192 and buffers from 3 to 8).
 After this tuning we got the code 425 for the first time, and we haven't
 gotten past the problem yet when using GB/Ethernet.
 Does anyone else have bad experience with this error?
 
 IBM told us that this problem is based on the Microsoft API interface.
 There is said to be an HRBACKUPREAD problem with filling up the buffers
 with data as fast as it should be done.
 The time is probably too long, and then there is a timeout followed by
 the unknown API error.
 Is there someone who can help us?
 Maybe a lot of TDP users have the same, unexplained problem.
 It would be nice to hear from you soon.
 
 kind regards
 
 Wolf-Christian Lücke
 
 K-DOI-44
 Volkswagen AG Wolfsburg
 Storage Management
 Windows Platform
 
 Tel.: 05361/9-22124
 E-Fax:05361/9-57-22124
 
 E-Mail:   mailto:[EMAIL PROTECTED]
 

--
Bill Colwell
C. S. Draper Lab
Cambridge Ma.



Re: Deleting a portion of a filespace

2002-06-06 Thread William F. Colwell

Luke, it can be done but it is a multi step process involving both
the server and the client.  Do this before the data is deleted from the client.

1. In the server make a new mgmtclass called DELETE_NOW.
Set the backup copygroup numbers to their lowest allowed values -
1,0,0,0 (vere, rete, verd, reto).  Activate the policyset.
2.  Go to the client and add an include statement -
 include /data/mars02lib/.../* DELETE_NOW
3. Run an incremental.
4. delete the data from the client.
5. run an incremental
6. run expire inventory

Do you think you're done? Not yet!  This process only cleans up
files that still existed on the client.  Files that were deleted from the
client were not rebound.  To clean them up, do a SQL select to find
the path and name.  Turn the output into a script to make dummy
versions of these files on the client.  Then go back to step 3.  You can
also get the names from DSMC on the client -
q ba /data/mars02lib/* -su=y -ina > out.file
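A sketch of the select, macro style - the node name, filespace name, and
output file are placeholders, and the list still needs massaging into the
dummy-file script:

select distinct cast(rtrim(hl_name)||trim(ll_name) as char(128)) -
 from adsm.backups -
 where node_name = 'NODENAME' -
   and filespace_name = '/data' -
   and hl_name like '/mars02lib/%' -
   and state = 'INACTIVE_VERSION' -
  > deleted.txt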

Good luck,

Bill

At 07:39 PM 6/5/2002, you wrote:
Hi All,
Client version: 4.2.1.0
Client OS: Solaris 5.6
Server OS: Solaris 8
TSM Server Version: 4.2.1.15

We have an NFS-mounted NetApp filer that retains their critical data.
The data is in a library with a number of different subdirectories for
each project.  They require we keep all data forever until they specify
it may be deleted.  They've asked us to set up verexists and
verdeleted accordingly.  Since TSM views the mount /data as one filespace,
I'm wondering if we can purge a portion of that filespace when they
approve it.  For example a subdirectory may be /data/mars02lib/...
After the project has been terminated they would like us to delete the
mars02lib but still retain all of the other libraries under /data.  Any
suggestions/advice would be highly appreciated.  Thanks!

Luke Dahl
NASA-Jet Propulsion Laboratory
818-354-7117

--
Bill Colwell
C. S. Draper Lab
Cambridge Ma.



Re: Virtualnodename issue concerning Windows and AIX

2002-05-16 Thread William F. Colwell

I had this type of thing happen to a Mac client.  A win* 4.2.1.26 client
touched a mac node, it didn't even back anything up, just connected, but
it caused the mac client to say it was downlevel.  So I got the 5.1 mac client
and installed it and it still said it was downlevel!

At this point I called a problem into Tivoli;  the level 1 rep can give out a
procedure of show commands and other unsupported things to fix the
node record of the mac node to 'undownlevel' it.  It has to do with unicode
(no surprise there).

I think the rep said it was a bug, I don't know what the resolution will be or when.

So call Tivoli support and you can undownlevel your aix node.

Hope this helps,

Bill


At 03:43 PM 5/16/2002 +0200, you wrote:
Eric,

You are right about those statements. But the thing that scares me, is that
it is at all possible to connect as an AIX node to TSM, using windows TSM
client software, regardless of the version.
You can train users to any extent, but you cannot always predict the actions
they will take. What I would like to see is that the TSM server prevents
the update in the database for a specific node when the client platforms
don't match, or when there are only read actions involved, like queries or
restores.



Ilja G. Coolen


  _

ABP / USZO
CIS / BS / TB / Storage Management
Telefoon : +31(0)45  579 7938
Fax  : +31(0)45  579 3990
Email: [EMAIL PROTECTED] mailto:[EMAIL PROTECTED]
Intranet
: Storage Web
http://intranet/cis_bstb/html_content/sm/index_sm.htm

  _

- Everybody has a photographic memory, some just don't have film. -


-Oorspronkelijk bericht-
Van: Loon, E.J. van - SPLXM [mailto:[EMAIL PROTECTED]]
Verzonden: donderdag 16 mei 2002 13:31
Aan: [EMAIL PROTECTED]
Onderwerp: Re: Virtualnodename issue concerning Windows and AIX


Hi Ilja!
I don't think it's a filespace renaming problem. I think you are running
into the fact that once you connect with a newer client version, you can't
go back to an old version.
This is stated in the readme:
- Data that has been backed up or archived from a TSM V4.2 client using the
encryption feature, or from any TSM V4.2 Windows client, cannot be restored
or retrieved to any previous level client, regardless of the TSM server
level. The data must be restored or retrieved by a V4.2.0 or higher level
client.
Personally I don't see this as a bug. One just should not connect to TSM
pretending to be another user as long as you are using different client
levels or OS.
Kindest regards,
Eric van Loon
KLM Royal Dutch Airlines


-Original Message-
From: Ilja G. Coolen [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, May 15, 2002 17:08
To: [EMAIL PROTECTED]
Subject: Virtualnodename issue concerning Windows and AIX


Hello there,

We stumbled onto the following problem, the hard way i may add.

We have a mixed environment using AIX, Windows NT/2K and Sun Solaris.
The Sun and NT boxes all run a very recent TSM client level.
Some AIX boxes are running TSM client code by now as well, but most of the
aren't on AIX 4.3.3 yet, so we can't upgrade them all.

Our TSM server runs on AIX 4.3.3.x at level 4.2.1.9.

Here comes the problem.
One of our DBA's connected to TSM server using a windows tsm client,
pretending to be a certain AIX node. Due to the optionsettings, the
AUTOFSRENAME action kicked in, updating the FS information on that AIX box.

The q node showed the following.

tsm: ABPTSM1> q node rs6sv090*



Node Name   Platform  Policy Domain  Days Since    Days Since     Locked?
                      Name           Last Access   Password Set
----------  --------  -------------  ------------  -------------  -------
RS6SV090    AIX       DOMRS_TST      1             1              No
RS6SV090Z   WinNT     DOMRS_TST      1             182            No


The *Z node is the renamed version, so we could continue the backups to the
original node name, being rs6sv090.
As you can see, the *Z AIX box is shown as an NT box. Huh?!

When connecting the original node from the AIX command line, we are told
that the client is down-level, so we cannot connect.
No backups or restores are possible using the original node. No backups or
restores are possible from any AIX node, no matter which client code we run.

To make the original data available, we exported the *Z data, and imported
it into the original nodename, without updating the client information using
the replacedefs=no option during the import. This took a while though.

I don't think it is good functionality that we can use a windows tsm
client to connect as an AIX box, rendering the AIX data unavailable.
To me, this feels like a BUG.

I'm working on the following questions:
Did any of you run into such a problem too?
Does anyone know of a way to prevent someone from performing the connect
like this?
Is this 

Re: MPTHREADING

2002-05-14 Thread William F. Colwell

David - I am running exactly the same tsm and OS/390, but with mpthreading on.
I haven't seen any problems caused by it.  I turned it on at the suggestion of
tsm level 2 support to fix a pmr, I don't remember what the problem was.
I have just one processor, but level 2 assured me it would help anyway.
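For reference, it is a one-line entry in the server options member,
picked up at the next restart:

MPTHREADING YES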

Bill

At 03:10 PM 5/14/2002 -0400, you wrote:
We are running OS/390 2.10 with TSM 4.2.1.9; currently we have MPTHREADING
set to NO.
 I was considering changing it to yes. Has anyone had problems with the
MPTHREADING?
Also, what if any improvements can I expect?

--
Bill Colwell
C. S. Draper Lab
Cambridge Ma.



Re: Recovery Log almost 100%

2002-05-03 Thread William F. Colwell

Paul - I have run *SM on a mainframe since 1992 - my company
participated in both the alpha and beta test.  DB2 was never used
for the database.  What I have heard is that the database engine
in tsm is an early model of DB2, so we are all using DB2 in a sense.

No doubt you are right that things are optimized, but the question is
if the internal database is a sow's ear, how much can it be optimized;
it will never be a silk purse.

I don't know if tsm would perform better with today's version of db2
or oracle.  People with large tsm db's aren't looking for better performance,
we are looking for better manageability.  With commercial dbms's you
don't have the logging problem, you can do multithreaded db backups,
you can reorg it without an extended outage.

In any case I think this is just an academic debate; I am 99% certain that
it will never happen, if only for the reason that vendors like black boxes
to protect their software assets.

- Bill

At 07:13 PM 5/2/2002 -0400, you wrote:
ADSM on the mainframe used to use DB2 and they killed it because of the cost
to the customer, both maintaining another product and the actual license
cost.  The TSM relational engine is optimized for TSM processing.  A general
relational database cannot hold a candle to it for performance on
equal hardware.  The other issue is Tivoli does not want you mucking around
in the tables updating them with update commands.  The referential integrity
is paramount to TSM stability.  The real missing piece in TSM's DB engine is
the ability to partition the database and parallel backup/restore.  More
important is the 4K block size that kills IO performance on sequential
operations such as a backup/restore.  I think Tivoli will see the need and
do something about it.  These have been discussed at Share.

You will find that black box is what most customers require to protect
themselves.  I realize if they used a general RDBMS that we could extend the
code of TSM significantly further than with the current command
capabilities.  But, that is exactly what they want to prevent.  You end up
with large customers developing extensions that are impacted by Tivoli
architectural changes, and who carry a loud voice about making changes that
affect them and thus prevent progress.  I am one of them; trust me, it
happens.

This all said.  If you can define the requirements that a RDBMS would solve
for you that I have not mentioned, we will carry those requirements forward.

Paul D. Seay, Jr.
Technical Specialist
Naptheon, INC
757-688-8180


-Original Message-
From: Prather, Wanda [mailto:[EMAIL PROTECTED]]
Sent: Thursday, May 02, 2002 2:05 PM
To: [EMAIL PROTECTED]
Subject: Re: Recovery Log almost 100%


 and that is JUST the problem.

I used to (try to) run an IBM lan management product that used DB/2 as its
database underneath. IT WAS IMPOSSIBLE.

Every problem we ran into, we got finger pointing - the product people said
they were waiting for DB/2 to fix the problem, the DB/2 people said they
couldn't fix it because it was a product problem.

YOU DON'T WANT TO GO THERE!

CRINGE AND BE AFRAID

My opinions and nobody else's...
Wanda Prather


-Original Message-
From: William F. Colwell [mailto:[EMAIL PROTECTED]]
Sent: Thursday, May 02, 2002 12:28 PM
To: [EMAIL PROTECTED]
Subject: Re: Recovery Log almost 100%


Tom, I like the taste of this food for thought!

I have raised this issue with TSM developers at SHARE, and the short summary
of their response is "cringe".  So I don't think it will happen anytime soon,
if ever, and most likely it will never happen.  I agree with you completely that
it would be a great option for sites with large databases.  Plus TSM would
have 2 development teams working on the product - the current one plus the
database developers who are always trying to make Oracle, DB2 etc. faster.

- Bill


At 06:20 AM 5/2/2002 -0700, you wrote:
I wonder, also, if there is still any discussion about supporting the
use of an alternate RDBMS underneath TSM. It is quite clear that there
are many more sites with database sizes in the
25-50GB+ range. Five years ago I felt very lonely with a database
of this size, but given the discussions on the listserv over the past
year I feel more comfortable that we are no longer one of the only
sites supporting TSM instances that large. It has always seemed to me
that the database functions of TSM have been the most problematic
(deadlock issues, log full issues, SQL query performance problems,
complicated and unclear recommendations for physical database layout,
etc.). All of these problems have been solved by Oracle, DB2, and
Sybase. Granted there is the issue that plugging in an external
database adds greatly to the complexity of TSM, and reduces its black
box-ness, but I think the resources are available to administer such a
beast at the large sites that require very large databases.

More food for thought *early* on a Thursday morning.

 -- Tom

Thomas A. La Porte
DreamWorks SKG

Re: Recovery Log almost 100%

2002-05-02 Thread William F. Colwell

Tom, I like the taste of this food for thought!

I have raised this issue with TSM developers at SHARE, and the
short summary of their response is "cringe".  So I don't
think it will happen anytime soon, if ever, and most likely it will
never happen.  I agree with you completely that it would be
a great option for sites with large databases.  Plus TSM would
have 2 development teams working on the product - the current one
plus the database developers who are always trying to make Oracle,
DB2 etc. faster.

- Bill


At 06:20 AM 5/2/2002 -0700, you wrote:
I wonder, also, if there is still any discussion about supporting
the use of an alternate RDBMS underneath TSM. It is quite clear
that there are many more sites with database sizes in the
25-50GB+ range. Five years ago I felt very lonely with a database
of this size, but given the discussions on the listserv over the
past year I feel more comfortable that we are no longer one of
the only sites supporting TSM instances that large. It has always
seemed to me that the database functions of TSM have been the
most problematic (deadlock issues, log full issues, SQL query
performance problems, complicated and unclear recommendations for
physical database layout, etc.). All of these problems have been
solved by Oracle, DB2, and Sybase. Granted there is the issue
that plugging in an external database adds greatly to the
complexity of TSM, and reduces its black box-ness, but I think
the resources are available to administer such a beast at
the large sites that require very large databases.

More food for thought *early* on a Thursday morning.

 -- Tom

Thomas A. La Porte
DreamWorks SKG
[EMAIL PROTECTED]




--
Bill Colwell
C. S. Draper Lab
Cambridge Ma.



Re: Last write date for disk volumes

2002-05-02 Thread William F. Colwell

Tom, I am at the same level as you on OS/390 and get the same results,
which is goodness.  I think it is sloppy documentation.  The online help and the
pdf have all the fields filled in, such as %reclaimable, times_mounted, etc.,
which make no sense for a random access volume.

At 04:36 PM 5/2/2002 -0400, you wrote:
I manage a 4.2.1.9 server running under OS/390. When I run a 'query volume'
command against my disk storage pool volumes the 'Approx. Date Last Written'
fields are blank. When I execute 'help q v' the online help
includes sample output showing a last write date for a disk storage
pool volume. Am I the victim of sloppy code or sloppy documentation?

--
Bill Colwell
C. S. Draper Lab
Cambridge Ma.



Re: Database Unload

2002-04-18 Thread William F. Colwell

Jeff,

I unload some of the tsm tables daily and then load them into db2.
The method will depend on your server platform and the target dbms.

My server is on OS/390 and the target is db2.  Briefly the process
is a batch job that does 'select * from nodes' with the output
going to a file.  The file is massaged to make it db2 loadable, then the
file is passed to the db2 load utility.  I do this daily for nodes, filespaces
and associations, weekly for occupancy.
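On a distributed server the same idea is only a couple of lines - the
id, file, and table names here are made up, and the dsmadmc output still
needs its headers trimmed before the load:

dsmadmc -id=admin -password=secret -commadelimited "select * from nodes" > nodes.csv
db2 "load from nodes.csv of del insert into tsm.nodes"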

Just this week I was asked to analyze the effects of changing the copygroup
parameters.  I am doing this by 'select columns from backups' for some
randomly chosen filespaces, then loading the output into db2.

If you want more info you can email me directly.

--
Bill Colwell
C. S. Draper Lab
Cambridge Ma.

At 04:24 PM 4/17/2002 +, you wrote:
I have read a document presented at a SHARE conference in July 2001 in
relation to the TSM database structure. In the SQL section of that
document, there is mention of a method to unloading to a true relational
database for
complicated query processing. However, the document does not describe the
method. Anyone out there have a suggestion as to how we can achieve this.

The document was written by Dave Cannon from the TSM development team

Thanks

Jeff White






Re: How do I know what tapes my data is on? - Part II

2002-04-15 Thread William F. Colwell

Roy - create a separate tape storagepool for this one customer.
Allow it to use unlimited scratches. After each month's archive
there will be one or more tapes in readwrite access with a status of
full or filling.  Update these tapes to readonly, eject them and send offsite.
Whenever you need to find the tape from a particular month's archive, do
this select -

Select volume_name from volumes
 where stgpool_name = 'specialarchivepool'
 and year(last_write_date) = year-in-question
  and month(last_write_date) = month-in-question
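Setting the pool up in the first place is a single define - the device
class name and scratch limit are made up - plus an archive copygroup
whose destination points at it:

define stgpool specialarchivepool 3590class maxscratch=50 collocate=no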

Hope this helps,

--
Bill Colwell
C. S. Draper Lab
Cambridge Ma.



At 10:39 AM 4/15/2002 +0100, you wrote:
Sorry Guys,

Maybe I didn't explain myself very well. Here is the situation:-

We have a customer who now requires that EVERY month, an archive is
taken to a management class that may be 7 or 10 years.

Obviously over a period of time, this data will be migrated from disk
to tape, and then the ONSITE copies will just sit in the library taking
up valuable slots.

I don't want this to happen!  Over a prolonged period, we will run out
of slots!  So, the solution that I came up with is to save directly to
tape, and then send those tapes offsite, thus freeing up the slots.
Now obviously, this is NOT an ideal solution, I realise that!  But what
other options do I have?

Kind Regards,

Roy Lake
RS/6000  TSM Systems Administration Team
Tibbett  Britten European I.T.
Judd House
Ripple Road
Barking
Essex IG11 0TU
0208 526 8853
[EMAIL PROTECTED]




Timesheets in excel

2002-04-05 Thread William F. Colwell

Someone sent this to the adsm-l discussion list by mistake.  I am passing it
on as a curiosity that companies use excel to report time.

- bill

At 10:42 AM 4/5/2002 -0500, you wrote:
 joseph_wholey_040402.xls

Regards,
Joe Wholey
TGA Distributed Data Services
Merrill Lynch
Phone: 212-647-3018
Page:  888-637-7450
E-mail: [EMAIL PROTECTED]



--
Bill Colwell
C. S. Draper Lab
Cambridge Ma.



Re: Monthly Backups, ...again!

2002-04-04 Thread William F. Colwell

Do you use copypools?  How much do you want to spend?
How fast do you need to restore from a monthly?

How about this scenario; it is just an idea, I don't actually do it  -

make a 2nd copypool that stays on site.  Don't do any reclaims of it.
At the start of the month do a db snapshot backup and retain the tape for a year.
When you need to do a restore from a monthly backup, restore the
appropriate monthly db snapshot to a new instance of TSM.  See the admin
reference appendix for 'restore a db without a volumehistory file'.
In the restored instance, mark all the tapes in the primary pool destroyed, and the
diskpool volumes too.
Also be sure all the tapes in the offsite copy pool are marked offsite or unavailable.
Now a client file restore should access the onsite copypool.  The file should
be there since you have not done any reclaims of this pool.

A year into this process you can start doing 'move data recon=yes' on the
onsite copypool volumes, selecting volumes written more than a year ago.

The cost is mostly media if you have enough processor to make the 2nd copypool.
Your database will be bigger and you will need extra disk space for the db restore.

What do you think?

Hope this helps,

Bill Colwell

At 08:51 AM 4/4/2002 -0500, you wrote:
A question was brought up while discussing retention policies.

Currently we have the following retentions:

Policy    Policy    Mgmt      Copy      Versions  Versions  Retain    Retain
Domain    Set Name  Class     Group     Data      Data      Extra     Only
Name                Name      Name      Exists    Deleted   Versions  Version
--------  --------  --------  --------  --------  --------  --------  -------
COLD      ACTIVE    COLD      STANDARD  2         1         5         30
NOVELL    ACTIVE    DIRMC     STANDARD  30        1         120       365
NOVELL    ACTIVE    STANDARD  STANDARD  30        1         120       365
RECON     ACTIVE    DIRMC     STANDARD  36        3         75        385
RECON     ACTIVE    MC_RECON  STANDARD  26        1         60        365
STANDARD  ACTIVE    DIRMC     STANDARD  26        1         60        365
STANDARD  ACTIVE    STANDARD  STANDARD  26        1         60        365
UNIX      ACTIVE    MC_UNIX   STANDARD  30        1         60        30


I believe that this provides for daily backups for over a month.

There was a request to have the following:
1)   Daily backups for a week.
2)   Weekly backups for a month.
3)   Monthly backups for a year.

I believe we are providing 1 and 2.  We are providing daily backups for a
month.

How can I provide monthly backups for a year?
I know that I could take monthly archives, but this would exceed our backup
windows and would increase our resources (db, tapes, etc.).
Also, I know we could lengthen our retention policies.
Also we could create backup sets. (tons of tapes!)

How are other people handling this?

Thanks,


Marc Levitan
Storage Manager
PFPC Global Fund Services

--
Bill Colwell
C. S. Draper Lab
Cambridge Ma.



Re: OS390 server patches

2002-04-03 Thread William F. Colwell

Yes - I tried to get there just before I replied.  I set an icon up in
Hummingbird to store the id, password, and path.  The id and password
used to be in the anr readme member that smp installs into sanrsamp.
If you have a v3 sanrsamp lying around you can find it in there.  For some
reason it was removed from the v4 members.  Or, as someone else said,
get it from support.

Hope this helps,

Bill Colwell

At 10:53 AM 4/3/2002 -0500, you wrote:
Have you actually tried to get there ?

I just did and No Such Directory !


Zoltan Forray
Virginia Commonwealth University - University Computing Center
e-mail: [EMAIL PROTECTED]  -  voice: 804-828-4807




From: William F. Colwell, [EMAIL PROTECTED]
Sent by: ADSM: Dist Stor Manager, [EMAIL PROTECTED]
Date: 04/01/2002 12:37 PM
To: [EMAIL PROTECTED]
Subject: Re: OS390 server patches
(Please respond to ADSM: Dist Stor Manager)


Zoltan, The patches are at

service2.boulder.ibm.com/storage/tivoli-storage-management/patches/server/MVS-OS390

Hope this helps,

Bill

At 11:57 AM 4/1/2002 -0500, you wrote:
Anyone got a clue where the OS390 TSM server patches have gone ?

The link from Tivoli goes to:
ftp://service.boulder.ibm.com/storage/tivoli-storage-management/maintenance/server/v4r2/MVS-OS390/
  but it says it can't find them.

Also, my docs say to go to service2.boulder.ibm.com/storage   but there
isn't such a directory ?

What gives ?


Zoltan Forray
Virginia Commonwealth University - University Computing Center
e-mail: [EMAIL PROTECTED]  -  voice: 804-828-4807

--
Bill Colwell
C. S. Draper Lab
Cambridge Ma.

--
Bill Colwell
C. S. Draper Lab
Cambridge Ma.



Re: TSM database maximum recommended size

2002-04-03 Thread William F. Colwell

Joe - my db is pretty big on s/390 --

tsm: NEW_ADSM_SERVER_ON_PORT_1600> q db f=d

  Available Space (MB): 185,400
Assigned Capacity (MB): 185,400
Maximum Extension (MB): 0
Maximum Reduction (MB): 28,396
 Page Size (bytes): 4,096
Total Usable Pages: 47,462,400
Used Pages: 40,125,241
  Pct Util: 84.5
 Max. Pct Util: 84.5
  Physical Volumes: 50
 Buffer Pool Pages: 65,536
 Total Buffer Requests: 100,564,344
Cache Hit Pct.: 96.91
   Cache Wait Pct.: 0.00
   Backup in Progress?: No
Type of Backup In Progress:
  Incrementals Since Last Full: 3
Changed Since Last Backup (MB): 1,641.23
Percentage Changed: 1.05
Last Complete Backup Date/Time: 04/02/2002 15:46:09

The performance is ok I think, on hardware that isn't all that fast;
s390 = 9672-ra6 (80 mips, 1 processor)
disk = stk sva 9500
tape = stk 9840 in a Powderhorn silo.

I do a full dbb on the weekend; it takes 6 hrs.
I do daily incr dbb's to sms managed disk.
Expiration is the slowest thing; it runs only on Saturday from 5 am to 11 pm;
it takes 3 Saturdays to make a complete pass.

We back up 50-100 GB a day, mostly small files from end-user desktop machines.

While I am satisfied with all this, the new management has decreed the death
of ibm mainframe computing, so I will be moving off to ??? starting in the
next 6 months.  I am interested in the sizing issue like a lot of others, but I
don't have any good answers.

--
Bill Colwell
C. S. Draper Lab
Cambridge Ma.



At 12:17 PM 4/3/2002 -0500, you wrote:
Can anyone comment on the DB size on an S390 TSM deployment?

-Original Message-
From: Bill Mansfield [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, April 03, 2002 11:56 AM
To: [EMAIL PROTECTED]
Subject: Re: TSM database maximum recommended size


80 GB is in the range of what I think of as barely manageable on your
average midrange Unix computer (IBM H80, Sun E450).  I know folks run
bigger DBs (somebody's got one in excess of 160GB) but you need
substantial server hardware and very fast disks.

_
William Mansfield
Senior Consultant
Solution Technology, Inc





Scott McCambly [EMAIL PROTECTED]
Sent by: ADSM: Dist Stor Manager [EMAIL PROTECTED]
04/03/2002 11:08 AM
Please respond to ADSM: Dist Stor Manager


To: [EMAIL PROTECTED]
cc:
Subject:TSM database maximum recommended size


Hello all,

Our data storage requirements have been driving our TSM database usage to
new heights, and we are looking for some references to 'just how big is too
big' in order to help build a business case for deploying additional
servers.

Our current TSM database is 86.5GB and although backup and restore
operations are running fine, database management tasks are becoming
painfully long.  I personally think this database size is too big.  What
are your thoughts?
How many sites are running with databases this large or larger? On what
hardware?

IBM/Tivoli's maximum size recommendations seem to grow each year with the
introduction of more powerful hardware platforms, but issues such as
single-threaded db backup and restore continue to pose problems to unlimited
growth.

Any input would be greatly appreciated.

Thanks,

Scott.
Scott McCambly
AIX/NetView/ADSM Specialist - Unopsys Inc.  Ottawa, Ontario, Canada
(613)799-9269



Re: OS390 server patches

2002-04-01 Thread William F. Colwell

Zoltan, The patches are at

service2.boulder.ibm.com/storage/tivoli-storage-management/patches/server/MVS-OS390

Hope this helps,

Bill

At 11:57 AM 4/1/2002 -0500, you wrote:
Anyone got a clue where the OS390 TSM server patches have gone ?

The link from Tivoli goes to:
ftp://service.boulder.ibm.com/storage/tivoli-storage-management/maintenance/server/v4r2/MVS-OS390/
  but it says it can't find them.

Also, my docs say to go to service2.boulder.ibm.com/storage   but there
isn't such a directory ?

What gives ?


Zoltan Forray
Virginia Commonwealth University - University Computing Center
e-mail: [EMAIL PROTECTED]  -  voice: 804-828-4807

--
Bill Colwell
C. S. Draper Lab
Cambridge Ma.



Re: People running OS/390 TSM servers - maybe for the development guys......

2002-03-20 Thread William F. Colwell

Christo,

I have had high CPU problems in the past.  I took svc dumps and reported
it to tsm.  Level 2 confirmed that the dump showed lots of activity in
getmain & freemain.  They said tsm should not be doing any getmains or
freemains unless it runs out of its preformatted cellpools.  The size of
the preformatted area is a small percentage of the region in the JCL.
I was running with region=0m; for some forgotten reason I thought that was the right
thing to do.  Level 2 said to change it to a real value; I set it to 512m
and the server has behaved better ever since.

But there are still high CPU problems now and then; they seem to occur after the
server has been running a long time.  Just last week I set up automation to
restart the server on Sunday.

Hope this helps,

Bill Colwell
C. S. Draper Lab
Cambridge Ma.



At 12:55 PM 3/20/2002 +0200, you wrote:
Hi all,

OK - For those of you who do not know anything about
OS/390 or don't run TSM servers on OS/390 this might be
very boring and maybe a bit cryptic - ignore or just
read for the fun.
Now for the people that still run OS/390 TSM servers:

I have always had my doubts about the scalability of
OS/390 when it comes to a TSM server.
Some of you might have seen me posting last year and early
this year about the size of your TSM environment and if
you are happy with the performance of TSM on OS/390 - only
one guy answered giving me specs on the environment they
run (Thanks to William Colwell), but other than that
most people said they moved off OS/390 and are now
running AIX or Sun or even Win2K servers.
William described their environment as an 80 Mips
processor box and a 166Gig db 88% used - and on top
of all that he has good performance from this.

Our environment is a 38 Gig db 80% used and we have
crappy performance. Now in Adsm V3 a new parameter
was introduced called MPTHREADING YES or NO. What
this does is tell TSM that it can start multiple
TCB's on multiple CP's - if you have multiple CP's
on your box.
Now we enabled this and noticed that TSM gets its
knickers in a knot when there are too many things
happening and the system is CPU constrained. In
WLM it is guarenteed to get CPU and in WDM you can
see that about 30% of the time it is delayed for
CPU. What we have done now in Omegamon is to check
how many work each of the TCB's does and then try
and figure out why TSM would perform crappy even though
it is getting about 50% of the total box.
Now - here comes the part where development might
be able to shed some light:

The TCB called dsmserv (the server task) has an
lmod called IEANUC01 that has a CSECT of IGVGPVT
that uses about 90-95% of all the CPU cycles - remember
that this is one TCB assigned to one CP.
On further investigation our IBM'er came back and said
this IGVGPVT part controls getmain/freemain's for TSM.
Now here comes the question:
How can TSM be using 90% of a CP by just doing getmain/freemain
while all the other TCBs that have a CP allocated just sit
and wait for getmain/freemain's? This looks like a
serious scalability issue with TSM in a multiple
CP environment. According to our performance and MVS
guys the only way we are going to squeeze more juice
out of our OS/390 system with TSM is to split the
workload of TSM and run two TSM servers on one LPAR,
or upgrade each CP to a more powerful CP.
Is this in line with development, and the way it should work?

Thanks
Christo Heuer
ASBA Bank
Johannesburg
SOUTH AFRICA

--



Re: Determining deleted files

2002-03-20 Thread William F. Colwell

Scott, you can get the names of deleted files from SQL queries on the
backups table, but with difficulty.  I don't have any SQL to post here, but
here is what you need to do.

Select from the backups table where the deactivate date is the current date,
assuming the backup happened between 00:00 and now when you are
running the query.  The results will be files deleted and files changed.
Direct the output to file 1.

Then select from backups where backup_date is today and save the output to file 2.

Compare files 1 & 2; anything in file 1 but not in file 2 is a deleted file.  I suggest doing
2 selects and a compare because I don't think the tsm SQL processor would do this
efficiently in one query, if it can do it at all.
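
A rough macro sketch of those two selects (MYNODE and the output paths are placeholders,
and I am assuming the date() function and the current_date register behave here as in DB2):

/* file 1 - versions deactivated today, i.e. files deleted or changed */
set sqldisplaymode w
commit
select filespace_name, hl_name, ll_name from backups -
  where node_name='MYNODE' and state='INACTIVE_VERSION' -
    and date(deactivate_date)=current_date > c:\file1.txt
/* file 2 - versions backed up today, i.e. files changed */
select filespace_name, hl_name, ll_name from backups -
  where node_name='MYNODE' -
    and date(backup_date)=current_date > c:\file2.txt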

As for drastic size changes, I don't know of any way to determine that.  The contents
table reports sizes, but usually it is the size of an aggregate and not the size of an
individual file.

At 08:30 AM 3/20/2002 -0700, you wrote:
I would like to use TSM to determine what files have been deleted from a
file server the previous day (or any given day if possible).
For example, on Monday TSM backed up the file bob.txt, but on Tuesday that
file had been deleted so it was not backed up.  I would think that some sort
of query on Retain Only Version or Versions deleted may do it, but I don't
know how.

A related question:  Can I use TSM to determine if any files have been
drastically reduced in size?  For example on Monday bob.txt was 100K when
backed up, but on Tuesday it was only 1K when backed up.  I would not have a
specific file in mind, so this would also need to be done on all files.

Thanks for any insight.

Scott Foley

--
Bill Colwell
C. S. Draper Lab
Cambridge Ma.



Re: Large TSM Client Restore Question

2002-03-20 Thread William F. Colwell

Ian, with today's tsm each tape would be mounted sequentially.  With
tomorrow's tsm each tape could be mounted in parallel if you have enough
drives.  This is a feature in tsm 5.1, due out very soon - 4/9/02 is what I have
heard.

At 04:33 PM 3/20/2002 +, you wrote:
hi

Does anybody know the answer to this:

If you perform a full restore for a very large tsm client and that clients
data spans several tape media (say 4 volumes, even with collocation on)
would tsm try and mount all 4 volumes to 4 tape drives and then run the
restore or would it call the 4 volumes sequentially to 1 tape drive as and
when
until the restore is complete?

I'm doing a DR exercise and am trying to workout how many simultaneous
client restores I could do with 4 tape drives.

Any help would be appreciated.

Thanks

--
Bill Colwell
C. S. Draper Lab
Cambridge Ma.



Re: ADSM sees 'readwrite' as 'offsite'

2002-03-20 Thread William F. Colwell

Jim,

I have seen this also.  I suspect that the tape starts with part 2 of a file that
has part 1 on a tape still offsite, or ends with part 1 of a file that has part 2
on an offsite tape.  The server can't switch between onsite and offsite mode
for copypool moves and reclaims, so it has to use offsite mode.

At 11:30 AM 3/20/2002 -0600, you wrote:
ADSM Smart People,

AIX:  4.3.2
ADSM: 3.1.2.1 (I know it's old, but we are upgrading!)
==
I have an ongoing problem that I haven't found the
solution to in the ADSM archives.

I will bring a volume onsite, will set that volume
acc=readw (from 'offsite'). So far, so good. However,
when I attempt to move data from that volume to another,
ADSM continues to see it as 'offsite' (even though if
I do: 'q vo acc=readw', the volume will show up in the
list AND if I do: 'q vo acc=offsite', the volume will
NOT show up in that list).

Consequently, in order to do a 'move data', multiple tape
mounts are usually required (from the primary pool)--sometimes
as many as 30 or more tapes to empty the data from one volume!.
This happens to about 1/3 of the volumes I bring onsite.

Obviously, it gets very painful after a while, and I would
love to know if there is a workaround for this little 'problem'.

Thanks.

Jim

Jim Coen
[EMAIL PROTECTED]
Manager, Application Software Services
ADSM Administrator
Listserv Administrator/News Administrator
  
Information Technology Services   Washburn University
785-231-1010 Ext 2304 Topeka, KS 66621

Wisdom has two parts: 1) Having a lot to say
  2) Not saying it

--
Bill Colwell
C. S. Draper Lab
Cambridge Ma.



Re: TSM SQL Script

2002-03-20 Thread William F. Colwell

Jack - I think you should just remove the whole 'group by' clause.
You are not selecting a column function such as count(*) or avg(column).

Unless you have a very small database or a very fast machine,
this query will run a long time.

Hope this helps,

Bill
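
For reference, here is the script with the group by removed; distinct already
collapses the duplicate rows:

/* Tape-Volumes */
set sqldatetimeformat i
set sqldisplaymode w
set sqlmathmode r
commit
select distinct copy_type, left(node_name, 12), -
   left(volume_name, 8), left(stgpool_name, 16) -
  from adsm.volumeusage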

At 12:10 PM 3/20/2002 -0600, you wrote:
Yep, I am an SQL newbie, but some help would be nice.
The script at the bottom terminates with this message:
  ANR2938E The column 'VOLUME_NAME' is not allowed in this context; it must
either be named in the GROUP BY clause or be nested within an aggregate
function.
and points to the left(volume_name, 8) portion of the query.  I have played
with it and still get this message.  Could someone please explain why this is
occurring and how I can fix my script?

TIA ... Jack

/* Tape-Volumes */
/* Listing is for all known nodes, and the tape volumes required to do a
restore */
set sqldatetimeformat i
set sqldisplaymode w
set sqlmathmode r
commit
select distinct copy_type, left(node_name, 12), -
   left(volume_name, 8), left(stgpool_name, 16) -
  from adsm.volumeusage  -
  group by node_name, copy_type

--
Bill Colwell
C. S. Draper Lab
Cambridge Ma.



Re: Delete filespace issue with Server v4.2.1.6?

2002-03-14 Thread William F. Colwell

Dale, yes, I have seen this.  The problem is with unicode support.  Do
'help q fi' and 'help del fi' to see the new parameters that may be required.
For instance, here is a 'q fi' of my own machine (winxp, client 4.2.1.26, server at 
4.2.1.9).
Note the namet parameter and see what happens without it.

tsm: NEW_ADSM_SERVER_ON_PORT_1600> q fi wfc1450 \\wfc1450\c$ namet=uni f=d

  Node Name: WFC1450
 Filespace Name: \\wfc1450\c$
 Hexadecimal Filespace Name: 5c5c776663313435305c6324
   FSID: 17
   Platform: WinNT
 Filespace Type: NTFS
  Is Filespace Unicode?: Yes
  Capacity (MB): 28,058.8
   Pct Util: 33.9
Last Backup Start Date/Time: 03/13/2002 21:01:44
 Days Since Last Backup Started: 1
   Last Backup Completion Date/Time: 03/13/2002 21:03:10
   Days Since Last Backup Completed: 1
Last Full NAS Image Backup Completion Date/Time:
Days Since Last Full NAS Image Backup Completed:


tsm: NEW_ADSM_SERVER_ON_PORT_1600> q fi wfc1450 \\wfc1450\c$ f=d
ANR2034E QUERY FILESPACE: No match found using this criteria.
ANS8001I Return code 11.

You can delete by fsid now, this may be what you want.
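
For example, against the filespace shown above (FSID 17) - a sketch only, untested:

q fi wfc1450 17 namet=fsid f=d
dele fi wfc1450 17 namet=fsid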


Hope this helps,

Bill Colwell

At 08:24 AM 3/14/2002 -0500, you wrote:
Has anyone seen this on 4.2.1.6, or am I just sleep-deprived?
Server is on AIX 4.3 ...
Neither query nor delete will match a file system name



tsm: ADSM> q filespace WINNODE

Node Name  Filespace      FSID  Platform  Filespace  Is Files-   Capacity    Pct
           Name                           Type       pace            (MB)   Util
                                                     Unicode?
---------  -------------  ----  --------  ---------  ---------  ---------  -----
WINNODE    \\WINNODE\c$      1  WinNT     NTFS       Yes         17,233.8   19.6
WINNODE    \\WINNODE\d$      2  WinNT     NTFS       Yes        184,260.9   22.2
WINNODE    \\WINNODE\e$      3  WinNT     NTFS       Yes         17,273.0   62.6
WINNODE    \\WINNODE\f$      4  WinNT     NTFS       Yes         17,273.0   17.1
WINNODE    SYSTEM            5  WinNT     SYSTEM     Yes              0.0    0.0
                                          OBJECT

tsm: ADSM> dele filespace WINNODE \\WINNODE\e*
ANR0852E DELETE FILESPACE: No matching file spaces found for node WINNODE.
ANS8001I Return code 11.

tsm: ADSM> dele filespace WINNODE \\WINNODE\e$*
ANR0852E DELETE FILESPACE: No matching file spaces found for node WINNODE.
ANS8001I Return code 11.

tsm: ADSM> dele filespace WINNODE \\WINNODE\e
ANR0852E DELETE FILESPACE: No matching file spaces found for node WINNODE.
ANS8001I Return code 11.

tsm: ADSM> dele filespace WINNODE WINNODE\e*
ANR0852E DELETE FILESPACE: No matching file spaces found for node WINNODE.
ANS8001I Return code 11.

tsm: ADSM> dele filespace WINNODE WINNODE\e$*
ANR0852E DELETE FILESPACE: No matching file spaces found for node WINNODE.
ANS8001I Return code 11.

tsm: ADSM> dele filespace WINNODE WINNODE\e
ANR0852E DELETE FILESPACE: No matching file spaces found for node WINNODE.
ANS8001I Return code 11.

tsm: ADSM> dele filespace WINNODE e*
ANR0852E DELETE FILESPACE: No matching file spaces found for node WINNODE.
ANS8001I Return code 11.

tsm: ADSM> dele filespace WINNODE e$*
ANR0852E DELETE FILESPACE: No matching file spaces found for node WINNODE.
ANS8001I Return code 11.

tsm: ADSM> dele filespace WINNODE e
ANR0852E DELETE FILESPACE: No matching file spaces found for node WINNODE.
ANS8001I Return code 11.

--
Bill Colwell
C. S. Draper Lab
Cambridge Ma.



Re: logmode/bufpoolsize/mpthreading

2002-02-28 Thread William F. Colwell

Hi Joe,

At 03:39 PM 2/28/2002 -0500, you wrote:
Environment:
Server: 4.1.3.0
Platform: S390

1.  Logmode: We're going to change the Logmode from Normal to Roll Forward.  What 
determines the amount of disk I'll require?

You need enough log space to hold all the db changes between scheduled db
backups, so it depends on how much activity you have and when you want db
backups to occur.  You should also set the dbbackuptrigger as a safety valve;
it will kick off a dbb when the log gets nn% full.  I am on s/390 also; I set
the trigger up to direct incremental dbb's to sms managed disk.  My log is
7,146 megs; if you upgrade the server code to 4.2.1.9 you can make the log
more than 5.3GB, up to 13GB.
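
A sketch of the trigger definition - the device class name is a placeholder for one
pointing at your disk:

def dbbackuptrigger devclass=dbbkup logfullpct=60 numincremental=6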

You can do a 'q log f=d' and monitor the cumulative consumption value to see
how much you need for a day or between scheduled db backups.

2.  Bufpoolsize: We're going to increase from 24576K to ?.  What's the determining 
factor?

Monitor the cache hit % in 'q db f=d'.  The rule of thumb is to get it up to
98%.  My bufpoolsize is 262144k.  You can dynamically change this with the
setopt command.
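
For example, to match the value above without a server restart:

setopt bufpoolsize 262144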


3.  Mpthreading:  We're going to turn it on.  Are there any considerations I should 
concern myself with?

Can't say much about mpthreading; my processor has only 1 engine, but tivoli
recommended it anyway, so I turned it on.

None of this info is in the manual that I'm aware of.  I get a lot of 'try
this' from Tivoli support.  Unfortunately, I don't work in an environment
where I can try this without first knowing what the repercussions are.


Regards,
Joe Wholey
TGA Distributed Data Services
Merrill Lynch
Phone: 212-647-3018
Page:  888-637-7450
E-mail: [EMAIL PROTECTED]

--
Bill Colwell
C. S. Draper Lab
Cambridge Ma.



Re: space consumed by outlook files

2002-02-22 Thread William F. Colwell

Holly,

If your users leave outlook running, very little space is consumed in tsm because
the client can't access the .pst file.  I have been searching for a way to get outlook
stopped by the scheduler preschedulecmd.  Has anyone figured out how to
do this yet?

We don't use outlook for mail and most users don't use it at all.  But some of my
users use it for calendar and contacts etc., and they don't stop it before they
go home, so the backups always fail.

To see if this is happening you can search the actlog, if you are receiving client
messages (I am, but I forget how to start it).  In dsmadmc -

set sqldisplaymode wide
select * from actlog where message like 'ANE4007E%Outlook.pst%' > c:\failedoutlookbackups.txt

Hope this helps,

--
Bill Colwell
C. S. Draper Lab
Cambridge Ma.

At 02:53 PM 2/22/2002 -0500, you wrote:
We are trying to determine how much of our backup data is PST files.  Can
this be determined from the TSM database?  Thanks.

Holly L. Peppers
Blue Cross & Blue Shield of Florida
[EMAIL PROTECTED]





Re: 4.2 server memory corruption

2002-02-20 Thread William F. Colwell

Tom, you should think seriously about upgrading to 4.2.1.9.
I have experienced problems like you describe, but not since I went to 4.2.1.6;
since then I have gone up to .9 without problems.

Also, I used to run with region=0m in the JCL for tsm.  While working with level
2 on a problem like what you had, I was told to specify a region size because
during initialization tsm takes some small percent of the region and formats it
as cellpools to avoid getmains and freemains; with region=0m it used a default
which was too small.

The site to download 4.2.1.9 is NOT an anonymous FTP site, you need an id and
password which at one time was listed in the read-me that goes into sanrsamp
during the smp apply, or you can get it from Tivoli support.



At 03:41 PM 2/20/2002 -0500, you wrote:
We have a 4.2.0.0 server running under OS/390. We recently started seeing
increasingly serious performance problems with no obvious cause. The server
froze up entirely this morning. I had some difficulty shutting it down. The
server accepted a 'halt' command and generated an 'ANR5963I ADSM server
termination complete' complete message fairly rapidly. However, the started
task continued to run, and continued to use large amounts of CPU time, even
after the message. I finally cancelled the started task. The server came
back up without incident and is now performing much better. One of my
co-workers noted that the server had been up for 32 days, which was by far
the longest stretch of continued uptime since we went to 4.2 (we had
previously IPLed OS/390 every week or two for reasons unrelated to TSM).
This raises the suspicion of some kind of memory corruption that causes
server performance to degrade over time. Is the 4.2.0.0 server known to
suffer from any problems of this sort?

--
Bill Colwell
C. S. Draper Lab
Cambridge Ma.



Re: Clientoptset

2002-02-15 Thread William F. Colwell

Oliver - the message means that the node has the compression option set to
no.  Update the node and set compression to client, which lets the dsm.opt on the
client (or your client option set) choose, or set compression to yes, which forces
compression on.
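
For example (substitute your node name):

update node yournode compression=client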

Hope this helps,

--
Bill Colwell
C. S. Draper Lab
Cambridge Ma.

At 07:48 AM 2/15/2002 +0100, you wrote:
I defined a client option set with the parameter compress=yes, force yes.
Then I associate a node with this client option but when I start an backup
at the node, the system tells me that the compression is forced off by the
server.

Why ?

Kind regards

Oliver Martin

Hypo Informatik-
Gesellschaft m.b.H.
Telefon: +43(0)5574/414-145 



Re: How can I implement Backup Sets?

2002-02-15 Thread William F. Colwell

Joe, you can define a deviceclass with type=file, maxcap=512m.
Then you can generate the backupsets to the new deviceclass.
The backupset will be written to one or more dynamically allocated
files on your sms managed disks.  When the backup set is done you can
either FTP the files to the secondary server, or you can FTP them
locally to a machine with a cd burner, burn the cd's and mail them.

Since the restore will be done without accessing the server, the name
created doesn't have to be preserved, so you can rename the
files to bset.f1, bset.f2 etc. to make it easier for the people doing the restore.
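
A rough sketch with placeholder names - exact file devclass parameters vary by server
platform:

def devclass bsetfile devtype=file maxcap=512m
generate backupset branch01 branch01_set * devclass=bsetfile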

Hope this helps,

--
Bill Colwell
C. S. Draper Lab
Cambridge Ma.


At 01:49 PM 2/15/2002 -0500, you wrote:
TSM v413.0 running on os/390
primarily backing up NT4.0 and ADSMv3.1.06

Situation:
We backup branch offices spread thru out the US.  Each branch office has a primary 
server and an exact duplicate server running the os and the apps (no user data).  
Frequently, we will be notified
ahead of time (days) that the primary server needs to be rebuilt.  This requires us 
to restore the user data upon completion.  At times it can be as much as 13G going 
across fractional T1 or T3.  Due
to the network, this can be a really time consuming restore.

Question:
Is there a way I can utilize the secondary server as a medium to create Backup Sets 
on?
There are no local devices (zip, jazz etc...) at these branch offices.  Couldn't use 
anyhow because there is no personnel at these branch offices to physically load the 
media.  Any suggestions as to
how I can use the secondary server to put backupsets on and greatly reduce my restore 
time to the primary server?

Regards, Joe



Re: WG: subfile backup configuration

2002-02-13 Thread William F. Colwell

Hi,

I am just starting to roll out 4.2.1.20 win clients.  I will use both journalling
and subfile backups.  Every client gets a special schedule that runs once
every 4 weeks to force a non-journaled, non-subfile backup.  Here
is one of the schedules -

def schedule standard late_evening_wxp_nojrnl1a -
 act=incremental -
 opt='-nojournal -subfilebackup=no' -
 startd=02/05/2002 -
 dur=150 startt=23:45 duru=m period=28 -
 peru=d dayofweek=any

Hope this helps,

Bill Colwell

At 05:52 PM 2/12/2002 -0500, you wrote:
I believe if you delete the /cache subdirectory on the client, it will force
the next backup to send a new base file.


Wanda Prather
The Johns Hopkins Applied Physics Lab
443-778-8769
[EMAIL PROTECTED]

Intelligence has much less practical application than you'd think -
Scott Adams/Dilbert







-Original Message-
From: Luecke, Wolf-Christian (K-DOI-44)
[mailto:[EMAIL PROTECTED]]
Sent: Monday, February 11, 2002 9:04 AM
To: [EMAIL PROTECTED]
Subject: AW: WG: subfile backup configuration


Hi Thomas-Anton,

Thanks for your answer.
I know most of the functionality of subfile backup and how it works, but it would be
fantastic if you could send me your document. I'm sure there are some
descriptions and settings I don't know.
We have a lot of IBM instructors here, and one of them told us that there is a
possibility to set the thresholds of the subfile backup ourselves.

It would be very important for us to have the automatic new base files generated on
separate days, independent of the adaptive delta files. But I think there is no way for us to
realize that.

best regards

Kind regards

Wolf-Christian Lücke

K-DOI-44
Volkswagen AG Wolfsburg
Storage Management
e-business

Tel.:   05361/9-22124
E-Fax:  05361/9-57-22124

E-Mail: mailto:[EMAIL PROTECTED]

 -Original Message-
 From:  Thomas Anthon-Kiel [SMTP:[EMAIL PROTECTED]]
 Sent:  Monday, February 11, 2002 14:18
 To:   [EMAIL PROTECTED]
 Subject:  Re: WG: subfile backup configuration
 
 Hi Wolf-Christian,

 if you have included your .pst files in subfile backup, the TSM client uses
 the subfile feature in the following way. The first time, TSM sends the
 base file to the server. Afterwards the changes are sent as a delta file at
 byte level (1 kB to 3 MB) or block level (3 MB to 2 GB) to the server. These
 delta files are used in a differential way. A maximum of 20 delta files can be
 made before TSM automatically generates a new base file (full backup)
 again. But if the delta file exceeds 60 percent of the size of the base file, a
 new base file will be made!!! That means you have no control over the full
 backup!!! This will be made automatically by TSM if one of the above
 requirements is true.
 This is the reason why I didn't work much with subfile backup.
 If you need more detailed information, send me a short mail and I can send
 you a document with a more detailed description of the subfile process.

 Many greetings

 Thomas

 Quoting Luecke, Wolf-Christian (K-DOI-44) wolf-
 [EMAIL PROTECTED]:
 
   Hi,

   to back up the pst-files of our nt-servers we use the tsm software 4.2.1.16
   and with that the adaptive subfile backup feature.
   It works quite well, but I want to know how I can manually set the thresholds
   at which the adaptive subfile backup automatically changes into a full backup.
   The second question is: can I set up the adaptive subfile backup so that
   the next full backup always falls on the weekend (timestamp)?

   best regards
   
   Kind regards
   
   Wolf-Christian Lücke
   
   K-DOI-44
   Volkswagen AG Wolfsburg
   Storage Management
   e-business
   
   Tel.: 05361/9-22124
   E-Fax:05361/9-57-22124
   
   E-Mail:   mailto:[EMAIL PROTECTED]
   
  

--
Bill Colwell
C. S. Draper Lab
Cambridge Ma.



Re: RESTORE VOLUMES

2002-02-13 Thread William F. Colwell

Gary -

you can get the volume names from the volumeusage table.
Here is a macro to do the select; cut and paste the macro to a file,
then in dsmadmc enter

macro file_name 'NODE_NAME' = 'filespace_name'.  The output
will be in the file c:\tsmvolumes.txt.

Hope this helps.

/*  */
/* macro file to select the volumes that a node */
/* is stored on.*/
/*  */
set sqldatetimeformat i
set sqldisplaymode w
set sqlmathmode r
commit
select distinct copy_type, left(node_name,16), left(volume_name,16), -
left(stgpool_name,16) -
   from adsm.volumeusage  -
   where node_name = %1 -
 and copy_type in ('BACKUP', 'ARCHIVE') -
 and filespace_name %2 %3 -
  > c:\tsmvolumes.txt



At 08:01 PM 2/13/2002 +1100, you wrote:
Hi All,

I am quite new to all things TSM and I have a question
to what I believe is an unrealistic situation.

Presently, in my role as the TSM administrator, if I am asked to
perform a data restore I have no idea what volumes will be required
for the data.  I have a 30 slot library at my disposal, which I
appreciate is quite small; however, after I kick off a restore I
should have some indication as to what volumes are required, thereby
allowing me to check the volumes into the library before the restore job
commences.

A recent example:  I had to restore 80MB worth of data and it
took over 3.5 hours and over 18 tape changes.  The present situation is
that after I start a restore I have to be glued to the console and wait
for tape requests to appear in the activity log or via a pop-up.

My research has shown that other people have been asking the
same thing as far back as 1998, but no-one appears to have provided a
solution.  Tivoli have told me that it is possible with some pretty
complex SQL statements.  Great, now I have to learn SQL queries as well.

Does anybody have any ideas on this matter?  It's not that hard
surely..Oh yeah, I'm running TSM Server 4.2.10 on a W2K platform with
current clients running 4.2.1.




Gary Swanton
Microsoft Certified Systems Engineer
[EMAIL PROTECTED]

--
Bill Colwell
C. S. Draper Lab
Cambridge Ma.



Re: Querying Client's Management Class

2002-01-29 Thread William F. Colwell

Mobeen,

I suggest this query, but it will be very long running.

First in dsmadmc do 'q co * active * t=b'.  Record the names of the management
classes that have the retextra or retonly values you are interested in.

Then submit this query -

select * from backups where class_name in ('mg1', 'mg2', ...)
where mg1, mg2 etc. are the names you got from the query copygroup.

For archives do 'q co * active * t=a' and select * from archives ...
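
To get just the node names, something like this - MG1 and MG2 stand for the class
names you recorded:

select distinct node_name, left(class_name, 16) from backups -
  where class_name in ('MG1', 'MG2')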

Hope this helps,

--
Bill Colwell
C. S. Draper Lab
Cambridge Ma.

At 05:01 PM 1/29/2002 +0800, you wrote:
The query below is giving me the data about the management classes defined
in the domains with a retention over 5 yrs. This information is right. But
it need not necessarily mean that if an MC is defined in a domain it's being
used by a node.

I think i am confusing people here. I shall try to put it in simple terms

1. I have 150 clients, say 50 NT Client, 50 UNIX Clients and 50 VMS Clients

2. I have been doing archives and backups on these clients.

3. There are 3 domains namely NT_DOMAIN, UNIX_DOMAIN and VMS_DOMAIN. All the
50 NT clients
   are defined in NT_DOMAIN, 50 Unix clients in UNIX_DOMAIN and 50 VMS
Clients in VMS_DOMAIN.

4. Each of the 3 active policy domains mentioned above has multiple
management classes in addition to the
   DEFAULT_MC. Each of these active policy domains has MCs with retentions
like 1yr, 2yr, 3yr and 5yrs while
   the DEFAULT_MC has a retention period of 180 days.

5. Many of the NT, UNIX, and VMS clients backup/archive data. For example a
UNIX node might backup the
   data using DEFAULT_MC (180 days retention) and the same node for some
filesystems might apply a
   retention period of 5yrs (this is done using include/exclude).

Now coming to my question

How do i find out which are the nodes in NT, VMS and UNIX domains that have
actually data backed up using management class using retention of 5yrs.

The query given below (by seay) is looking at the policy domains to
determine this. But some times there might be a situation where a 5yr MC
might be defined in the policy domain but it might not be used by some
nodes.

I would appreciate your help in answering this.

Thanks again
Mobeen


-Original Message-
From: Seay, Paul [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, January 29, 2002 3:12 PM
To: [EMAIL PROTECTED]
Subject: Re: Querying Client's Management Class


select node_name, domain_name from nodes where domain_name in (select
domain_name from bu_copygroups where integer(retextra) > 365*4 and retextra
<> 'NOLIMIT' or retextra = 'NOLIMIT' or integer(verexists) > 365*4 and
verexists <> 'NOLIMIT')

This looks at all the management classes in a domain and plays some games.
You need to verify that it works correctly.  What it does is selects the
domains with management classes that meet the criteria and matches against
the node definitions to give you a list.

-Original Message-
From: mobeenm [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, January 29, 2002 1:14 AM
To: [EMAIL PROTECTED]
Subject: Re: Querying Client's Management Class


I would like to know the nodes using a particular management class. I have
about 150 clients belonging to NT_DOMAIN, UNIX_DOMAIN and VMS_DOMAIN. Now I
would like to know which of these clients is backing up data with a
retention of more than, say, 4 yrs. Is there any way to do this?

A dumb way to do this is probably to go to each client node and check its
include/exclude/dsm.sys to find out the retention. But what I would like to
do is run a query to find out all the filespaces on a node that have a
retention of greater than 4 yrs, and I would like to do this for all the nodes
in the 3 domains mentioned above.

Thanks
Mobeen

-Original Message-
From: Seay, Paul [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, January 29, 2002 10:47 AM
To: [EMAIL PROTECTED]
Subject: Re: Querying Client's Management Class


You realize that a management class is part of policy set and that you can
set which management class is the default management class.

So, what is your real question, do you want to know which is the default
management class for each node or every management class in the active
policy set for the domain.  Do you want the active policy set or every
policy set?

I will not say this can be done, but your question is not clear enough.
Unfortunately, it does not appear that inner join is supported.

-Original Message-
From: mobeenm [mailto:[EMAIL PROTECTED]]
Sent: Monday, January 28, 2002 8:21 PM
To: [EMAIL PROTECTED]
Subject: Querying Client's Management Class


Hello TSMers,
I want to find out all the clients registered on my TSM for their management
classes. Is there a query that i can do? Appreciate your response.

Regards
Mobeen
[EMAIL PROTECTED]



Re: Odd messages at the end of dsmsched.log

2002-01-29 Thread William F. Colwell

Andy - here are some queries to answer your questions.

tsm: NEW_ADSM_SERVER_ON_PORT_1600> q n wfc1450

Node Name      Platform  Policy Domain   Days Since  Days Since  Locked?
                         Name            Last Acce-  Password
                                         ss          Set
-------------  --------  --------------  ----------  ----------  -------
WFC1450        WinNT     CSDL_STANDARD1           1          11  No

tsm: NEW_ADSM_SERVER_ON_PORT_1600> q sched n=wfc1450

Domain       * Schedule Name   Action  Start Date/Time      Duration  Period  Day
------------ - --------------- ------  -------------------  --------  ------  ---
CSDL_STANDA-   EVENING         Inc Bk  05/18/1998 18:30:00       6 H     1 D  Any
 RD1

tsm: NEW_ADSM_SERVER_ON_PORT_1600> q sched csdl_standard1 evening f=d

Policy Domain Name: CSDL_STANDARD1
 Schedule Name: EVENING
   Description:
Action: Incremental
   Options:
   Objects:
  Priority: 5
   Start Date/Time: 05/18/1998 18:30:00
  Duration: 6 Hour(s)
Period: 1 Day(s)
   Day of Week: Any
Expiration:
Last Update by (administrator): WFC1450
 Last Update Date/Time: 05/04/1999 13:50:43
  Managing profile:

So the schedule doesn't include any special objects.

I am assuming that this has something to do with the journal.  I expect
things in c:\mail could change while the backup was going on, and the journal process
sent in some late requests.  I know you are going to say 'no way', so maybe this is
a new bug.

Bill

At 11:57 AM 1/29/2002 -0500, you wrote:
Very odd. What does the schedule definition look like? The successful
incremental backup messages would suggest that the schedule includes file
specs for the following:

   c:
   c:\Program Files\SQLLIB\java\java12\jdk\bin\_fty0.231\*
   c:\mail\LISTS.fol\descmap.pce\*
   c:\mail\descmap.pce\*

Is there anything at the top of dsmsched.log to suggest that backups for
multiple file specs are being initiated?

Regards,

Andy

Andy Raibeck
IBM Software Group
Tivoli Storage Manager Client Development
Internal Notes e-mail: Andrew Raibeck/Tucson/IBM@IBMUS
Internet e-mail: [EMAIL PROTECTED]

The only dumb question is the one that goes unasked.
The command line is your friend.
Good enough is the enemy of excellence.




William F. Colwell [EMAIL PROTECTED]
Sent by: ADSM: Dist Stor Manager [EMAIL PROTECTED]
01/29/2002 09:01
Please respond to ADSM: Dist Stor Manager


To: [EMAIL PROTECTED]
cc:
Subject:Odd messages at the end of dsmsched.log



I just got a new pc last week, it is running winXP.  I have the 4.2.1.20
client with journalling and subfile backups enabled.  I examined the
dsmsched.log and found these odd messages at the end.  My question is why
after the message 'successful incremental backup of \\wfc1450\c$'  are
there more expiring and backup messages?

01/28/2002 18:47:05 Directory--   0 \\wfc1450\c$\program
files\Tivoli\TSM\baclient [Sent]
01/28/2002 18:47:05 Normal File--22 \\wfc1450\c$\s3n4
[Sent]
01/28/2002 18:47:06 Successful incremental backup of '\\wfc1450\c$'

01/28/2002 18:47:19 Expiring--0 \\wfc1450\c$\Program
Files\SQLLIB\java\java12\jdk\bin\_fty0.231 [Sent]
01/28/2002 18:47:19 Successful incremental backup of '\\wfc1450\c$\Program
Files\SQLLIB\java\java12\jdk\bin\_fty0.231\*'

01/28/2002 18:47:19 Expiring--0
\\wfc1450\c$\mail\LISTS.fol\descmap.pce [Sent]
01/28/2002 18:47:19 Successful incremental backup of
'\\wfc1450\c$\mail\LISTS.fol\descmap.pce\*'

01/28/2002 18:47:20 Expiring--0
\\wfc1450\c$\mail\descmap.pce [Sent]
01/28/2002 18:47:20 Successful incremental backup of
'\\wfc1450\c$\mail\descmap.pce\*'

01/28/2002 18:47:35 --- SCHEDULEREC STATUS BEGIN
01/28/2002 18:47:35 Total number of objects inspected:   15,691
01/28/2002 18:47:35 Total number of objects backed up:   15,613
01/28/2002 18:47:35 Total number of objects updated:  0
01/28/2002 18:47:35 Total number of objects rebound:  0
01/28/2002 18:47:35 Total number of objects deleted:  0
01/28/2002 18:47:35 Total number of objects expired:  2,221
01/28/2002 18:47:35 Total number of objects failed:   0
01/28/2002 18:47:35 Total number of bytes transferred:   250.00 MB
01/28/2002 18:47:35 Data transfer time:   17.16 sec
01/28/2002 18:47:35 Network data transfer rate:14,915.15 KB/sec
01/28/2002 18:47:35 Aggregate data transfer rate:260.93 KB/sec
01/28/2002 18:47:35 Objects compressed by:   36%
01/28/2002 18:47:35 Elapsed processing time:   00:16:21
01/28/2002 18:47:35 --- SCHEDULEREC STATUS END
01/28/2002 18:47:36 --- SCHEDULEREC OBJECT END EVENING 01/28/2002 18:30

Re: Odd messages at the end of dsmsched.log

2002-01-29 Thread William F. Colwell

Andy - here is more from the start of the scheduled backup.  As you can see
it is using the journal, and I have a preschedulecmd to back up the registry &
complusdb in place of allowing the full system objects backup to run.

Bill

01/28/2002 18:29:57 Scheduler has been started by Dsmcad.
01/28/2002 18:29:57 Querying server for next scheduled event.
01/28/2002 18:29:57 Node Name: WFC1450
01/28/2002 18:29:57 Session established with server NEW_ADSM_SERVER_ON_PORT_1600: MVS
01/28/2002 18:29:57   Server Version 4, Release 2, Level 1.7
01/28/2002 18:29:57   Server date/time: 01/28/2002 18:30:47  Last access: 01/28/2002 
18:30:47

01/28/2002 18:29:57 --- SCHEDULEREC QUERY BEGIN
01/28/2002 18:29:57 --- SCHEDULEREC QUERY END
01/28/2002 18:29:57 Next operation scheduled:
01/28/2002 18:29:57 
01/28/2002 18:29:57 Schedule Name: EVENING
01/28/2002 18:29:57 Action:Incremental
01/28/2002 18:29:57 Objects:
01/28/2002 18:29:57 Options:
01/28/2002 18:29:57 Server Window Start:   18:30:00 on 01/28/2002
01/28/2002 18:29:57 
01/28/2002 18:29:57
Executing scheduled command now.
01/28/2002 18:29:57
Executing Operating System command or script:
   dsmc backup registry & dsmc backup complusdb
01/28/2002 18:31:14 Finished command.  Return code is:
   0
01/28/2002 18:31:14 --- SCHEDULEREC OBJECT BEGIN EVENING 01/28/2002 18:30:00
01/28/2002 18:31:15 Incremental backup of volume '\\wfc1450\c$'
01/28/2002 18:31:16 Using journal for '\\wfc1450\c$'



- - -

At 11:57 AM 1/29/2002 -0500, you wrote:
Very odd. What does the schedule definition look like? The successful
incremental backup messages would suggest that the schedule includes file
specs for the following:

   c:
   c:\Program Files\SQLLIB\java\java12\jdk\bin\_fty0.231\*
   c:\mail\LISTS.fol\descmap.pce\*
   c:\mail\descmap.pce\*

Is there anything at the top of dsmsched.log to suggest that backups for
multiple file specs are being initiated?

Regards,

Andy

Andy Raibeck
IBM Software Group
Tivoli Storage Manager Client Development
Internal Notes e-mail: Andrew Raibeck/Tucson/IBM@IBMUS
Internet e-mail: [EMAIL PROTECTED]

The only dumb question is the one that goes unasked.
The command line is your friend.
Good enough is the enemy of excellence.




William F. Colwell [EMAIL PROTECTED]
Sent by: ADSM: Dist Stor Manager [EMAIL PROTECTED]
01/29/2002 09:01
Please respond to ADSM: Dist Stor Manager


To: [EMAIL PROTECTED]
cc:
Subject:Odd messages at the end of dsmsched.log



I just got a new pc last week, it is running winXP.  I have the 4.2.1.20
client with journalling and subfile backups enabled.  I examined the
dsmsched.log and found these odd messages at the end.  My question is why
after the message 'successful incremental backup of \\wfc1450\c$'  are
there more expiring and backup messages?

01/28/2002 18:47:05 Directory--   0 \\wfc1450\c$\program
files\Tivoli\TSM\baclient [Sent]
01/28/2002 18:47:05 Normal File--22 \\wfc1450\c$\s3n4
[Sent]
01/28/2002 18:47:06 Successful incremental backup of '\\wfc1450\c$'

01/28/2002 18:47:19 Expiring--0 \\wfc1450\c$\Program
Files\SQLLIB\java\java12\jdk\bin\_fty0.231 [Sent]
01/28/2002 18:47:19 Successful incremental backup of '\\wfc1450\c$\Program
Files\SQLLIB\java\java12\jdk\bin\_fty0.231\*'

01/28/2002 18:47:19 Expiring--0
\\wfc1450\c$\mail\LISTS.fol\descmap.pce [Sent]
01/28/2002 18:47:19 Successful incremental backup of
'\\wfc1450\c$\mail\LISTS.fol\descmap.pce\*'

01/28/2002 18:47:20 Expiring--0
\\wfc1450\c$\mail\descmap.pce [Sent]
01/28/2002 18:47:20 Successful incremental backup of
'\\wfc1450\c$\mail\descmap.pce\*'

01/28/2002 18:47:35 --- SCHEDULEREC STATUS BEGIN
01/28/2002 18:47:35 Total number of objects inspected:   15,691
01/28/2002 18:47:35 Total number of objects backed up:   15,613
01/28/2002 18:47:35 Total number of objects updated:  0
01/28/2002 18:47:35 Total number of objects rebound:  0
01/28/2002 18:47:35 Total number of objects deleted:  0
01/28/2002 18:47:35 Total number of objects expired:  2,221
01/28/2002 18:47:35 Total number of objects failed:   0
01/28/2002 18:47:35 Total number of bytes transferred:   250.00 MB
01/28/2002 18:47:35 Data transfer time:   17.16 sec
01/28/2002 18:47:35 Network data transfer rate:14,915.15 KB/sec
01/28/2002 18:47:35 Aggregate data transfer rate:260.93 KB/sec
01/28/2002 18:47:35 Objects compressed by:   36%
01/28/2002 18:47:35 Elapsed processing time:   00:16:21
01/28/2002 18:47:35 --- SCHEDULEREC STATUS END
01/28/2002 18:47:36 --- SCHEDULEREC OBJECT END EVENING 01/28/2002 18:30:00
01/28/2002 18:47:36 Scheduled event 'EVENING' completed successfully.
01/28/2002 18:47:36 Sending results for scheduled event 'EVENING'.
01/28/2002 18:47:37

Re: Empty volume is not empty problem.

2002-01-28 Thread William F. Colwell

Try Wanda's method first.  If it doesn't work, try marking the volume 'destroyed'
and then restore the volume from the copypool (assuming you have one).
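
A sketch of that second path, which applies to a primary pool volume backed up to a
copy pool (VOLNAME is a placeholder):

upd vol VOLNAME acc=destroyed
restore vol VOLNAME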

--
Bill Colwell
C. S. Draper Lab
Cambridge Ma.

At 04:24 PM 1/28/2002 -0500, you wrote:
This is a common problem on TSM 3.7 and 4.1.

Run your AUDIT command again and add FIX=YES.

That usually clears up the problem.

-Original Message-
From: Alan Davenport [mailto:[EMAIL PROTECTED]]
Sent: Monday, January 28, 2002 3:22 PM
To: [EMAIL PROTECTED]
Subject: Empty volume is not empty problem.


Hello *SM'ers. I have an odd problem. I have a volume that shows as full but
is 0% utilized and has 100% reclaimable space. Audit volume states 'missing
or incorrect information detected'. Query content shows no content; move
data says the volume is empty. Delete volume says the tape still contains data.
I'm at a loss as to how to proceed. Does anyone have any advice for me?

We are running TSM 4.1.0.0 on OS390.

 Look below for actlog displays on the issue:


Date/Time            Message
-------------------- --------------------------------------------------------

01/28/2002 14:58:40  ANR2017I Administrator SERVER_CONSOLE issued
command:
  AUDIT VOLUME 201221

01/28/2002 14:58:40  ANR1199I Removable volume 201221 is required for
audit
  process.

01/28/2002 14:58:40  ANR5216I CARTRIDGE 201221 is expected to be mounted
(R/O).
01/28/2002 14:58:40  ANR2333W Missing or incorrect information detected
by
  AUDIT VOLUME for volume 201221.

01/28/2002 14:59:41  ANR2017I Administrator DVNPORT issued command:
QUERY
  VOLUME 201221 f=d

   Volume Name: 201221
 Storage Pool Name: COPYVAULT
 Device Class Name: CARTVAULT
   Estimated Capacity (MB): 1,639.2
  Pct Util: 0.0
 Volume Status: Full
Access: Read/Write
Pct. Reclaimable Space: 100.0
   Scratch Volume?: Yes
   In Error State?: No
  Number of Writable Sides: 1
   Number of Times Mounted: 11
 Write Pass Number: 1
 Approx. Date Last Written: 01/23/2001 12:45:47
Approx. Date Last Read: 01/19/2001 14:15:02
   Date Became Pending:
Number of Write Errors: 0
 Number of Read Errors: 0
   Volume Location:
Last Update by (administrator): SERVER_CONSOLE
 Last Update Date/Time: 01/28/2002 14:58:37


01/28/2002 15:00:06  ANR2017I Administrator DVNPORT issued command:
DELETE
  VOLUME 201221 discard=yes

01/28/2002 15:00:07  ANR2406E DELETE VOLUME: Volume 201221 still
contains data.

01/28/2002 15:00:39  ANR2017I Administrator DVNPORT issued command:
QUERY
  CONTENT 201221

01/28/2002 15:00:39  ANR2034E QUERY CONTENT: No match found using this
criteria.

01/28/2002 15:01:14  ANR2017I Administrator DVNPORT issued command: MOVE
DATA
  201221

01/28/2002 15:01:14  ANR2232W This command will move all of the data
stored on
  volume 201221 to other volumes within the same
storage
  pool; the data will be inaccessible to users until
the
  operation completes.

01/28/2002 15:01:15  ANR2017I Administrator DVNPORT issued command: MOVE
DATA
  201221

01/28/2002 15:01:15  ANR2209W Volume 201221 contains no data.



Alan Davenport
Senior Storage Administrator
Selective Insurance
[EMAIL PROTECTED]
(973) 948-1306



Re: compression

2002-01-25 Thread William F. Colwell

Joe - if the 2 compression processes are both software, then yes, the output of the 2nd
compression will be larger than its input.  But if the 2nd compression is being done by a
tape drive, the drive will be smart enough to stop the compression if the file grows.  At
least that is the case for 3590 & 9840.

I am doing exactly what you are doing and I get 19.2 GB on a 9840, which is the nominal
uncompressed capacity.

Bill Colwell


At 01:00 PM 1/25/2002 -0500, you wrote:
Currently running TSM v 4.1.3 on System\390.  We run compression on the client as well
as hardware compression on Device type Cartridge and 3590.  Ideally this is not the
way to go... but we need to run compression on the client due to network constraints.
Is it a FACT that running a compression routine 2x will ultimately double the size of
the original file?  Where can I find more info on this?
Any suggestions would be appreciated.

Regards,
Joe Wholey
TGA Distributed Data Services
Merrill Lynch
Phone: 212-647-3018
Page:  888-637-7450
E-mail: [EMAIL PROTECTED]

--
Bill Colwell
C. S. Draper Lab
Cambridge Ma.



Re: Database Specs

2002-01-25 Thread William F. Colwell

Joe - the number of volumes isn't the problem.  You need to increase the bufpoolsize
in the server options file.  Here is a 'q db f=d' from my system.  The bufpoolsize
parameter (in KB) becomes the 'buffer pool pages' value in the q db: 262144k of 4KB
pages is the 65,536 pages shown below.

Bill Colwell
- - -
tsm: NEW_ADSM_SERVER_ON_PORT_1600> q db f=d

  Available Space (MB): 166,860
Assigned Capacity (MB): 166,860
Maximum Extension (MB): 0
Maximum Reduction (MB): 18,640
 Page Size (bytes): 4,096
Total Usable Pages: 42,716,160
Used Pages: 37,947,974
  Pct Util: 88.8
 Max. Pct Util: 88.8
  Physical Volumes: 45
 Buffer Pool Pages: 65,536
 Total Buffer Requests: 194,057,057
Cache Hit Pct.: 98.44
   Cache Wait Pct.: 0.00
   Backup in Progress?: No
Type of Backup In Progress:
  Incrementals Since Last Full: 5
Changed Since Last Backup (MB): 1,542.07
Percentage Changed: 1.04
Last Complete Backup Date/Time: 01/24/2002 15:49:51
- - -
At 12:54 PM 1/25/2002 -0500, you wrote:
I'm running TSM v 4.1.3 on System\390.  My database is 47G and spread across 25
physical volumes (gets a 78% cache hit / not so great).

One of my performance and tuning books recommends not spreading your database over
more than 12 physical volumes.  Could anyone explain why?  What kind of
performance/utilization hit am I going to incur by having my db spread across 25
volumes?

Regards,
Joe Wholey
TGA Distributed Data Services
Merrill Lynch
Phone: 212-647-3018
Page:  888-637-7450
E-mail: [EMAIL PROTECTED]

--
Bill Colwell
C. S. Draper Lab
Cambridge Ma.



Re: Tivoli Decision Support and TSM server

2002-01-23 Thread William F. Colwell

Christo,

Can you teel what sql query is running?  I know some queries are
really terrible, especially 'select * from volumeusage.  Is there some
options in the TDSSMA to limit what queries are done?

I run a large tsm erver on OS/390 also.  I can do a select of
all the occupancy data in 9 minutes elapsed time.  I can't compare
the performance with w2k or aix because I don't run those servers,
but I am happy with the perfromance on os/390.

But I agree with you about TDSSMA - if it drives the server crazy it
isn't a good product.


At 12:53 PM 1/23/2002 +0200, you wrote:
Hi,

Are there any of you guys/gals that are running TDS for SMA?
If you are - what kind of TSM server are you running:
Platform and size of database being the ones I'll be interested
in.
We are busy evaluating the TDS SMA modules and are experiencing
bad performance on our biggest TSM server - for those of you
that are familiar with the product - when the TDS Loader runs it
consumes about 24% of our OS/390 CPU for 28 hours and the load is still
not finished. This makes the whole TDS product useless to us.

It is fine for our smaller TSM environments, but it does not help
to be running different products for the smaller and bigger environments.
To me it seems like the usefulness of this product, or any other
product that runs SQL queries against your TSM db, is severely
impacted by the size of your TSM environment - coming back to the
scalability of TSM's proprietary black box DB!

Have any of you noticed this, or do we need to look at other things?
My opinion of TSM on OS/390 is that it performs worse on OS/390 than
AIX - or I'll even be as bold to say Win2K ;-)

Thanks for any input.

Regards
Christo Heuer

--
Bill Colwell
C. S. Draper Lab
Cambridge Ma.