TDP for Notes Restore Problem

2004-10-25 Thread Gerald Wichmann
We have Lotus Notes backups done with TDP 5.1.5 (Notes v5.x). Back
in January the server was called TEST. In March, the server was
renamed to TEST-OLD and we continued to do backups by creating a new
node named TEST. So now we have data from January through March under
TEST-OLD and from March on under TEST...

We have not expired any data since January.

Our normal rotation is to do a weekly full and daily incrementals.
There is a problem, though, that I'm finding. The first full backup for
TEST was done March 15th. The server rename was done on March 7th. Now
when I look back I see:

March 1st - Full taken now under TEST-OLD
March 2-7 - Incrementals now under TEST-OLD
March 8-14 - Incrementals now under TEST
March 15 - Full taken now under TEST

So the question is whether the March 8-14 data can be restored. It seems
not, unless someone has an idea: TSM thinks there is no full for TEST
prior to the 15th, so how could it? Have we lost access to the data from
those dates?
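
One way to check exactly what the server still has under each node name is a SELECT against the BACKUPS table from an administrative client (dsmadmc) session. This is only a sketch: it assumes the standard 5.x BACKUPS columns, and the admin session and node names would need to match your environment.

select node_name, filespace_name, backup_date, state from backups where node_name in ('TEST','TEST-OLD') order by backup_date

If the March 8-14 objects still show up as ACTIVE_VERSION or INACTIVE_VERSION rows under TEST, the inventory for those dates is still there; whether those incrementals are usable without the preceding full under the same node is a separate TDP-for-Notes question.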


Querying What tapes you need for a restore?

2004-02-19 Thread Gerald Wichmann
Is there a way to query the list of tapes needed for a given restore beforehand,
so you can make sure all of those tapes are in your library before kicking off
the restore? I think the normal mode of operation is that if a restore involves
several tapes, some in the library and some not, it moves along fine with the
tapes it has but halts and issues a mount request for those that aren't in the
library. I'd like to avoid that mount request and just ensure all the tapes are
in the library beforehand.

Gerald
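
A rough way to build that list ahead of time is the VOLUMEUSAGE table, which maps a node to the sequential volumes holding its data. A sketch, with NODEA and the filespace as placeholders; note it returns every volume with data for the node, not only the ones a specific point-in-time restore would actually touch.

select distinct volume_name from volumeusage where node_name='NODEA'
select distinct volume_name from volumeusage where node_name='NODEA' and filespace_name='/home'

Checking those volumes into the library (or at least confirming they are on site) before starting should avoid the mid-restore mount requests.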





Lotus Notes TDP Question

2004-01-26 Thread Gerald Wichmann
When restoring with the Notes TDP, I notice that, according to the GUI, the
files have one date (e.g. 1/2/02), but when I restore them they end up with
today's date/time. In other words, the modification date/time is not preserved
through the restore. Is this the normal behavior? I am a bit surprised by it
and am wondering if there's some way to preserve it.

Thanks,
Gerald





Tape Volumes Needed for a Restore

2003-12-01 Thread Gerald Wichmann
If I load the TSM GUI and select a few directories to restore, is there some
way to determine which tapes will be required to satisfy that restore? A
sort of preview=yes option for restores? Or how would you determine which
tapes you need given a point in time restore of an entire server?

Thanks,
Gerald





Offsite Volume?

2003-11-24 Thread Gerald Wichmann
I have a TSM 5.1.x server with a handful of tapes. They are part of an
offsite storage pool. I do not have the onsite storage pool tapes. If
I try to do a restore of this data, TSM tries to mount the onsite tapes.
What should I do to get TSM to ask for the offsite ones? Don't I just set
the onsite ones to access=destroyed and the offsite ones to access=readonly,
and voila, TSM should ask for the right tapes?
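
Roughly the commands in question, as a sketch; ONSITEPOOL and OFFSITEPOOL are placeholder storage pool names, and the offsite volumes still have to be physically available to mount.

update volume * access=destroyed wherestgpool=ONSITEPOOL
update volume * access=readonly wherestgpool=OFFSITEPOOL

With the primary copies marked destroyed, the restore should fall back to the copy storage pool volumes.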




ANR0836W No query restore

2003-11-24 Thread Gerald Wichmann
When attempting to do a restore I'm getting the messages below. There is no
other accompanying information. Anyone know what they mean? There is nothing
in the messages guide that sheds any light on this ANR message.



ANR0836W No query restore processing session 93 for node MDCTXUDSE261 and \\mdctxudse261\d$ failed to retrieve file \LOTUS\DOMINO\DATA\W32\CCMAIL.BMP - file being skipped.
ANR0836W No query restore processing session 93 for node MDCTXUDSE261 and \\mdctxudse261\d$ failed to retrieve file \LOTUS\DOMINO\DATA\W32\AUTORUN.BMP - file being skipped.
ANR0836W No query restore processing session 93 for node MDCTXUDSE261 and \\mdctxudse261\d$ failed to retrieve file \LOTUS\DOMINO\DATA\W32\AMIPRO.MAC - file being skipped.
ANR0836W No query restore processing session 93 for node MDCTXUDSE261 and \\mdctxudse261\d$ failed to retrieve file \LOTUS\DOMINO\DATA\W32\AMIPRO.BMP - file being skipped.
ANR0836W No query restore processing session 93 for node MDCTXUDSE261 and \\mdctxudse261\d$ failed to retrieve file \LOTUS\DOMINO\DATA\W32\AMIMENUS.BMP - file being skipped.
ANR0836W No query restore processing session 93 for node MDCTXUDSE261 and \\mdctxudse261\d$ failed to retrieve file \LOTUS\DOMINO\DATA\W32\123W.MAC - file being skipped.



Gerald Wichmann
Manager, Systems Engineering
Data Restoration
ZANTAZ, Inc.
925.598.3099 (w)
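
Files being skipped during a no-query restore often trace back to the volumes that hold them rather than to the restore itself, so one thing worth checking is the access state of the node's volumes. A sketch, assuming the standard VOLUMES and VOLUMEUSAGE tables:

select volume_name, access, error_state from volumes where access<>'READWRITE'
select distinct volume_name from volumeusage where node_name='MDCTXUDSE261'

Any volume for that node showing as UNAVAILABLE, DESTROYED, or in an error state would account for skipped files.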





Querying the date a file was backed up

2003-11-21 Thread Gerald Wichmann
I've got a volume that I've done a query content on, but I can't tell what date
the files on that volume were backed up. A Query CONTENT gives me the node
name, type, filespace name, FSID, and client's name for the file. I'm thinking
I have to do a select query to pull this off, but somehow I need a method to
figure out what date the objects shown in a query content for a volume were
backed up. Any suggestions? The closest I've come is a ballpark date from a
select * from volumes for that volume, which tells me when it was last written
to. That doesn't give me the date of an individual item, though; I need a finer
level of granularity.

Appreciate the help!

Gerald


Gerald Wichmann
Manager, Systems Engineering
Data Restoration
ZANTAZ, Inc.
925.598.3099 (w)
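
The date that Query CONTENT doesn't show can usually be pulled from the BACKUPS table, which carries a backup date per object. A sketch, with NODEA, the filespace, and the file name as placeholders; as far as I know CONTENTS and BACKUPS don't expose a common object ID to join on, so tying a date back to a specific volume means matching by name.

select node_name, filespace_name, hl_name, ll_name, backup_date, state from backups where node_name='NODEA' and filespace_name='/home' and ll_name='FILE.TXT'

HL_NAME is the directory portion and LL_NAME the file name, so the names from Query CONTENT can be split and fed into a query like this one file at a time.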





Querying the date a file was backed up on a volume

2003-11-06 Thread Gerald Wichmann
I've got a volume that I've done a query content on, but I can't tell what date
the files on that volume were backed up. I'm thinking I have to do a select
query to pull this off, but somehow I need a method to figure out what date the
objects shown in a query content for a volume were backed up. Any suggestions?
The closest I've come is a ballpark date from a select * from volumes for that
volume, which tells me when it was last written to. That doesn't give me the
date of an individual item, though; I need a finer level of granularity.

Appreciate the help!

Gerald





Data Protection Agent for Notes

2003-11-05 Thread Gerald Wichmann
Does anyone have a good understanding of the Lotus Notes Data Protection
Agents and their history? From what I can gather there are/were 3 versions..

1.1.x
2.1.x
5.1.x

Similarly, there are different versions of Notes - 4.x, 5.x, 6.x.

What is restore-compatible with what in this matrix? If I have backups of a
Notes 5.x server that were originally taken with 1.1, can I restore with 2.1
or 5.1 as well?
Gerald Wichmann





dsmserv loadformat failure

2003-10-22 Thread Gerald Wichmann
Anyone know what errno=2 means below? The /tsm partition exists and has 777
permissions so I'm thinking it's not a permissions problem. Does it expect
those files to already have been created via dsmfmt? Currently the files do
not exist.

[EMAIL PROTECTED]:/usr/tivoli/tsm/server/bin# dsmserv loadformat 1 /tsm/tsmlog.001 5000 3 /tsm/tsmdb.001 3 /tsm/tsmdb.002 3 /tsm/tsmdb.003 3
ANR7800I DSMSERV generated at 12:37:06 on Aug 21 2002.

Tivoli Storage Manager for AIX-RS/6000
Version 5, Release 1, Level 5.0

Licensed Materials - Property of IBM

5698-ISE (C) Copyright IBM Corporation 1999,2002. All rights reserved.
U.S. Government Users Restricted Rights - Use, duplication or disclosure
restricted by GSA ADP Schedule Contract with IBM Corporation.

ANR0900I Processing options file dsmserv.opt.
ANR0921I Tracing is now active to file /tsm/trace.out.
ANR7811I Direct I/O will be used for all eligible disk files.
Error opening file /tsm/tsmlog.001, errno = 2



Thanks,
Gerald
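
For the archive: errno 2 on AIX is ENOENT ("No such file or directory"), so the server simply isn't finding /tsm/tsmlog.001. At this level the database and log volumes are normally allocated with dsmfmt before dsmserv loadformat is run. A sketch, with sizes in MB matching the loadformat command above; double-check the dsmfmt flags against your server level.

dsmfmt -m -log /tsm/tsmlog.001 5000
dsmfmt -m -db /tsm/tsmdb.001 3
dsmfmt -m -db /tsm/tsmdb.002 3
dsmfmt -m -db /tsm/tsmdb.003 3

Then rerun the dsmserv loadformat command unchanged.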






Automatically setting a volume to unavailable

2003-10-10 Thread Gerald Wichmann
Is there a way to make TSM *not* mark a volume unavailable when there's a
problem reading it?

Gerald Wichmann
Manager, Systems Engineering
ZANTAZ, Inc.
925.598.3099 (w)





TSM Query to determine what is on a single tape

2003-10-06 Thread Gerald Wichmann
Given a tape, is there a select query that will tell me what servers have
data on that particular tape? It'd be nice to know what file systems as well
but that level of granularity isn't necessary. I.e. I have tape XYZ, I run
query and find out CLIENTA, CLIENTB, and CLIENTC have data on it.


Gerald Wichmann
Manager, Systems Engineering
ZANTAZ, Inc.
925.598.3099 (w)
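
Something along these lines should do it against the VOLUMEUSAGE table; XYZ is the placeholder volume name, and since VOLUMEUSAGE also carries the filespace, the file-system level comes along for free.

select distinct node_name, filespace_name from volumeusage where volume_name='XYZ'

Query CONTENT against the same volume gives file-level detail but can be very slow on a full tape.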





Exchange TDP Query Question

2003-10-03 Thread Gerald Wichmann
TSM TDP for Exchange v5.2.1

I'm running the following query using the TDP's cmdline executable -
tdpexcc q tsm * /AL /FROMEXCSERV=*

Would that in theory show me ALL exchange server backups that the TSM
database knows about for ALL exchange servers?

I realize that if there are v1.1.1 backups, this 5.2.1 client would not show
those, and I'd need to do a similar query with v1.1.1 to see them. But 5.x is
backwards compatible with 2.x, so at minimum I'm trying to get a complete and
comprehensive list of what backups TSM has for all Exchange servers.

Thanks,

Gerald Wichmann
Manager, Systems Engineering
ZANTAZ, Inc.
925.598.3099 (w)





TSM TDP for MS Exchange

2003-10-02 Thread Gerald Wichmann
I'm trying to find out what the history of versions is for this TDP.

I currently have an old CD with 1.1.1 on it. I know the current version is
5.2.1 according to Passport Advantage and that I can also get 5.1.5 from
there. Is there anything between those versions? I understand there is a 2.x?

Thanks,

Gerald Wichmann
Manager, Systems Engineering
ZANTAZ, Inc.
925.598.3099 (w)





Moving TSM from one platform to another

2003-10-01 Thread Gerald Wichmann
Is it possible to move from MVS to AIX without having to export all client
node data, so long as your storage devices are the same in both environments?

I.e., if my current environment has 2000 tapes full of client data, and I
want to move my environment from MVS to AIX, do I essentially have to export
those 2000 tapes, creating 2000 new export tapes, and import them into my AIX
environment to read the client data? Or would the present 2000 tapes be
readable from the AIX environment as long as I export the server without the
file data and import it into the AIX environment (and have the same type of
device connected to the AIX server, of course)? In other words, I'm trying to
gauge how much work it is to move TSM from one platform to another.

If I remember correctly it's ugly, and you do have to export all the client
data as well.
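
For reference, the two approaches being weighed look roughly like this; LTOCLASS and the export volume names are placeholders, and the exact parameters should be checked against the EXPORT/IMPORT SERVER syntax for your level.

export server filedata=none devclass=LTOCLASS scratch=yes
export server filedata=all devclass=LTOCLASS scratch=yes

and on the receiving AIX server:

import server filedata=all devclass=LTOCLASS volumenames=EXP001,EXP002

The FILEDATA parameter is what decides whether the client data itself gets rewritten to export tapes or only the definitions travel.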





Exporting a TSM Server Across Platforms

2003-09-26 Thread Gerald Wichmann
If I have a TSM server in an MVS environment backing up Lotus Notes servers
running AIX, and I'd like to migrate my TSM server environment to an AIX
platform, if I recall the steps are fairly straightforward and this is not
a difficult task. Put simply it's:

1.  Do an export of the TSM server in the MVS environment to tape.
2.  Move the library to the new AIX server.
3.  Do an import of the previously made export on the new AIX server.
4.  Voila, done; all Notes/AIX restores/tapes are now restorable.

Specifically, I don't believe it's necessary to export all the client data;
you only need to export the TSM server itself, right? In other words, you
cannot restore a TSM DB tape created in the MVS environment to an AIX
environment; you actually have to export the server to move between
environments in this manner. And you don't need to do it for all the client
data, though I suppose you could. Is that correct?




TSM and DR

2003-09-19 Thread Gerald Wichmann
Say for a moment you're faced with recovering a TSM server in a DR
situation. You have your DB backup and copypool tapes and perform a database
recovery. If that DB was created back in January and it's now March, isn't
there a potential for objects getting expired the first time you start the
TSM server? E.g. when the TSM server is started it typically performs an
expire inventory as part of that sequence. I would imagine that now that
it's 2 months later, would it therefore start expiring objects that you
probably don't want to have expired?

If not, why not?
If so, what's the appropriate step to take before starting the TSM server (or
perhaps even before recovering the DB) to ensure expire inventory doesn't
ruin your recovery? I recall there being an option in dsmserv.opt that
allows you to turn off automatic expire inventory. That seems like a good
idea, but what if there was an admin schedule back then that runs expire
inventory and you happen to start the recovery during the schedule's window?

I think you can see what I'm getting at with all this. I want to make sure
all my bases are covered..
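
For the record, the dsmserv.opt entries that seem relevant look like this; both options are in the server reference, but verify the spelling against your level before baking them into a DR plan.

* keep the freshly restored database quiet until the recovery has been checked out
EXPINTERVAL 0
DISABLESCHEDS YES

EXPINTERVAL 0 stops the server from starting expiration on its own, and DISABLESCHEDS YES keeps administrative and client schedules from firing when the server comes up, which covers the case of an old expire inventory schedule hitting its window mid-recovery.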




Re: TSM and DR

2003-09-19 Thread Gerald Wichmann
You guys are reading into the question WAY more than need be =). It was a
scenario question and all the timelines were made up to drive home the point of
what I was trying to get at, and that's the potential for expire inventory
to expire data you maybe don't want expired. Even if your turnaround for a
true disaster is supposedly 48 hours, I would imagine you'd still be
interested in what the question was getting at (if nothing else, just to be
aware of how TSM works if it was a concern).

Also, it's a good point that you'd probably only be interested in ACTIVE
files when recovering your environment, but nonetheless, humor me and
assume you're also interested in ensuring no INACTIVE version is lost. In
that case, what can one do to ensure no data is expired AT ALL? The only
thing I can think of is to ensure expire inventory never runs. The
dsmserv.opt entry would help prevent that prior to recovering the database,
but what about any admin schedules that might've been defined? Is it
possible to disable client and admin scheduling without first starting TSM?

Gerald

-Original Message-
From: Prather, Wanda [mailto:[EMAIL PROTECTED]
Sent: Friday, September 19, 2003 10:59 AM
To: [EMAIL PROTECTED]
Subject: Re: TSM and DR

I agree with prior comments that you should have tapes available for DR that
are more current than 3 months back.

However, in MOST cases, even if expiration runs it probably won't cause
problems; TSM is NEVER going to be expiring the ACTIVE files associated with
a node.  And in MOST cases for DR, you are trying to get your client
machines restored to the latest possible level.

Now where you get in trouble is if you let the client run a new BACKUP
before you have restored everything you need.  When the client runs, it will
flag any files not on the hard drive as expired, and that may have
side effects you don't want.

So I would say in a DR situation you should turn off your client schedules.




-Original Message-
From: Gerald Wichmann [mailto:[EMAIL PROTECTED]
Sent: Friday, September 19, 2003 1:19 PM
To: [EMAIL PROTECTED]
Subject: TSM and DR


Say for a moment you're faced with recovering a TSM server in a DR
situation. You have your DB backup and copypool tapes and perform a database
recovery. If that DB was created back in January and it's now March, isn't
there a potential for objects getting expired the first time you start the
TSM server? E.g. when the TSM server is started it typically performs an
expire inventory as part of that sequence. I would imagine that now that
it's 2 months later, would it therefore start expiring objects that you
probably don't want to have expired?

If not, why not?
If so, whats the appropriate step to take before starting the TSM server (or
perhaps even before recovering the DB) to ensure expire inventory doesn't
ruin your recovery? I recall there being an option in dsmserv.opt that
allows you to turn off automatic expire inventory. That seems like a good
idea.. but what if there was an admin schedule that runs expire inventory
back then and you happen to start the recovery while in the schedule's
window?

I think you can see what I'm getting at with all this. I want to make sure
all my bases are covered..




TSM Client Query

2003-09-12 Thread Gerald Wichmann
We frequently get tapes from other sources and are expected to restore them.
The problem I'm finding is that the original environment sends me their TSM
database, and it may have backups for a given server (say SERVERA) going
back a year (so let's say 365 backups that the TSM DB is aware of). Now they
only send me 200 tapes, and I'm faced with figuring out which of the backups
that the DB knows about for SERVERA I *actually have*. I realize this is
probably a perfect candidate for a select query or macro where I pass in a
list of volumes and it spits out what backup dates it knows about.
Something I'll have to figure out how to determine.

Similarly, I'm curious whether this is doable with agent backups
(specifically Exchange).

Anyone develop something like this so I don't have to reinvent the wheel?
Any thoughts? Appreciate the help lately.

Thanks!
Gerald
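
One way to come at it is to turn the question around and ask which of SERVERA's volumes are not in the pile that was sent. A sketch; the node name and the volume list are placeholders, and it assumes the server's SQL accepts NOT IN with a literal list, which gets clumsy with 200 volumes but shows the idea.

select distinct volume_name, filespace_name from volumeusage where node_name='SERVERA' and volume_name not in ('VOL001','VOL002','VOL003')

Anything that comes back is a volume the DB expects for SERVERA but that isn't on hand, which at least bounds which backups can be complete. Mapping that to actual backup dates still means going through the BACKUPS table, and TDP/agent backups should show up the same way under the agent's node name.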





TSM 4.2 and Exchange TDP 1.1.1

2003-09-11 Thread Gerald Wichmann
I was just looking at my Exchange TDP output, and it shows all Exchange
backups ever created - directory backups and information store backups (full
and incremental, Exchange 5.5). Is there some way I can query or output this
list of backups to a text file? On the TDP side I don't see anything in the
GUI. I'm guessing I need to do something with the TDP command line to query a
list of this (with dates) so I can hand it to someone and get an idea of what
to restore. Time to read some notes, but if someone knows off the top of their
head how to do this, I'd appreciate the info.

Gerald





TSM Coredumps when started..

2003-09-10 Thread Gerald Wichmann
TSM is core dumping on my test server. It looks to me like it ran out of log
space, but I just wanted to confirm that, as it's been a while since I've had
this happen. How do I proceed to restart my TSM server at this point?

Tivoli Storage Manager for AIX-RS/6000
Version 4, Release 2, Level 4.0

Licensed Materials - Property of IBM

5698-TSM (C) Copyright IBM Corporation 1999,2001. All rights reserved.
U.S. Government Users Restricted Rights - Use, duplication or disclosure
restricted by GSA ADP Schedule Contract with IBM Corporation.

ANR0900I Processing options file dsmserv.opt.
ANR0921I Tracing is now active to file /tsm/trace.out.
ANR0990I Server restart-recovery in progress.
ANR0200I Recovery log assigned capacity is 5120 megabytes.
ANR0201I Database assigned capacity is 2 megabytes.
ANR0306I Recovery log volume mount in progress.
ANR0353I Recovery log analysis pass in progress.
ANR0354I Recovery log redo pass in progress.
ANR0355I Recovery log undo pass in progress.
ANRD logseg.c(498): ThreadId4 Log space has been overcommitted (no empty segment found) - base LSN = 525679.0.0.
ANR7838S Server operation terminated.
ANR7837S Internal error LOGSEG871 detected.
  0x1008E48C LogAllocSegment
  0x1008B2E0 ForceLogPages
  0x1008BCE4 LogWriterThread
  0x10006DC4 StartThread
  0xD00080CC _pthread_body
ANR7833S Server thread 1 terminated in response to program abort.
ANR7833S Server thread 2 terminated in response to program abort.
ANR7833S Server thread 3 terminated in response to program abort.
ANR7833S Server thread 4 terminated in response to program abort.
ANR7833S Server thread 5 terminated in response to program abort.
ANR7833S Server thread 6 terminated in response to program abort.
ANR7833S Server thread 7 terminated in response to program abort.
ANR7833S Server thread 8 terminated in response to program abort.
IOT/Abort trap(coredump)
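
If it really is the log, the usual way out is to format an additional log volume and extend the recovery log with the server still down. A sketch; the new volume path and the 1024 MB size are placeholders.

dsmfmt -m -log /tsm/tsmlog.002 1024
dsmserv extend log /tsm/tsmlog.002 1024

Once the extended log lets the server finish restart recovery, the log size and the database backup trigger settings can be revisited so it doesn't fill again.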





LTO throughput - real world experiences

2003-07-08 Thread Gerald Wichmann
I'm curious as to what kind of MB/sec throughput people are seeing with TSM
and LTO drives. I realize your mileage may vary and that it depends on your
configuration but lets face it most of us have somewhat similar
environments. A disk pool and tape pool. A mixture of file system data as
well as database data. How many MB/sec does a migration process produce in
your environment? Does anyone have any DB's streaming directly to LTO and
some figures? Appreciate any feedback

Gerald Wichmann
Senior Systems Development Engineer
ZANTAZ, Inc.
925.598.3099 (w)





Recovering TSM 4.2.0 Server

2003-07-07 Thread Gerald Wichmann
AIX 4.3.3, TSM 4.2.1 installed and running. DB backup tape is supposedly
4.2.0. Finally made some progress and the DB recovery started to work and
then failed. I'm not sure what to make of it.. Anyone have any suggestions?

[EMAIL PROTECTED]:/usr/tivoli/tsm/server/bin# dsmserv restore db volumenames=SU3689 devclass=tapeclass
ANR7800I DSMSERV generated at 12:27:22 on Aug 29 2001.

Tivoli Storage Manager for AIX-RS/6000
Version 4, Release 2, Level 1.0

Licensed Materials - Property of IBM

5698-TSM (C) Copyright IBM Corporation 1999,2001. All rights reserved.
U.S. Government Users Restricted Rights - Use, duplication or disclosure
restricted by GSA ADP Schedule Contract with IBM Corporation.

ANR0900I Processing options file dsmserv.opt.
ANR8200I TCP/IP driver ready for connection with clients on port 1500.
ANR0200I Recovery log assigned capacity is 5120 megabytes.
ANR0201I Database assigned capacity is 4 megabytes.
ANR4621I Database backup device class TAPECLASS.
ANR4622I   Volume 1: SU3689.
ANR4632I Starting point-in-time database restore (no commit).
ANR8326I 001: Mount GENERICTAPE volume SU3689 R/O in drive LTO1 (/dev/rmt0) of library MANUAL within 60 minutes.
ANR8335I 001: Verifying label of GENERICTAPE volume SU3689 in drive LTO1 (/dev/rmt0).
ANR8328I 001: GENERICTAPE volume SU3689 mounted in drive LTO1 (/dev/rmt0).
ANRD pvrgts.c(4059): ThreadId9 Invalid block header read from volume SU3689. (magic=5A4D, ver=20048, Hdr blk=5 expected 0, db=0 262144,262144,0)
ANRD icrest.c(2076): ThreadId0 Rc=30 reading header record.
ANR2032E RESTORE DB: Command failed - internal server error detected.
ANR8468I GENERICTAPE volume SU3689 dismounted from drive LTO1 (/dev/rmt0) in library MANUAL.
[EMAIL PROTECTED]:/usr/tivoli/tsm/server/bin#



Gerald Wichmann
Senior Systems Development Engineer
ZANTAZ, Inc.
925.598.3099 (w)





Recovering a TSM 4.2.1 server

2003-07-03 Thread Gerald Wichmann
I've been given an LTO tape that is supposedly a TSM DB backup. The external
barcode label reads SU3689L1, which I assume is also the volume name as TSM
would see it. My intention is to recover the TSM database by doing a dsmserv
restore db. I do not have the volume history file or anything else from the
original server, so I created my own devconfig file and have a single LTO
tape drive connected and defined. Judging by the admin guide, the next step
I need to do is a dsmserv display dbbackupvolume devclass=tapeclass
vol=SU3689L1, however it comes back and errors out..

ANRD assd.c(1040): ThreadId0 Unexpected result code (15) from pvrAcquireMountPoint.
ANRD icstream.c(1606): ThreadId0 Error 87 opening input stream.
ANR2032E DISPLAY DBBACKUPVOLUMES: Command failed - internal server error detected.

I guess my primary question is whether some of my assumptions and techniques
are correct, particularly on the volume name, as I realize the barcode doesn't
necessarily equal the volume name recorded on the tape. Still, is that a good
assumption? Is it case-sensitive? Am I doing something obviously wrong?

For now I'm going to verify the tape drive works by putting in a blank tape
and attempting to write some data to it using TSM. If that works I'll return
to trying to recover the DB. Appreciate any help/feedback.

Gerald Wichmann
Senior Systems Development Engineer
ZANTAZ, Inc.
925.598.3099 (w)
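
For anyone in the same spot, a minimal devconfig for a single manual LTO drive looks roughly like this; purely illustrative, with placeholder names, using the GENERICTAPE/manual-library style of definition and the 4.2-level DEFINE DRIVE syntax that still takes DEVICE=.

define library manuallib libtype=manual
define drive manuallib lto1 device=/dev/rmt0
define devclass tapeclass devtype=generictape library=manuallib mountlimit=1

It may also be worth trying the volume name without the L1 suffix (SU3689), since the L1 is a media-type marker on the barcode and not necessarily part of the volser recorded on the tape.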





TSM 4.2.1 and a Manual drive

2003-07-03 Thread Gerald Wichmann
I've been playing with my single external LTO drive and having problems with
the good ole "insufficient mount points available" error. I know getting TSM
to work with a manual drive is quirky and I'm probably missing something, but
I'm able to label a cartridge fine using label libv, so I assume the drive is
defined properly and working. However, whenever I try a migration or run a
backup that forces a migration, I get the insufficient mount points error.

My devclass has DRIVES for the mount limit; I've also tried 1 there but it
didn't help. My node in question has MAXNUMMP set to 1, which should be fine
too. Looking for other suggestions if anyone has any. I searched the ADSM
list archive and someone had mentioned doing a reply and having to mount the
tape, but that doesn't seem to be the case here; it never posts a request for
me to reply to. I've tried it both with the tape in the drive and without it
in the drive. The tape has been labeled successfully, but nothing shows up
under query libv or query vol. As far as I know you don't have to check in a
tape for a manual library, do you? Maybe that's my problem.

Gerald Wichmann
Senior Systems Development Engineer
ZANTAZ, Inc.
925.598.3099 (w)
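
For the record, the checks that apply to the manual-drive case, as a sketch; none of this involves checking volumes in, since checkin libvolume applies to automated libraries rather than manual ones.

query devclass f=d   (confirm the device class points at the manual library and the mount limit isn't 0)
query drive          (confirm the drive is online)
query request        (see whether a mount request is actually outstanding)
reply nn             (answer it once the tape is loaded, where nn is the request number)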





TSM AIX compatibility

2003-06-05 Thread Gerald Wichmann
I'm just looking at some of the new IBM pSeries servers based on the power4
architecture. It's my understanding that they only run AIX 5L and not AIX
4.3.. That has me wondering about potential DR related problems with TSM.
Say my environment is currently TSM 4.2 on AIX 4.3.3 and I lose everything
in a disaster except my offsite tapes. Let's say the only replacement server
I can acquire is a new pSeries that only runs AIX 5L. Perhaps I even can
only get a newer version of TSM (5.1 or 5.2). Are there any compatibility
issues in recovering a TSM DB snapshot backup to a newer version of TSM
running on a newer version of AIX?

Gerald Wichmann
Senior Systems Development Engineer
Zantaz, Inc.
925.598.3099 (w)





Re: TSM AIX compatibility

2003-06-05 Thread Gerald Wichmann
True, perhaps I could, but you must see what I'm getting at: would I want to?
At that point I'd personally prefer to buy a pSeries based on the POWER4
architecture over POWER3 if I'm only looking at new servers. They're
significantly more powerful and yet similar in price (sometimes cheaper).

I must admit I have an ulterior motive for asking the question. I am
anticipating the need to recover customer data in the future, based on
obtaining a copy of their TSM database and tapes. If I purchase a server
for this purpose and it uses the POWER4 architecture, I'm not going to be
able to process any customer data based on TSM 4.2 on AIX 4.3. Probably in
my case it makes more sense to buy a POWER3-based server to be more
versatile.

Gerald Wichmann
Senior Systems Development Engineer
Zantaz, Inc.
925.598.3099 (w)

-Original Message-
From: Ochs, Duane [mailto:[EMAIL PROTECTED]
Sent: Wednesday, June 04, 2003 9:45 AM
To: [EMAIL PROTECTED]
Subject: Re: TSM  AIX compatibility

Realistically you would be able to find a machine that could run 4.3.3 and
you would be able to find TSM 4.2.

But to answer your questions:

1) Can I restore an older version DB snapshot to TSM 5.1?
 No, you cannot.

Duane

-Original Message-
From: Gerald Wichmann [mailto:[EMAIL PROTECTED]
Sent: Wednesday, June 04, 2003 11:36 AM
To: [EMAIL PROTECTED]
Subject: TSM  AIX compatibility


I'm just looking at some of the new IBM pSeries servers based on the power4
architecture. It's my understanding that they only run AIX 5L and not AIX
4.3.. That has me wondering about potential DR related problems with TSM.
Say my environment is currently TSM 4.2 on AIX 4.3.3 and I lose everything
in a disaster except my offsite tapes. Lets say the only replacement server
I can acquire is a new pSeries that only runs AIX 5L. Perhaps I even can
only get a newer version of TSM (5.1 or 5.2). Are there any compatibility
issues in recovering a TSM DB snapshot backup to a newer version of TSM
running on a newer version of AIX?

Gerald Wichmann
Senior Systems Development Engineer
Zantaz, Inc.
925.598.3099 (w)




Recovering a TSM server

2003-06-04 Thread Gerald Wichmann
Is it necessary to know the size of the original database and log for your
TSM server when doing a recovery, or is it sufficient to make them as big as
or larger than the originals before issuing the dsmserv recover db command?

Gerald Wichmann
Senior Systems Development Engineer
Zantaz, Inc.
925.598.3099 (w)





Restoring Domino Agent data to a different platform

2003-05-30 Thread Gerald Wichmann
Assume someone were to give me tapes created using TDP for Lotus Domino
v1.1.1.0, with Notes running on AIX 4.3.3 being backed up to TSM 4.2.0, also
running on AIX 4.3.3. I also get a copy of the TSM DB and recover from it.
When it comes time to restore that data, would I need the target machine to
be a duplicate of the original (i.e., Notes on AIX), or could it be an
alternative platform like Notes on Win2k (still using the TDP Notes agent)?

Gerald Wichmann
Senior Systems Development Engineer
Zantaz, Inc.
925.598.3099 (w)





TSM vs Veritas?

2003-03-04 Thread Gerald Wichmann
I'm looking for a good recent comparison of TSM vs Veritas Netbackup...
Appreciate any links/suggestions on where to find a comparison.

Thanks,

Gerald Wichmann
Senior Systems Development Engineer
Zantaz, Inc.
925.598.3099 (w)





HSM

2002-12-11 Thread Gerald Wichmann
Anyone have a particularly large HSM environment and been using it a while?
I'm curious on any experiences with the product, good or bad. I would
imagine there must be some people out there who need long term storage of
infrequently accessed files and have considered it as a solution for that.

Gerald Wichmann
Senior Systems Development Engineer
Zantaz, Inc.
925.598.3099 (w)






TSM Client Software Fixes URL/FTP site

2002-12-05 Thread Gerald Wichmann
What's the URL or FTP site these days to download client fixes? I've been
jumping around the new IBM site and it made me register, but it hasn't
provided me with a password yet (it says within 3 days). So I'm curious if
there's a direct URL I can get to it with.

Gerald Wichmann
Senior Systems Development Engineer
Zantaz, Inc.
925.598.3099 (w)






How to tell which tapes a restore requires?

2002-11-26 Thread Gerald Wichmann
Let me ask my question a different way. If I do a restore, the server
requests certain tapes to be mounted if they are offline and not physically
in the library. Is there any way to determine ahead of time what those tapes
are and put them in the library via a preview or something or do I always
have to run the restore and wait to see what tapes it wants?





Querying which volumes have a particular file or node

2002-11-25 Thread Gerald Wichmann
Given a file with some filenames in it, does someone have a macro/command I
can run against the filenames in that file to determine what volumes those
files are on? I.e. so I can identify which offline volumes I need to put in
my library for a restore before actually running the restore command.

Similarly, given a node, how do I determine what volumes the node has data
backed up to.

I believe both these questions involve a select statement and have been
discussed before so someone should have them handy.
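
For the file-level case, a rough sketch would be a select against CONTENTS
(node and file names below are placeholders, and be warned these queries can
run a very long time on a big server):

select volume_name, filespace_name, file_name from contents where node_name='MYNODE' and file_name like '%somefile%'

For the node-level case, the VOLUMEUSAGE select shown under the 2002-11-26
posting above covers it.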

Thanks,
Gerald





DB2 Backups on Solaris 8 Wrong documentation?

2002-11-11 Thread Gerald Wichmann
According to the Backing up DB2 PDF for backing up DB2 on Solaris, on page
92 step 2 it says:

2. The /etc/system must have, as a minimum, the following values. Modify
them as necessary, then reboot.
set lwp_default_stksize = 0x4000
set rpcmod:svc_run_stksize = 0x4000
set semsys:seminfo_semmap = 50
set semsys:seminfo_semmni = 50
set semsys:seminfo_semmns = 300
set semsys:seminfo_semmnu = 150
set semsys:seminfo_semopm = 50
set semsys:seminfo_semume = 50
set semsys:seminfo_semmsl = 125

The above svc_run_stksize parameter throws an error during Solaris boot that
says something to the effect of it being an invalid parameter. Is this a
typo in the book? Should it actually be svc_default_stksize?

Looking at http://docs.sun.com/db/doc/806-6779/6jfmsfr7h?q=set+rpcmod

I don't see an svc_run_stksize
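
If it is indeed a typo, the corrected line would presumably read as below;
that's only a guess based on the documented tunable name, nothing confirmed
by IBM:

set rpcmod:svc_default_stksize = 0x4000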

Anyone?  IBM??

Gerald Wichmann
Senior Systems Development Engineer
Zantaz, Inc.
925.598.3099 (w)






Image Backups

2002-09-27 Thread Gerald Wichmann

-How does an online image backup work exactly in regards to open files?
According to the docs, "Corruption of the backup may occur if applications
write to the volume while the backup is in progress. In this case, run fsck
after a restore." So say this occurs.. What do you end up with backed up in
the case of a file that was written to during the backup? The before version
or the after version? Or some hybrid fuzzy file after the restore? I don't
entirely see what good running fsck is going to do on a file whose bytes
changed in the middle of backing it up. Isn't the image backup just working
at the bit level, paying no attention to open files or changing bytes? I
guess I'd just like to hear more info on how an online image backup would
work.

-under offline and online image backup it also says "For linux86 only: TSM
performs an online image backup of file systems residing on a logical volume
created by the linux logical volume manager during which the volume is
available to other system applications." Can I infer that if I am just using
reiserfs (not on LVM) I cannot do an online image backup - only an offline
one via the device itself?

-I find it interesting that you can do an incremental image backup, and I'm
not entirely sure how that works either. Isn't an image backup just backing
up the filesystem bit by bit? How does it know which bits changed or which
files? If I read this correctly, it only works with LVM volumes. It says "Two
types of backup only apply to logical volumes: mode=selective (the default)
and mode=incremental." So in the case of linux, if you're using reiserfs
I'm guessing you can't do an incremental image backup. You'd have to use
ext3 or something similar.
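
For reference, the invocations in question look roughly like this as I read
the docs (the filespace name is only an example):

dsmc backup image /home -mode=selective
dsmc backup image /home -mode=incremental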

Gerald Wichmann
Senior Systems Development Engineer
Zantaz, Inc.
925.598.3099 (w)






Re: TSM with library 1 drive

2002-09-25 Thread Gerald Wichmann

1. Nothing wrong with it per se.. It works great. You'll have to allocate
some disk space for your reclamation pool (a sketch of that setup follows
below), but in my experience it works well. There are also the obvious
limitations that having only 1 drive presents: when someone requests a
restore, if their data is on tape and another user is already doing a
restore, they'll have to wait for that drive to free up. Similar potential
limitations apply to backup or migration of data. There's no redundancy, so
if the drive dies you're out of luck until it's fixed or replaced (you can't
back up or restore any data to/from tape). These are the same things you'd
bring up if you were considering 2 drives vs 3.. or 4.. The NEED for 2 drives
over one is mostly a throughput, redundancy, and availability issue.
2. Nothing wrong with an autoloader.. it works just like a tape library, more
or less.
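
As a rough sketch of the single-drive reclamation setup (pool, device class,
and directory names here are only placeholders, and the sizes would need
tuning):

define devclass reclaimfile devtype=file maxcapacity=2048m directory=/tsm/reclaim
define stgpool reclaimpool reclaimfile maxscratch=50
update stgpool tapepool reclaimstgpool=reclaimpool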

Gerald Wichmann
Senior Systems Development Engineer
Zantaz, Inc.
925.598.3099 (w)

-Original Message-
From: Ofer vaknin [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, September 25, 2002 6:34 AM
To: [EMAIL PROTECTED]
Subject: TSM with library  1 drive
Importance: High

Hi all

1. Does anybody know of any problems with installing TSM with an IBM
library with 1 drive only - does it need 2 drives?
2. Can TSM be installed with an IBM autoloader, and how will it function?

Thanks In Advance
Ofer Vaknin







Re: Temporary Backup

2002-07-17 Thread Gerald Wichmann

Create a new management class with the appropriate retention settings
in the domain that node is assigned to, then on that node add include
statements pointing to that management class..

e.g.

include /somefilesystem/.../* newmanagementclassname
include /somefilesystem/* newmanagementclassname

everything for that filesystem will use the retention settings of the new
mgmtclass.
Everything else will use the default management class.
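
A rough server-side sketch of the class itself (domain, policy set, class,
and pool names are placeholders; pick the copy group values to match the
retention you need):

define mgmtclass mydomain mypolicyset newmanagementclassname
define copygroup mydomain mypolicyset newmanagementclassname type=backup destination=backuppool verexists=nolimit verdeleted=nolimit retextra=10 retonly=10
validate policyset mydomain mypolicyset
activate policyset mydomain mypolicyset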

Gerald Wichmann
Senior Systems Development Engineer
Zantaz, Inc.
925.598.3099 (w)

-Original Message-
From: Coats, Jack [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, July 17, 2002 12:51 PM
To: [EMAIL PROTECTED]
Subject: Temporary Backup

I have one server where I need to back up everything except one file system,
just like I do all my others.

Then I need to back up this ONE file system with a different retention than
everything else
(like 10 days, where everything else is kept for 60 after it is deleted).

How should I set this up?

... TIA ... Jack





Re: solaris experts plz help

2002-07-11 Thread Gerald Wichmann

Yes that's actually a typo (the lb in places where it should be op or
mt). My conf files are correct in that sense (i.e. I wasn't
cutting/pasting them). The problem I'm having is I'm not sure how to
interpret what LUN a device is on or what the TARGET is by looking at the
output from probe-scsi-all..

Gerald Wichmann
Senior Systems Development Engineer
Zantaz, Inc.
925.598.3099 (w)

-Original Message-
From: Jozef Zatko [mailto:[EMAIL PROTECTED]]
Sent: Thursday, July 11, 2002 1:23 AM
To: [EMAIL PROTECTED]
Subject: Re: solaris experts plz help

Hello Gerald,
you have to change your conf files like this:

lb.conf
if the scsi target and luns for your libraries are OK, you do not have to
change this file.

op.conf
replace all lb with op (in this file you define only optical drives,
not changer). For each optical drive create one stanza with correct target
and lun of each drive (so you will have 10 stanzas in your conf file).

Example

Name=op class=scsi
 Target=X lun=Y;

X is SCSI target ID of optical drive and Y is LUN of that drive

mt.conf
here replace lb with mt (in this file you define only tape drives, not
changer). For each tape drive create one stanza with correct target and
lun of each drive (so you will have 4 stanzas in your conf file).

Example

Name=mt class=scsi
 Target=X lun=Y;

X is SCSI target ID of tape drive and Y is LUN of that drive

Ing. Jozef Zatko
Login a.s.
Dlha 2, Stupava
tel.: (421) (2) 60252618




Gerald Wichmann [EMAIL PROTECTED]
Sent by: ADSM: Dist Stor Manager [EMAIL PROTECTED]
11.07.2002 01:14
Please respond to ADSM: Dist Stor Manager


To: [EMAIL PROTECTED]
cc:
Subject:solaris experts plz help


Below is the output of my server's probe-scsi-all.. I have 2 ATL M1500 LTO
libraries with 2 drives each, and one HP 1200ex optical library with 10
optical drives. I need to populate the mt.conf, lb.conf, and op.conf
files accordingly but what I've done doesn't seem to be working. When
doing
the add_drv command, it loads the device driver but then fails to attach.
When I do add_drv for op it works but I only get a 0op and 1op so I'm
not sure why it didn't pick up 10 drives.

So my first question is, what do I put in the various conf files? I'm not as
familiar with solaris so could use some help. Currently my files look like
this:

Lb.conf

  Name=lb class=scsi
Target=5 lun=0;
  Name=lb class=scsi
Target=4 lun=0;
  Name=lb class=scsi
Target=4 lun=1;

Op.conf

  Name=lb class=scsi
Target=4 lun=0;
  Name=lb class=scsi
Target=4 lun=1;

Mt.conf

  Name=lb class=scsi
Target=5 lun=0;
  Name=lb class=scsi
Target=4 lun=0;

Probe-scsi-all:

/pci@1f,2000/pci@1/scsi@5
Target 0
  Unit 0   Removable Medium changerM4 DATA MagFile 2.10
Target 1
  Unit 0   Removable Tape HP  Ultrium 1-SCSI  E15V
Target 2
  Unit 0   Removable Tape HP  Ultrium 1-SCSI  E15V

/pci@1f,2000/pci@1/scsi@4
Target 0
  Unit 0   Removable Medium changerM4 DATA MagFile 2.10
Target 1
  Unit 0   Removable Tape HP  Ultrium 1-SCSI  E15V
Target 2
  Unit 0   Removable Tape HP  Ultrium 1-SCSI  E15V

/pci@1f,4000/scsi@4,1
Target 2
  Unit 0   Removable Device type 7 HP  C1113J  1.10
Target 3
  Unit 0   Removable Device type 7 HP  C1113J  1.10
Target 4
  Unit 0   Removable Device type 7 HP  C1113J  1.10
Target 5
  Unit 0   Removable Device type 7 HP  C1113J  1.10
Target 6
  Unit 0   Removable Device type 8 HP  C1107J  1.40

/pci@1f,4000/scsi@4
Target 1
  Unit 0   Removable Device type 7 HP  C1113J  1.10
Target 2
  Unit 0   Removable Device type 7 HP  C1113J  1.10
Target 3
  Unit 0   Removable Device type 7 HP  C1113J  1.10
Target 4
  Unit 0   Removable Device type 7 HP  C1113J  1.10
Target 5
  Unit 0   Removable Device type 7 HP  C1113J  1.10
Target 6
  Unit 0   Removable Device type 7 HP  C1113J  1.10



Gerald Wichmann
Senior Systems Development Engineer
Zantaz, Inc.
925.598.3099 (w)



solaris expert needed

2002-07-11 Thread Gerald Wichmann

Well I'm making progress but something is clearly still wrong. I started a
call with Tivoli but I'm waiting to hear back from them. If anyone sees
anything obvious that I'm doing wrong, let me know, as I'm not very good with
Solaris-specific device driver configuration:


bash-2.03# cat lb.conf | grep -v #
name=lb class=scsi
target=0 lun=0;
name=lb class=scsi
target=6 lun=1;
bash-2.03# cat mt.conf | grep -v #
   name=mt class=scsi
   target=1 lun=0;
   name=mt class=scsi
   target=2 lun=0;
bash-2.03# cat op.conf | grep -v #
name=op class=scsi
target=1 lun=0;
name=op class=scsi
target=2 lun=0;
name=op class=scsi
target=3 lun=0;
name=op class=scsi
target=4 lun=0;
name=op class=scsi
target=5 lun=0;
name=op class=scsi
target=6 lun=0;

name=op class=scsi
target=2 lun=1;
name=op class=scsi
target=3 lun=1;
name=op class=scsi
target=4 lun=1;
name=op class=scsi
target=5 lun=1;
bash-2.03# /usr/sbin/add_drv lb
devfsadm: driver failed to attach: lb
Warning: Driver (lb) successfully added to system but failed to attach
bash-2.03# /usr/sbin/add_drv mt
devfsadm: driver failed to attach: mt
Warning: Driver (mt) successfully added to system but failed to attach
bash-2.03# /usr/sbin/add_drv op
bash-2.03# ls /dev/rmt
0op   0opt  1op   1opt  2op   2opt  3op   3opt  4op   4opt  5op   5opt  6op
6opt  7op   7opt  8op   8opt  9op   9opt
bash-2.03#



Gerald Wichmann
Senior Systems Development Engineer
Zantaz, Inc.
925.598.3099 (w)



solaris experts plz help

2002-07-10 Thread Gerald Wichmann

Below is the output of my server's probe-scsi-all.. I have 2 ATL M1500 LTO
libraries with 2 drives each, and one HP 1200ex optical library with 10
optical drives. I need to populate the mt.conf, lb.conf, and op.conf
files accordingly but what I've done doesn't seem to be working. When doing
the add_drv command, it loads the device driver but then fails to attach.
When I do add_drv for op it works but I only get a 0op and 1op so I'm
not sure why it didn't pick up 10 drives.

So my first question is, what do I put in the various conf files? I'm not as
familiar with solaris so could use some help. Currently my files look like
this:

Lb.conf

  Name=lb class=scsi
Target=5 lun=0;
  Name=lb class=scsi
Target=4 lun=0;
  Name=lb class=scsi
Target=4 lun=1;

Op.conf

  Name=lb class=scsi
Target=4 lun=0;
  Name=lb class=scsi
Target=4 lun=1;

Mt.conf

  Name=lb class=scsi
Target=5 lun=0;
  Name=lb class=scsi
Target=4 lun=0;

Probe-scsi-all:

/pci@1f,2000/pci@1/scsi@5
Target 0
  Unit 0   Removable Medium changerM4 DATA MagFile 2.10
Target 1
  Unit 0   Removable Tape HP  Ultrium 1-SCSI  E15V
Target 2
  Unit 0   Removable Tape HP  Ultrium 1-SCSI  E15V

/pci@1f,2000/pci@1/scsi@4
Target 0
  Unit 0   Removable Medium changerM4 DATA MagFile 2.10
Target 1
  Unit 0   Removable Tape HP  Ultrium 1-SCSI  E15V
Target 2
  Unit 0   Removable Tape HP  Ultrium 1-SCSI  E15V

/pci@1f,4000/scsi@4,1
Target 2
  Unit 0   Removable Device type 7 HP  C1113J  1.10
Target 3
  Unit 0   Removable Device type 7 HP  C1113J  1.10
Target 4
  Unit 0   Removable Device type 7 HP  C1113J  1.10
Target 5
  Unit 0   Removable Device type 7 HP  C1113J  1.10
Target 6
  Unit 0   Removable Device type 8 HP  C1107J  1.40

/pci@1f,4000/scsi@4
Target 1
  Unit 0   Removable Device type 7 HP  C1113J  1.10
Target 2
  Unit 0   Removable Device type 7 HP  C1113J  1.10
Target 3
  Unit 0   Removable Device type 7 HP  C1113J  1.10
Target 4
  Unit 0   Removable Device type 7 HP  C1113J  1.10
Target 5
  Unit 0   Removable Device type 7 HP  C1113J  1.10
Target 6
  Unit 0   Removable Device type 7 HP  C1113J  1.10



Gerald Wichmann
Senior Systems Development Engineer
Zantaz, Inc.
925.598.3099 (w)



Re: TSM storage problem

2002-07-10 Thread Gerald Wichmann

I would imagine it should be fine installing it afterwards.. It should just
dump various *.lic files into the TSM server directory. The license files
are only available off the base cd.. i.e. 5.1.0, or 4.2.0, or whichever
version you have. If you download an update like 5.1.1, it doesn't include
the license files. That's why when you do a fresh install you always start
with the base level off the cd, then upgrade it with the update. So you get
the license files installed.

I'd first check whether there are *.lic files in the directory and try
licensing it with the command below. If that doesn't work, the package
probably wasn't installed. In that case I would halt the TSM server (type
halt at the TSM admin prompt, or if you're using Windows, stop the TSM server
service), install the license files, and then start the TSM server again (in
Windows, start the TSM server service). Then try registering it as below.

If you're uncomfortable doing it you could always call Tivoli support and
have them walk you through it, but it's pretty painless. I doubt you'd hurt
anything, as TSM is pretty resilient.

Gerald Wichmann
Senior Systems Development Engineer
Zantaz, Inc.
925.598.3099 (w)

-Original Message-
From: Simeon Johnston [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, July 10, 2002 2:28 PM
To: [EMAIL PROTECTED]
Subject: Re: TSM storage problem

That's what I had thought also.  The people who set it up (set it up for
us while training us etc) said it didn't matter either.
If the licensing package was not installed can it be installed
afterwards?  There is an option to install the licensing package
(something like that.  It's not too clear on exactly what it's for) but
I don't want to screw up what I already have setup.

Even the sales rep that sold it to us said the licensing wasn't necessary.
Funky...

sim

Gerald Wichmann wrote:

To my knowledge, licensing doesn't matter. I have had many servers in the
past run for quite some time before I bothered licensing them, and a server
here continues to run without the licenses registered. It gripes in the
activity log about being non-compliant, however no functionality has been
disabled. It will be interesting if that is indeed the problem, as either I
was wrong or Tivoli changed it and made licensing matter. The thing with
licensing is that all the license files are on the base cd, so it doesn't
really know whether you paid for them or not. It would be easy for you to
check if this is your problem, assuming you installed the license
package/lpp's, simply by doing a reg lic file=mgsyslan.lic number=1 (or
however many you want to license for the number parameter). Then do a q lic
to see if it's compliant.

But it does indeed look like that's it, based on the definition of
that ANR message.. looks like you found your problem.





Re: TSM storage problem

2002-07-09 Thread Gerald Wichmann

Do a q vol f=d and check the Access field on your disk volumes. They
should all be ReadWrite. If they are ReadOnly then this would explain
why TSM isn't writing to them and is telling you the storage pool is full even
though at first glance it appears it isn't.

TSM can change the access automatically to ReadOnly or even something else
if an event occurs that warrants it. E.g. if TSM loses access to the
filesystem, it's likely to mark all the volumes on that filesystem as
unavailable. There are also cases where it'll change it to some of the
other values. To see why it did so you have to scan through the activity log
(e.g. do a q act begind=today-7 search=sp1 or something similar).

The other possibility is the max size threshold which someone else
mentioned. Also, although remote, check the stgpool access parameter (q stg
f=d) to ensure it's readwrite.
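
A quick way to spot the offenders is a select along these lines (worth
double-checking the column names against your server level):

select volume_name, stgpool_name, access from volumes where access<>'READWRITE'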

Gerald Wichmann
Senior Systems Development Engineer
Zantaz, Inc.
925.598.3099 (w)

-Original Message-
From: Simeon Johnston [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, July 09, 2002 11:36 AM
To: [EMAIL PROTECTED]
Subject: TSM storage problem

We just got a TSM server up and running (still in a testing phase but
it's running).
I recently tried to backup a new node I created and got this error -
ANS1329S Server out of data storage space

Now, the server only has an 80GB partition for data storage at the
moment.  That is split into 7 10GB files and one 4GB.
The 4GB is 95% full but the others are around 30 - 40% full.

Storage pool info --

Storage Pool  Device Class  Est. Capacity (MB)  Pct Util  Pct Migr  High Mig Pct  Low Mig Pct
STORAGE       DISK          76,480.0            41.7      41.7      90            70

Volume Name      Storage Pool  Device Class  Est. Capacity (MB)  Pct Util  Status
X:\SERVER1\SP1   STORAGE       DISK          10,240.0            34.2      On-Line
X:\SERVER1\SP2   STORAGE       DISK          10,240.0            41.0      On-Line
X:\SERVER1\SP3   STORAGE       DISK          10,240.0            37.8      On-Line
X:\SERVER1\SP4   STORAGE       DISK          10,240.0            34.0      On-Line
X:\SERVER1\SP5   STORAGE       DISK          10,240.0            31.4      On-Line
X:\SERVER1\SP6   STORAGE       DISK          10,240.0            39.0      On-Line
X:\SERVER1\SP7   STORAGE       DISK          10,240.0            49.1      On-Line
X:\SERVER1\SP8   STORAGE       DISK          4,800.0             96.4      On-Line

Why would I get this error if there is more than enough room for a backup?
I'm new to TSM administration so maybe I'm missing something.

sim
And keep in mind it could be something REALLY stupid that I just forgot
about.



Re: TSM storage problem

2002-07-09 Thread Gerald Wichmann

No that just shows you what the highest percentage utilized was since the
last time you did a reset command on that statistic.

Gerald Wichmann
Senior Systems Development Engineer
Zantaz, Inc.
925.598.3099 (w)

-Original Message-
From: Simeon Johnston [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, July 09, 2002 2:36 PM
To: [EMAIL PROTECTED]
Subject: Re: TSM storage problem

Does the Max Pct Util have anything to do with it?  Is this a max
allowed setting?

   Available Space (MB): 5,120
 Assigned Capacity (MB): 5,020
 Maximum Extension (MB): 100
 Maximum Reduction (MB): 5,016
  Page Size (bytes): 4,096
 Total Usable Pages: 1,284,608
 Used Pages: 233
   Pct Util: 0.0
  Max. Pct Util: 0.0
   Physical Volumes: 2
 Log Pool Pages: 512
 Log Pool Pct. Util: 0.89
 Log Pool Pct. Wait: 0.00
Cumulative Consumption (MB): 904.78
Consumption Reset Date/Time: 05/01/2002 22:42:01

sim

Tim Brown wrote:

look at your recovery logs





Re: TSM storage problem

2002-07-09 Thread Gerald Wichmann

No that's not it.. it would work regardless.

Double-check which domain the node is assigned to and verify it is backing up
with the management class, and therefore the stgpool, you are thinking it
should be.

I'd recommend looking through your activity log on or about when the
problem started and see if TSM logged anything relevant. You can find out
when the first problem occurred by doing a q act begind=today-7
search=full, or look for the ANR/ANS message ID you're getting about the
storage pool being full. Then when you see the time/date stamp, do a q act
begind=today-? begint=??:?? and read around that timeframe to see what
other messages there are. I'm sure something happened somewhere if it was
working and then stopped.

Gerald Wichmann
Senior Systems Development Engineer
Zantaz, Inc.
925.598.3099 (w)

-Original Message-
From: Simeon Johnston [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, July 09, 2002 1:22 PM
To: [EMAIL PROTECTED]
Subject: Re: TSM storage problem

Did all that.  Everything looks fine.  It's all Read/Write.
Here's a silly question.  Would this problem occur if, say, the license
check failed?
I just noticed that it says

Server License Compliance: FAILED

Is this the problem?  I'd think this would be a different error.

sim


Gerald Wichmann wrote:

Do a q vol f=d and check the Access field on your disk volumes. They
should all be ReadWrite. If they are ReadOnly then this would explain
why TSM isn't writing to them and telling you the storage pool is full even
though at first glance it appears it isn't.

TSM can change the access automatically to ReadOnly or even something else
if an event occurs that warrants it. E.g. if TSM loses access to the
filesystem, it's likely to mark all the volumes on that filesystem as
unavailable. There are also cases where it'll change it to some of the
other values. To see why it did so you have to scan through the activity log
(e.g. do a q act begind=today-7 search=sp1 or something similar).

The other possibility is the max size threshold which someone else
mentioned. Also, although remote, check the stgpool access parameter (q stg
f=d) to ensure it's readwrite.

Gerald Wichmann
Senior Systems Development Engineer
Zantaz, Inc.
925.598.3099 (w)

-Original Message-
From: Simeon Johnston [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, July 09, 2002 11:36 AM
To: [EMAIL PROTECTED]
Subject: TSM storage problem

We just got a TSM server up and running (still in a testing phase but
it's running).
I recently tried to backup a new node I created and got this error -
ANS1329S Server out of data storage space

Now, the server only has an 80GB partition for data storage at the
moment.  That is split into 7 10GB files and one 4GB.
The 4GB is 95% full but the others are around 30 - 40% full.

Storage pool info --

Storage Pool  Device Class  Est. Capacity (MB)  Pct Util  Pct Migr  High Mig Pct  Low Mig Pct
STORAGE       DISK          76,480.0            41.7      41.7      90            70

Volume Name      Storage Pool  Device Class  Est. Capacity (MB)  Pct Util  Status
X:\SERVER1\SP1   STORAGE       DISK          10,240.0            34.2      On-Line
X:\SERVER1\SP2   STORAGE       DISK          10,240.0            41.0      On-Line
X:\SERVER1\SP3   STORAGE       DISK          10,240.0            37.8      On-Line
X:\SERVER1\SP4   STORAGE       DISK          10,240.0            34.0      On-Line
X:\SERVER1\SP5   STORAGE       DISK          10,240.0            31.4      On-Line
X:\SERVER1\SP6   STORAGE       DISK          10,240.0            39.0      On-Line
X:\SERVER1\SP7   STORAGE       DISK          10,240.0            49.1      On-Line
X:\SERVER1\SP8   STORAGE       DISK          4,800.0             96.4      On-Line

Why would I get this error if there is more than enough room for a backup?
I'm new to TSM administration so maybe I'm missing something.

sim
And keep in mind it could be something REALLY stupid that I just forgot
about.





TSM Server Recovery

2002-07-03 Thread Gerald Wichmann

If you were given a TSM DB backup created from one platform, is it possible
to restore to another platform?

e.g. TSM 4.2.1 running on Solaris 2.8..

could you take the DB backup of that and restore to:

TSM 4.2.1 running on win2k?

If not is there a way to move a TSM server from one platform to another?
I.e. via export server?
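
If export/import is the route, it would look roughly like this (the device
class name and parameters are only illustrative):

export server filedata=all devclass=dc.lib_tape_1 scratch=yes

followed by an import server filedata=all ... on the target platform's
server, pointed at the volumes the export wrote.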



Full Disk Pool Bug PART 2

2002-06-24 Thread Gerald Wichmann

Nevermind I found the problem. Funny how that always works. Soon as you ask
the question you think of another idea and sure enough find the problem.
Some of the disk pool volumes were read-only due to a problem the server
experienced earlier. Soon as I realized that I just updated them to
read-write and voila.. off it goes.
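
For anyone hitting the same thing, the fix is a one-liner along these lines
(the pool name is just an example):

update volume * access=readwrite wherestgpool=backuppool whereaccess=readonly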

Gerald Wichmann
Senior Systems Development Engineer
Zantaz, Inc.
925.598.3099 (w)



Full Disk Pool Bug?

2002-06-24 Thread Gerald Wichmann

My clients on a test backup server (4.2.1) seem to think the disk pool is
full and go right to the tape pool. So I took out the tape pool as the next
stgpool for the diskpool and now the backups fail altogether. Apparently
they believe the disk pool has no space. Clearly though, it does.. What's
going on here? Is this a known bug? I've tried halting and restarting the
server but that didn't help. Looking for suggestions..??



ANR0525W Transaction failed for session 4 for node CYGNUS-PRI-10.0.104.1
(Linux86) - storage media inaccessible.
ANR0522W Transaction failed for session 4 for node CYGNUS-PRI-10.0.104.1
(Linux86) - no space available in storage pool BACKUPPOOL and all successor
pools.
ANR0522W Transaction failed for session 4 for node CYGNUS-PRI-10.0.104.1
(Linux86) - no space available in storage pool BACKUPPOOL and all successor
pools.
ANR0522W Transaction failed for session 4 for node CYGNUS-PRI-10.0.104.1
(Linux86) - no space available in storage pool BACKUPPOOL and all successor
pools.
ANR0522W Transaction failed for session 4 for node CYGNUS-PRI-10.0.104.1
(Linux86) - no space available in storage pool BACKUPPOOL and all successor
pools.
ANR0522W Transaction failed for session 4 for node CYGNUS-PRI-10.0.104.1
(Linux86) - no space available in storage pool BACKUPPOOL and all successor
pools.
ANR0403I Session 4 ended for node CYGNUS-PRI-10.0.104.1 (Linux86).


tsm: SERVER1q stg

Storage  Device   EstimatedPctPct  High  Low  Next Stora-
Pool NameClass NameCapacity   Util   Migr   Mig  Mig  ge Pool
   (MB) Pct  Pct
---  --  --  -  -    ---  ---
ARCHIVEPOOL  DISK   1,030.00.50.590   70
BACKUPPOOL   DISK 122,891.0   33.3   33.390   40
SPACEMGPOOL  DISK   0.00.00.090   70
TAPEPOOL DC.LIB_TA-   436,090.3   30.9   60.090   70
  PE_1

tsm: SERVER1help ANR0522W

---


ANR0522W Transaction failed for session session number for node node name
(client platform) - no space available in storage pool pool name and all
successor pools.

Explanation: The server ends a database update transaction for the specified
session because the storage pool specified in the client's management class
copy group does not contain enough free space to hold the files sent from
the client. Successor storage pools to the one specified on the copy group
do not contain enough free space.

System Action: The specified session is ended and server operation
continues.

User Response: An authorized administrator can issue the DEFINE VOLUME
command to add storage to one or more storage pools in the storage
hierarchy. This action may also involve creating storage space by using an
operating system specific utility.



Gerald Wichmann
Senior Systems Development Engineer
Zantaz, Inc.
925.598.3099 (w)



Re: Copygroup settings for Incremental and full backups

2002-06-18 Thread Gerald Wichmann

First of all, referring to full/incrementals in regards to TSM/ADSM doesn't
really apply (unless you're talking about backing up DBs and similar
applications). All backups are incremental. TSM simply examines your file
system and only backs up files that A) haven't been backed up yet, or B) have
changed (a newer version than a previously backed up file). So the very first
backup on a node will in essence be a full in some respect, but from there
on out they're all incrementals. How long the various files/directories are
retained depends on the domain the node is assigned to (and the subsequent
hierarchy). On the other hand, if you actually want a FULL backup and to
retain it for a long time, you may find doing an archive more to your liking.

You need to look into how management classes work. In a given domain, you
have a hierarchy of domain > policy set > management class > copygroup. Each
domain has one active policy set. Each policy set can have many management
classes. Each management class has 2 copygroups (a backup and an archive
copygroup). For a given domain that you create, you assign a default
management class, which is basically the management class (and thereby the
copygroup parameters VERE, VERD, RETE, RETO) that all nodes use for their
retention settings, *provided you don't specify any other specific management
classes*.

Basically what you need to do is for whatever domain your node is in, create
a new management class and then the 2 copygroups for that management class
that have the new retention settings you're after. Then for the node in your
dsm.sys (or dsm.opt in the case of windows) file, use include statements to
send whatever you wish to include to the specific management class. E.g.

Create new management class called SPECIAL.. then in dsm.sys/opt:

Include /usr/.../*
Include /tmp/.../*
Include /home/.../* SPECIAL

Whatever the default management class is provides the retention settings
for everything except files that fall under my include
/home/.../* SPECIAL statement above, which tells TSM to bind those files to
that specific management class.

Again it's all in the admin guide.. go look up and read about the entire
domain/policy set/management class/copygroup hierarchy.

Regards,

Gerald Wichmann
Senior Systems Development Engineer
Zantaz, Inc.
925.598.3099 (w)

-Original Message-
From: Yahya Ilyas [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, June 18, 2002 8:56 AM
To: [EMAIL PROTECTED]
Subject: Copygroup settings for Incremental and full backups

Is it possible to setup one filesystem of a node to keep backup for 365 days
and other filesystems' backups of that node to 35 days?

currently I have setup copygroup parameters of all domains  to 35
(verexists, verdeleted, retextra, retonly) for all incremental and full
backups.  Now on one ADSM client machine I need to have daily incremental
backups and keep it for 35 days, and on one of the filesystems of that node,
a full backup after every 30 days and keep that backup for 12 months.

I want to keep that full backup of that one filesystem for 12 months, and
backups of all other filesystems of that node to 35 versions and days.

I defined a separate domain, policy, mgmtclass, copygroup etc.  and
transferred that node to this new domain.  Defined an incremental schedule
and a Selective backup of the filesystem for the monthly full backup.  But now
in this setting, if I set VEREXISTS, VERDELETED, RETEXTRA, RETONLY of the
copygroup to 365, it will keep all of the filesystems of that node for 365
days.  Is it possible to do what I want with these backups?
Thanks
Yahya


   -
   Yahya Ilyas
   Systems Programmer Sr
   Systems Integration  Management
   Information Technology
   Arizona State University, Tempe, AZ 85287-0101

   [EMAIL PROTECTED]
   Phone: (480) 965-4467





Re: manual drive question!!!

2002-06-13 Thread Gerald Wichmann

For tape label operations I think you're out of luck as that's just the way
TSM does things.. For other operations you probably want to increase the
Mount Retention setting on your manual tape's device class.. (i.e. upd
devc devcname mountr=)

Do a help upd devc for more information on that parameter..default is 60
minutes.
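
e.g. something like the following, with the device class name and minutes
purely illustrative:

upd devc manualtape mountretention=120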

Regards,

Gerald Wichmann
Senior Systems Development Engineer
Zantaz, Inc.
925.598.3099 (w)

-Original Message-
From: Justin Bleistein [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, June 11, 2002 12:04 PM
To: [EMAIL PROTECTED]
Subject: manual drive question!!!

When I'm using manual drives and I label a tape, it spits the tape out each
time when it's done labelling. This sucks because I'm not at the location
where the hardware is, and it's annoying calling someone all the time just to
pop the tape back in. Does anyone know of a parameter which keeps the tape
in the drive even after a successful TSM tape label operation? Any help
would be appreciated, thanks!

--Justin Richard Bleistein



Re: Busy log files

2002-06-13 Thread Gerald Wichmann

A simple way would be to exclude the log files from backup, and have a
preschedule command run that copies the log files someplace else or to
another filename and let the backup process back those up instead. E.g.
something like:

Dsm.sys:

Preschedulecmd "cp /path/to/logs/* /path/to/logs/backup/"

Exclude /path/to/logs/*


Gerald Wichmann
Sr. Systems Development Engineer
Zantaz, Inc.
650.625.0436 home
408.836.9062 cell


-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]] On Behalf Of
Dan Foster
Sent: Thursday, June 13, 2002 5:06 AM
To: [EMAIL PROTECTED]
Subject: Busy log files

With ADSM 3.1 (no idea if this behavior has changed in TSM v4), when
we back up a busy UNIX application server, the app server is invariably
writing multiple log file entries to its app logs per second.

ADSM seems to be making an attempt to back up the files, sees a file has
changed mid-transmit to the ADSM server, then retries the backup of the file,
sees it's changed yet again, and so forth until X number of attempts... then
it gives up and moves on to the next file.

The implications of this are not good:

a) backups of busy app servers will stall on the logfiles

b) logfiles are the least likely files to ever get backed up if on a busy
server

Is there any way to deal with this in a reasonable way, short of quiescing
the application? (Quiescing = downtime, which we can't quite do...)

-Dan Foster
IP Systems Engineering (IPSE)
Global Crossing Telecommunications



Re: Getting total amount of active versions.

2002-06-13 Thread Gerald Wichmann

I believe you'll find the select statement for doing that to be rather nasty
(i.e. it takes a tremendous amount of time, to the point of being impractical).
I don't recall the select statement, but a quicker way to determine the size of
active files is to do an export node * filedata=allactive preview=yes and
extract the relevant value from the activity log.

e.g. on a tsm server with 4 client nodes, roughly 153,000 files and only 4GB
of data it took 12 minutes to complete.. you can see the output below:

06/13/02   20:52:57  ANR0609I EXPORT NODE started as process 2.

06/13/02   21:05:07  ANR0986I Process 2 for EXPORT NODE running in the
  BACKGROUND processed 155674 items for a total of
  3,760,983,998 bytes with a completion state of SUCCESS
at
  21:05:07.


tsm: SERVER1q occ

Node Name      Type  Filespace Name   FSID  Storage Pool  Number of Files  Physical Space (MB)  Logical Space (MB)
DB2            Bkup  /DS50            6     BACKUPPOOL    2                24.04                24.04
GWICHMAN.NT    Bkup  \\gwichman1\c$   1     BACKUPPOOL    27,322           1,230.12             1,230.12
GWICHMAN.NT    Bkup  \\gwichman1\d$   2     BACKUPPOOL    10,847           662.53               662.53
GWICHMAN.RH73  Bkup  /                1     BACKUPPOOL    115,576          2,129.38             2,129.38
GWICHMAN.RH73  Bkup  /boot            2     BACKUPPOOL    34               9.12                 9.12

tsm: SERVER1



Regards,

Gerald Wichmann
Senior Systems Development Engineer
Zantaz, Inc.
925.598.3099 (w)

-Original Message-
From: Emil S. Hansen [mailto:[EMAIL PROTECTED]]
Sent: Thursday, June 13, 2002 6:37 AM
To: [EMAIL PROTECTED]
Subject: Getting total amount of active versions.

Hello *SMers.

Is there a SELECT statement that can show me the (total) size of all active
files?

I have
SELECT SUM(LOGICAL_MB) AS Data_In_MB, SUM(NUM_FILES) \
AS Num_of_files FROM OCCUPANCY
for showing the total size of all files, but how about the size of
active files only? It will give me an idea about how much I will need to
restore in a disaster.
--
Best Regards
Emil S. Hansen - [EMAIL PROTECTED] - ESH14-DK
UNIX Administrator, Berlingske IT - www.bit.dk
PGP: 109375FA/ABEB 1EFA A764 529E 82B5  0943 AD3B 1FC2 1093 75FA

"well, wouldn't these lists be boring if we always agreed with each
other right away :o)" - Kim Schulz on SSLUG.PROG



Re: tsm 5.1???

2002-06-12 Thread Gerald Wichmann

Download the technical guide for TSM 5.1.. it covers both extensively as
well as other new features. Should be available on both www.redbooks.ibm.com
and www.tivoli.com
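
In short, move nodedata is driven something like this (node and pool names
are just placeholders); it relocates a node's data between sequential storage
pools:

move nodedata somenode fromstgpool=tapepool tostgpool=tapepool2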

Regards,

Gerald Wichmann
Senior Systems Development Engineer
Zantaz, Inc.
925.598.3099 (w)

-Original Message-
From: Joseph Dawes [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, June 12, 2002 11:58 AM
To: [EMAIL PROTECTED]
Subject: tsm 5.1???
Importance: High

can anyone shed some light on the move nodedata command in 5.1 and
the simultaneous copy pool write feature?? or at least point me to some
good documentation??


Also is anyone using 5.1 and happy with it???

thanks for the input :)


Joe



Migration Problem Part 2

2002-06-07 Thread Gerald Wichmann
 segments found.
06/07/02   02:09:00  (33) Generating BF Copy Control Context Report:
06/07/02   02:09:00  (33)  No global copy control blocks.
06/07/02   02:09:00
06/07/02   02:09:00  (33) End Context report
06/07/02   02:09:00  (39) Context report
06/07/02   02:09:00  (39) DiskServerThread : ANRD calling thread
06/07/02   02:09:00  (39) Generating TM Context Report: (struct=tmTxnDesc)
  (slots=256)



Regards,

Gerald Wichmann
Senior Systems Development Engineer
Zantaz, Inc.
925.598.3099 (w)



Migration Problem

2002-06-07 Thread Gerald Wichmann
).
User Name:
Date/Time First Data Sent: 06/06/02   23:56:43

  Sess Number: 12,163
 Comm. Method: Tcp/Ip
   Sess State: IdleW
Wait Time: 1.3 H
   Bytes Sent: 3.0 K
  Bytes Recvd: 1.0 K
Sess Type: Node
 Platform: Linux86
  Client Name: CYGNUS-PRI-10.0.204.1
  Media Access Status:
User Name:
Date/Time First Data Sent:

  Sess Number: 12,164
 Comm. Method: Tcp/Ip
   Sess State: MediaW
Wait Time: 1.3 H
   Bytes Sent: 384
  Bytes Recvd: 8.2 M
more...   (ENTER to continue, 'C' to cancel)

Sess Type: Node
 Platform: Linux86
  Client Name: CYGNUS-PRI-10.0.204.1
  Media Access Status: Waiting for mount point in device class
DC.LIB_TAPE_1 (4528 seconds).
User Name:
Date/Time First Data Sent: 06/07/02   00:29:11

  Sess Number: 12,188
 Comm. Method: Tcp/Ip
   Sess State: Run
Wait Time: 0 S
   Bytes Sent: 24.2 K
  Bytes Recvd: 769
Sess Type: Admin
 Platform: SUN SOLARIS
  Client Name: ADMIN
  Media Access Status:
User Name:
Date/Time First Data Sent:


tsm: SERVER1





Regards,

Gerald Wichmann
Senior Systems Development Engineer
Zantaz, Inc.
925.598.3099 (w)



Re: Make Compression Client faster ??

2002-06-07 Thread Gerald Wichmann

Does it start multiple sessions as expected during a backup? As someone else
already pointed out, this is tied to the # of filesystems you have, and
you're only specifying /.. I believe you also need to make sure the node (q
node f=d) has a MAXNUMMP of 4 (the default is 1) to get more than 1 data
session running on a node.
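
The server-side piece would be something like the following (node name
illustrative); double-check the exact parameter spelling with help update
node on your level:

update node somenode maxnummp=4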

Regards,

Gerald Wichmann
Senior Systems Development Engineer
Zantaz, Inc.
925.598.3099 (w)

-Original Message-
From: Grems [mailto:[EMAIL PROTECTED]]
Sent: Friday, June 07, 2002 2:52 AM
To: [EMAIL PROTECTED]
Subject: Re: Make Compression Client faster ??

this is my DSM.SYS

--
SErvername TSM_SERVER
   COMMmethod TCPip
   TCPPort1500
   TCPServeraddress   10.100.24.42

compression on
passwordaccess generate
resourceutilization 4
--

when I run

root: dsmc i / -sub=yes

only one processor is used for compression, why?



- Original Message -
From: Gerald Wichmann [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Thursday, June 06, 2002 7:28 PM
Subject: Re: Make Compression Client faster ??


You sure the process is cpu bound and that it even matters? One would think
the operation is more I/O bound..

Otherwise from redbook using the ba client try adjusting the
resourceutilization paramater:

Resourceutilization
Authorized User
The resourceutilization option regulates the level of resources the
Tivoli Storage Manager server and client can use during processing.
When you request a backup or archive, the client can use more than
one session to the server. The default is to use a maximum of two
sessions; one to query the server and one to send file data. The
client can use only one server session if you specify a
resourceutilization setting of 1. The client is also restricted to a
single session if a user who is not authorized invokes a UNIX client
with passwordaccess=generate specified.
A client can use more than the default number of sessions when
connecting to a server that is Version 3.7 or higher. For example,
resourceutilization=10 permits up to eight sessions with the server.
Multiple sessions may be used for querying the server and sending
file data.
Multiple query sessions are used when multiple file specifications
are used with a backup or archive command. For example, if you
enter:
inc filespaceA filespaceB
and you specified resourceutilization=5, the client may start a
second session to query files on file space B. Whether or not the
second session starts depends on how long it takes to query the
server about files backed up on file space A. The client may also try
to read data from the file system and send it to the server on
multiple sessions.
The following factors can affect the throughput of multiple sessions:
- The server's ability to handle multiple client sessions. Is there
sufficient memory, multiple storage volumes, and CPU cycles to
increase backup throughput?
- The client's ability to drive multiple sessions (sufficient CPU,
memory, etc.).
- The configuration of the client storage subsystem. File systems
that are striped across multiple disks, using either software
striping or RAID-5 can better handle an increase in random read
requests than a single drive file system. Additionally, a single
drive file system may not see performance improvement if it
attempts to handle many random concurrent read requests.
- Sufficient bandwidth in the network to support the increased
traffic.
Potentially undesirable aspects of running multiple sessions include:
- The client could produce multiple accounting records.
- The server may not start enough concurrent sessions. To avoid
this, the server maxsessions parameter must be reviewed and
possibly changed.
- A query node command may not summarize client activity.
Note: The server can also define this option.

Also from another section:

Note: On occasion, the aggregate data transfer rate may be
higher than the network data transfer rate. This is because
the backup-archive client has multithreading capabilities.
When multiple threads run during backup, the data
transfer time represents the sum time from all threads
running. In this case, aggregate data transfer time is
mistakenly reported as higher. However, when running a
single thread, the aggregate data transfer rate should
always be reported as lower than the network data
transfer rate.



Regards,

Gerald Wichmann
Senior Systems Development Engineer
Zantaz, Inc.
925.598.3099 (w)

-Original Message-
From: Grems [mailto:[EMAIL PROTECTED]]
Sent: Thursday, June 06, 2002 9:38 AM
To: [EMAIL PROTECTED]
Subject: Make Compression Client faster ??

HI,

When I use DSMC with client compression on, my AIX box uses only one
processor to compress data.
I have 6 processors; how can I configure DSMC to use all 6 processors to
compress my data faster..
is it possible?

Thanks,



Re: Can't do reclamation

2002-06-06 Thread Gerald Wichmann

Assuming this is the pool you're trying to reclaim, at first glance I notice
it doesn't even have a reclamation pool defined. Of course it can't reclaim
any tapes without a reclamation pool. Note the "Reclaim Storage Pool:"
field, which is blank. Changing the reclamation threshold will do nothing
without a reclamation pool.

Have you created a reclamation storage pool? Do you understand the
reclamation process? I highly recommend you read the admin guide section on
reclamation if you don't.

Regards,

Gerald Wichmann
Senior Systems Development Engineer
Zantaz, Inc.
925.598.3099 (w)

-Original Message-
From: Max Kwong [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, June 05, 2002 7:41 PM
To: [EMAIL PROTECTED]
Subject: Re: Can't do reclamation

Hi all,

Thanks for all the replies. Here are the details of the storage pool that
can't perform reclamation.

Volume Name  Storage Pool Name  Device Class Name  Estimated Capacity (MB)  Pct Util  Volume Status
FP0018       FILE_PTPOOL        3590CLASS2         21,271.1                 0.1       Full
FP0024       FILE_PTPOOL        3590CLASS2         21,177.2                 0.0       Full
FP0025       FILE_PTPOOL        3590CLASS2         22,015.4                 0.0       Full
FP0026       FILE_PTPOOL        3590CLASS2         21,700.4                 0.0       Full
FP0027       FILE_PTPOOL        3590CLASS2         21,324.9                 4.3       Full
FP0028       FILE_PTPOOL        3590CLASS2         21,091.5                 0.0       Full
FP0029       FILE_PTPOOL        3590CLASS2         21,160.0                 0.0       Full
FP0030       FILE_PTPOOL        3590CLASS2         21,141.0                 0.0       Full
FP0031       FILE_PTPOOL        3590CLASS2         21,185.0                 0.0       Full

The following is the result of q stgp file_ptpool f=d

 Storage Pool Name: FILE_PTPOOL
   Storage Pool Type: Primary
   Device Class Name: 3590CLASS2
 Estimated Capacity (MB): 2,051,004,907,651.1
Pct Util: 0.0
Pct Migr: 0.0
 Pct Logical: 100.0
High Mig Pct: 99
 Low Mig Pct: 99
 Migration Delay: 0
  Migration Continue: Yes
 Migration Processes:
   Next Storage Pool:
Reclaim Storage Pool:
  Maximum Size Threshold: No Limit
  Access: Read/Write
 Description:
   Overflow Location:
   Cache Migrated Files?:
  Collocate?: No
   Reclamation Threshold: 80
 Maximum Scratch Volumes Allowed: 99,999,999
   Delay Period for Volume Reuse: 0 Day(s)
  Migration in Progress?: No
Amount Migrated (MB): 0.00
Elapsed Migration Time (seconds): 0
Reclamation in Progress?: No
 Volume Being Migrated/Reclaimed:
  Last Update by (administrator): ADMIN
   Last Update Date/Time: 04/29/02 11:44:41


I've already tried to update the reclamation threshold to 50%, but still no
reclaim process can be queried.

Max







Gerald Wichmann [EMAIL PROTECTED] on 06/06/2002 12:37:17 AM

Please respond to ADSM: Dist Stor Manager [EMAIL PROTECTED]

To:   [EMAIL PROTECTED]
cc:(bcc: Max LH KWONG/LCSD/HKSARG)

Subject:  Re: Can't do reclamation



You'll need to provide more information than that.. be a lot more specific.
Specifically why can't you perform reclamation? Did you configure a
reclamation pool? Is it a problem in the configuration of the reclamation
pool? I recommend posting some details/output about the storage pool you're
trying to reclaim (q stg f=d) and the reclamation pool you have setup to do
the reclamation. Also post output on any errors you're getting as to why you
can't do it. The more info you provide the more likely someone can help you.

Regards,

Gerald Wichmann
Senior Systems Development Engineer
Zantaz, Inc.
925.598.3099 (w)

-Original Message-
From: Max Kwong [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, June 04, 2002 9:47 PM
To: [EMAIL PROTECTED]
Subject: Can't do reclamation

Hi all,

I have a storage pool that can't perform reclamation. I can only use the
move data command to manually reclaim the tapes. How can I solve this
problem?

Max



Re: Make Compression Client faster ??

2002-06-06 Thread Gerald Wichmann

You sure the process is cpu bound and that it even matters? One would think
the operation is more I/O bound..

Otherwise from redbook using the ba client try adjusting the
resourceutilization paramater:

Resourceutilization
Authorized User
The resourceutilization option regulates the level of resources the
Tivoli Storage Manager server and client can use during processing.
When you request a backup or archive, the client can use more than
one session to the server. The default is to use a maximum of two
sessions; one to query the server and one to send file data. The
client can use only one server session if you specify a
resourceutilization setting of 1. The client is also restricted to a
single session if a user who is not authorized invokes a UNIX client
with passwordaccess=generate specified.
A client can use more than the default number of sessions when
connecting to a server that is Version 3.7 or higher. For example,
resourceutilization=10 permits up to eight sessions with the server.
Multiple sessions may be used for querying the server and sending
file data.
Multiple query sessions are used when multiple file specifications
are used with a backup or archive command. For example, if you
enter:
inc filespaceA filespaceB
and you specified resourceutilization=5, the client may start a
second session to query files on file space B. Whether or not the
second session starts depends on how long it takes to query the
server about files backed up on file space A. The client may also try
to read data from the file system and send it to the server on
multiple sessions.
The following factors can affect the throughput of multiple sessions:
- The server's ability to handle multiple client sessions. Is there
sufficient memory, multiple storage volumes, and CPU cycles to
increase backup throughput?
- The client's ability to drive multiple sessions (sufficient CPU,
memory, etc.).
- The configuration of the client storage subsystem. File systems
that are striped across multiple disks, using either software
striping or RAID-5 can better handle an increase in random read
requests than a single drive file system. Additionally, a single
drive file system may not see performance improvement if it
attempts to handle many random concurrent read requests.
- Sufficient bandwidth in the network to support the increased
traffic.
Potentially undesirable aspects of running multiple sessions include:
- The client could produce multiple accounting records.
- The server may not start enough concurrent sessions. To avoid
this, the server maxsessions parameter must be reviewed and
possibly changed.
- A query node command may not summarize client activity.
Note: The server can also define this option.

Also from another section:

Note: On occasion, the aggregate data transfer rate may be
higher than the network data transfer rate. This is because
the backup-archive client has multithreading capabilities.
When multiple threads run during backup, the data
transfer time represents the sum time from all threads
running. In this case, aggregate data transfer time is
mistakenly reported as higher. However, when running a
single thread, the aggregate data transfer rate should
always be reported as lower than the network data
transfer rate.



Regards,

Gerald Wichmann
Senior Systems Development Engineer
Zantaz, Inc.
925.598.3099 (w)

-Original Message-
From: Grems [mailto:[EMAIL PROTECTED]]
Sent: Thursday, June 06, 2002 9:38 AM
To: [EMAIL PROTECTED]
Subject: Make Compression Client faster ??

HI,

When I use DSMC with client compression on, my AIX box uses only one
processor to compress data.
I have 6 processors; how can I configure DSMC to use all 6 processors to
compress my data faster..
is it possible?

Thanks,



Re: Can't do reclamation

2002-06-05 Thread Gerald Wichmann

You'll need to provide more information than that.. be a lot more specific.
Specifically why can't you perform reclamation? Did you configure a
reclamation pool? Is it a problem in the configuration of the reclamation
pool? I recommend posting some details/output about the storage pool you're
trying to reclaim (q stg f=d) and the reclamation pool you have setup to do
the reclamation. Also post output on any errors you're getting as to why you
can't do it. The more info you provide the more likely someone can help you.

Regards,

Gerald Wichmann
Senior Systems Development Engineer
Zantaz, Inc.
925.598.3099 (w)

-Original Message-
From: Max Kwong [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, June 04, 2002 9:47 PM
To: [EMAIL PROTECTED]
Subject: Can't do reclamation

Hi all,

I have a storage pool that can't perform reclamation. I can only use the
move data command to manually reclaim the tapes. How can I solve this
problem?

Max



Re: Scheduler Question

2002-06-04 Thread Gerald Wichmann

Nope.. it's been a longtime annoyance of mine personally.. because you have to
set up 3 schedules for DB backups..

e.g.

schedule 1 - full backup on Saturday
schedule 2 - incremental backup on Sunday
schedule 3 - incremental backup on weekdays

Or in your case, 2 schedules

Schedule 1 - backup weekdays
Schedule 2 - backup saturday

Seems to me it'd be an easy thing for Tivoli to improve..

Another thing some people do is have a cronjob or admin schedule that does
an update schedule command to change the parameter at the appropriate time.
But it amounts to the same annoyance.
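
The two-schedule workaround itself is simple enough (domain, schedule, and
node names plus the times are only examples):

define schedule standard weekday_inc action=incremental starttime=21:00 dayofweek=weekday
define schedule standard saturday_inc action=incremental starttime=21:00 dayofweek=saturday
define association standard weekday_inc node1
define association standard saturday_inc node1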

Regards,

Gerald Wichmann
Senior Systems Development Engineer
Zantaz, Inc.
925.598.3099 (w)

-Original Message-
From: Luciano Ariceto [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, June 04, 2002 6:03 AM
To: [EMAIL PROTECTED]
Subject: Scheduler Question

Hi

I would like to know if it is possible to set up a schedule to run from
Monday to Saturday. As far as I know, weekday is Monday to Friday, and
weekend is Saturday and Sunday. Is this possible?


Thanks a lot

Luciano



Re: Journaling

2002-06-03 Thread Gerald Wichmann

Great for systems with lots of files.. I ran some tests backing up a Win2k
server with a million files before and after the journaling option was
enabled. Without it enabled, it took an hour just to scan through the
directory structure even if nothing needed backing up. With it enabled, the
hour-long backup took less than 5 minutes.. Haven't run into any problems
using it myself, but I can't say I've used it extensively (just on a server
with lots of files).
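
For anyone wanting to try it: the journal service on the Windows client is
driven by a tsmjbbd.ini file. Going from memory here, so treat the stanza and
option names as approximate and double-check them against the client README
or manual for your level; the file looks something like

   [JournalSettings]
   Errorlog=c:\tsm\jbberror.log
   Journaldir=c:\tsmjournal

   [JournaledFileSystemSettings]
   JournaledFileSystems=c: d:

The drive letters and paths above are just placeholders.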

Regards,

Gerald Wichmann
Senior Systems Development Engineer
Zantaz, Inc.
925.598.3099 (w)

-Original Message-
From: Adams, Matt (US - Hermitage) [mailto:[EMAIL PROTECTED]]
Sent: Monday, June 03, 2002 7:43 AM
To: [EMAIL PROTECTED]
Subject: Journaling

Can anyone share their thoughts, opinions, war stories on Journaling for
Windows NT4.0 and W2K clients??

Thanks,

Matt Adams
Tivoli Storage Manager Team
Hermitage Site Tech
Deloitte  Touche USA LLP
615.882.6861

- This message (including any attachments) contains confidential information
intended for a specific individual and purpose, and is protected by law.  -
If you are not the intended recipient, you should delete this message and
are hereby notified that any disclosure, copying, or distribution of this
message, or the taking of any action based on it, is strictly prohibited.



DB2 backup problem..

2002-05-31 Thread Gerald Wichmann

I can't find this return code anywhere.. anyone know what this means?
DB2 7.1 EE, TSM 4.2.2 server/client..  both boxes are solaris 2.8


db2 => backup db ds50 online use tsm
SQL2062N An error occurred while accessing media
"/opt/db2udb/sqllib/adsm/libadsm.a". Reason code: "2032".

db2 => ? sql2062n

 SQL2062N An error occurred while accessing media "media".
  Reason code: "reason-code"

Explanation:  An unexpected error occurred while accessing a
device, file, TSM or the vendor shared library during the
processing of a database utility.  The following is a list of
reason codes:


1 An attempt to initialize a device, file, TSM or the vendor
shared library failed.

2 An attempt to terminate a device, file, TSM or the vendor
shared library failed.

other If you are using TSM, this is an error code returned by
TSM.

The utility stops processing.

User Response:  Ensure the device, file, TSM or vendor shared
library used by the utility is available and resubmit the utility
command.  If the command is still unsuccessful, contact your
technical service representative.



$ ls -l /opt/db2udb/sqllib/adsm/libadsm.a
-r--r--r--   1 bin      bin        93664 Apr 17  2001 /opt/db2udb/sqllib/adsm/libadsm.a



Regards,

Gerald Wichmann
Senior Systems Development Engineer
Zantaz, Inc.
925.598.3099 (w)



Re: Compression

2002-05-31 Thread Gerald Wichmann

If a cartridge is FULL, does the estimated capacity include files that have
been expired on that cartridge? E.g. assuming a tape takes time to fill up,
it's possible some of the files on that tape may expire before the tape
reaches FULL status. It's also unlikely the space has yet been reclaimed.
Once the tape reaches FULL status does the estimated capacity include those
files that have expired?

Regards,

Gerald Wichmann
Senior Systems Development Engineer
Zantaz, Inc.
925.598.3099 (w)

-Original Message-
From: Bill Boyer [mailto:[EMAIL PROTECTED]]
Sent: Friday, May 31, 2002 1:21 PM
To: [EMAIL PROTECTED]
Subject: Re: Compression

Divide the amount of data on your FULL tape volumes by the native capacity.
Here's a sample SQL statement. You'll need to filter it for your tape
storagepools and only FULL volumes.

select volume_name,cast(est_capacity_mb/<native capacity in MB> as decimal(3,1)) from volumes

Use these values for <native capacity in MB>:

3590B   10240 (10GB native)
3590E   20480 (20GB native)

Extended length cartridges double the value.

Bill Boyer
DSS, Inc.

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of
Mahesh Tailor
Sent: Friday, May 31, 2002 3:01 PM
To: [EMAIL PROTECTED]
Subject: Compression


Hello!

Is there any way to find out how much compression I am getting on an IBM 3494
library?

Thanks.

Mahesh



Re: DB2 backup problem..

2002-05-31 Thread Gerald Wichmann

Nope.. my dsm.sys is:

SErvername  TSM
   COMMmethod TCPip
   TCPPort            1500
   TCPServeraddress   dev36
   passwordaccess generate
   SCHEDMODe  PROMPT
   maxcmdretries  10
   Nodename   DB2
   schedlogname   /opt/tivoli/tsm/client/ba/bin/dsmsched.log
   schedlogretention  3
   errorlogname   /opt/tivoli/tsm/client/ba/bin/dsmerror.log
   errorlogretention  3

I notice in TSM messages there's this, which is similar to what you mentioned:

2032 E DSM_RC_NO_OWNER_REQD
Explanation: PASSWORDACCESS=generate establishes a session with the current
login user as the owner.
System Action: The system returns to the calling procedure.
User Response: When using PASSWORDACCESS=generate, set clientOwnerNameP to
NULL.

Question is, why is this happening? I don't have an owner specified
anywhere..

Regards,

Gerald Wichmann
Senior Systems Development Engineer
Zantaz, Inc.
925.598.3099 (w)

-Original Message-
From: Dave Canan [mailto:[EMAIL PROTECTED]]
Sent: Friday, May 31, 2002 2:21 PM
To: [EMAIL PROTECTED]
Subject: Re: DB2 backup problem..

These return codes are documented in the dsmrc.h file, located in the
tsm\api\include directory. The return code here means:

 #define DSM_RC_NO_OWNER_REQD   2032 /* owner not allowed.
Allow default */

Do you have an owner specified in the dsm.sys file stanza? You need to
remove it.



At 01:27 PM 5/31/2002 -0700, you wrote:
I can't find this return code anywhere.. anyone know what this means?
DB2 7.1 EE, TSM 4.2.2 server/client..  both boxes are solaris 2.8


db2 = backup db ds50 online use tsm
SQL2062N An error occurred while accessing media
/opt/db2udb/sqllib/adsm/libadsm.a. Reason code: 2032.

db2 = ? sql2062n

  SQL2062N An error occurred while accessing media media.
   Reason code: reason-code

Explanation:  An unexpected error occurred while accessing a
device, file, TSM or the vendor shared library during the
processing of a database utility.  The following is a list of
reason codes:


1 An attempt to initialize a device, file, TSM or the vendor
shared library failed.

2 An attempt to terminate a device, file, TSM or the vendor
shared library failed.

other If you are using TSM, this is an error code returned by
TSM.

The utility stops processing.

User Response:  Ensure the device, file, TSM or vendor shared
library used by the utility is available and resubmit the utility
command.  If the command is still unsuccessful, contact your
technical service representative.



$ ls -l /opt/db2udb/sqllib/adsm/libadsm.a
-r-r-r-1 bin bin 93664 Apr 17 2001 /opt/db2udb/sqllib/adsm/libadsm.a



Regards,

Gerald Wichmann
Senior Systems Development Engineer
Zantaz, Inc.
925.598.3099 (w)

Money is not the root of all evil - full backups are.



DB2 backup problem

2002-05-31 Thread Gerald Wichmann

More info:

In USEREXIT.ERR:



Time of Error: Thu May 30 17:35:48 2002

Parameter Count:  9
Parameters Passed:
ADSM password: fastdb
Database name: DS50
Logfile name:  S024.LOG
Logfile path:  /db2/db201/DS50/db2udb/NODE/SQL1/SQLOGDIR/
Node number:   NODE
Operating system:  Solaris
Release:   SQL07020
Request:   ARCHIVE
Audit Log File:/opt/db2udb/sqllib/db2dump/ARCHIVE.LOG
System Call Parms:
Media Type:ADSM
User Exit RC:  16

 Error isolation: dsmEndTxn() returned 2302 Reason 11

According to TSM Messages 2302 means:

2302 I DSM_RC_CHECK_REASON_CODE
Explanation: After a dsmEndTxn call, the transaction is aborted by either
the server or client with a
DSM_VOTE_ABORT and the reason is returned.
System Action: The system returns to the calling procedure.
User Response: Check the reason field for the code which explains why the
transaction has been aborted.

Ok not very useful info.. Reason 11 means what exactly?

Regards,

Gerald Wichmann
Senior Systems Development Engineer
Zantaz, Inc.
925.598.3099 (w)



Re: DB2 backup problem..

2002-05-31 Thread Gerald Wichmann

Well, that was painful, but I found the problem after digging around on my
own.. The "Backing up DB2 using Tivoli Storage Manager" redbook lacks a step
in the section "Chapter 6: Backing up DB2 UDB on the Sun Solaris Platform".
I closely followed the steps there and did it on two different DB2
installations with the same result.. Eventually, after trying to troubleshoot
and running out of ideas, I read "Appendix A: Quick start/checklist for
configuration", which has a Sun Solaris section. After following the steps in
there, I found a section which was never mentioned in the main installation
section in chapter 6.. mainly this at the end of the quick start:

- Get db cfg for DBNAME.
- Update db cfg for DBNAME using TSM_PASSWORD NULL. (Similar
syntax for other Tivoli Storage Manager parameters. NULL causes the
parameter to be reset to nothing.)


db2 => get db cfg for ds50

relevant portion of output:

 TSM management class      (TSM_MGMTCLASS) = DB2_MGMTCLASS
 TSM node name              (TSM_NODENAME) = ds-s0-99-1
 TSM owner                     (TSM_OWNER) = admin
 TSM password               (TSM_PASSWORD) = *

db2 => update db cfg for ds50 using TSM_PASSWORD NULL
DB20000I  The UPDATE DATABASE CONFIGURATION command completed successfully.
DB21026I  For most configuration parameters, all applications must disconnect
from this database before the changes become effective.

db2 => update db cfg for ds50 using TSM_MGMTCLASS NULL
DB20000I  The UPDATE DATABASE CONFIGURATION command completed successfully.
DB21026I  For most configuration parameters, all applications must disconnect
from this database before the changes become effective.

db2 => update db cfg for ds50 using TSM_OWNER NULL
DB20000I  The UPDATE DATABASE CONFIGURATION command completed successfully.
DB21026I  For most configuration parameters, all applications must disconnect
from this database before the changes become effective.

db2 => update db cfg for ds50 using TSM_NODENAME NULL
DB20000I  The UPDATE DATABASE CONFIGURATION command completed successfully.
DB21026I  For most configuration parameters, all applications must disconnect
from this database before the changes become effective.

db2 => backup db ds50 use tsm

Backup successful. The timestamp for this backup image is : 20020531164955

db2 =>

That's why it was griping about RC 2032 and complaining about ownership.. it
was a parameter in DB2 all along..

Regards,

Gerald Wichmann
Senior Systems Development Engineer
Zantaz, Inc.
925.598.3099 (w)

-Original Message-
From: William F. Colwell [mailto:[EMAIL PROTECTED]]
Sent: Friday, May 31, 2002 2:32 PM
To: [EMAIL PROTECTED]
Subject: Re: DB2 backup problem..

Gerald,

The return codes for tsm products that use the api are in the
api folder, include folder, dsmrc.h file.  TDPO and db2/udb use the
api.  In this case rc=2032 means --

#define DSM_RC_NO_OWNER_REQD   2032 /* owner not allowed. Allow default
*/

Exactly what to change to fix this?  I have no idea.

Hope this helps,

Bill

At 01:27 PM 5/31/2002 -0700, you wrote:
I can't find this return code anywhere.. anyone know what this means?
DB2 7.1 EE, TSM 4.2.2 server/client..  both boxes are solaris 2.8


db2 = backup db ds50 online use tsm
SQL2062N An error occurred while accessing media
/opt/db2udb/sqllib/adsm/libadsm.a. Reason code: 2032.

db2 = ? sql2062n

 SQL2062N An error occurred while accessing media media.
  Reason code: reason-code

Explanation:  An unexpected error occurred while accessing a
device, file, TSM or the vendor shared library during the
processing of a database utility.  The following is a list of
reason codes:


1 An attempt to initialize a device, file, TSM or the vendor
shared library failed.

2 An attempt to terminate a device, file, TSM or the vendor
shared library failed.

other If you are using TSM, this is an error code returned by
TSM.

The utility stops processing.

User Response:  Ensure the device, file, TSM or vendor shared
library used by the utility is available and resubmit the utility
command.  If the command is still unsuccessful, contact your
technical service representative.



$ ls -l /opt/db2udb/sqllib/adsm/libadsm.a
-r-r-r-1 bin bin 93664 Apr 17 2001 /opt/db2udb/sqllib/adsm/libadsm.a



Regards,

Gerald Wichmann
Senior Systems Development Engineer
Zantaz, Inc.
925.598.3099 (w)

--
Bill Colwell
C. S. Draper Lab
Cambridge Ma.



select formatting

2002-05-30 Thread Gerald Wichmann

The below output is from a simple select statement. I hate to ask this again
but it's been a while since I've done it and I forget how to format the
columns. What I need to do is extract the node names from the backup server
so that I can then perform operations on them. How do I increase the width
of the column to get the desired effect? I suppose set sqldisplaymode would
do it but I'm actually running the select query from a perl script via a
dsmadmc command.

i.e.:

$RESULT = `/opt/Tivoli/tsm/client/admin/bin/dsmadmc -id=admin -pa=admin
"select node_name from nodes"`;

returns the below when what I want is to store the node names in an array

Obtaining list of nodes on backup server..
Tivoli Storage Manager
Command Line Administrative Interface - Version 4, Release 2, Level 2.0
(C) Copyright IBM Corporation, 1990, 2001, All Rights Reserved.

Session established with server SERVER1: Solaris 7/8
  Server Version 4, Release 2, Level 2.0
  Server date/time: 05/29/02   23:05:26  Last access: 05/29/02   22:54:56

ANS8000I Server command: 'select node_name from nodes'

NODE_NAME
--
ORION-41.8-10.0.1-
 04.1
ORION-41.8-10.0.1-
 72.1
ORION-41.8-10.0.1-
 72.3
ORION-41.8-10.0.1-
 72.4
ORION-41.8-10.0.1-
 72.5
ORION-41.8-10.0.1-
 72.7
ORION-41.8-10.0.2-
 04.1
ORION-41.8-10.0.2-
 04.2
ORION-41.8-10.0.2-
 04.3
ORION-41.8-10.0.2-
 04.4
ORION-41.8-10.0.2-
 04.5
ORION-41.8-10.0.2-
 04.7

ANS8002I Highest return code was 0.

Regards,

Gerald Wichmann
Senior Systems Development Engineer
Zantaz, Inc.
925.598.3099 (w)
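
A couple of things tend to help in this situation, assuming your dsmadmc
level supports them (worth verifying before relying on it): "set
sqldisplaymode wide" on the server stops the column wrapping, and running the
admin client with -commadelimited (or -tabdelimited) returns one record per
line that a script can split cleanly, along the lines of

   $RESULT = `dsmadmc -id=admin -pa=admin -commadelimited "select node_name from nodes"`;

The id/password above are just the ones from the post; parsing $RESULT is
left to the script.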



Re: allocating disk volumes on RAID5 array

2002-05-28 Thread Gerald Wichmann

Yes I'm well aware of the different pro's and con's to using various levels
of RAID vs non-RAID. In my application protection is paramount and mirroring
is simply too wasteful to use. I've used RAID5 repeatedly in the past in
regards to TSM and have always been very happy with the results. So the
issue here isn't really what to use or not but rather whether there's any
pro's or con's on the way you go about creating volumes on a RAID5 array.
E.g. lots of smaller volumes or fewer large volumes? Having lots of RAID5
arrays as was also suggested isn't really practical because these days it's
rare you don't have fairly large disks (18 or 36GB each) so in that example
of 100GB you're really only talking 1 RAID5 array of 4-5 disks. 

Bottom line is I was just speculating out loud perhaps on whether there were
any pro's or con's to how many volumes and what size one would make the
volumes on a RAID5 array. Say you had a 100GB RAID5 array. Would you create
10 10GB volumes or 2 50GB volumes? Does it matter since it's all just going
into a big array?

Regards,

Gerald Wichmann
Senior Systems Development Engineer
Zantaz, Inc.
925.598.3099 (w)

-Original Message-
From: Gianluca Perilli [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, May 28, 2002 1:35 PM
To: [EMAIL PROTECTED]
Subject: Re: allocating disk volumes on RAID5 array

Hi Gerald,

I think you have to consider that if you use RAID5 logical drives, you have
to calculate and write a parity every time you write any data on the disk:
so if you have a write intensive application, RAID5 is not so efficient as
other RAID protections (1,10, etc); if you have instead read-intensive
applications RAID 5 is a good choice because it gives you the possibility
to use more physical drives concurrently.
Furthermore RAID 5 is the most efficient protection regarding the optimal
usage of the available phisical capacity, and it is more and more efficient
as the number of physical drives in the array increase; but at the same
time as the number of physical drives increase, the performance goes down
(because you have to calculate the parity on a larger number of blocks):
probably the best compromise is a number of 7/8 disk drives/array.
I hope this helps.



Cordiali saluti / Best regards

Gianluca Perilli



Gianluca Perilli
Tivoli Customer Support
Via Sciangai n° 53 - 00144 Roma (Italy)
Tel. 06/5966 - 4581
Cell. 335/7840985


 

  Gerald Wichmann <gwichman@ZANTAZ.COM>
  Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>
  28-05-02 21.14
  Please respond to "ADSM: Dist Stor Manager"
  To: [EMAIL PROTECTED]
  cc:
  Subject: allocating disk volumes on RAID5 array




Since a RAID-5 array shows up as one big filesystem, what's the best
strategy for determining how many and of what size disk pool volumes to
create for your primary disk storage pool? For the most part I don't think
it really matters unlike allocating volumes on individual disks but perhaps
I'm not considering something.

Thanks..



Re: allocating disk volumes on RAID5 array

2002-05-28 Thread Gerald Wichmann

That's more along the lines of what kind of info I'm digging for - but
doesn't quite address how one goes about coming up with a number or size.
There may not be a right or single answer however you must admit there
is a generalized right answer. Consider for example that while RAID-5
arrays can vary in the # of disks assigned and size of them, there is a
generalized rule of thumb as Cordiali pointed out. Too few or too many disks
in your RAID-5 array and you can have performance implications. There's a
sort of sweet spot for creating RAID-5 arrays and keeping that in mind there
should also be a similar sweet spot in how many volumes one might want to
assign. All in all though I still wonder if it really matters a whole lot..
Whether you have 1 write going on or 10, you're still striping across the
array and I question whether you'd really see much difference one way or
another. Might be an interesting thing to try various variants of..

Regards,

Gerald Wichmann
Senior Systems Development Engineer
Zantaz, Inc.
925.598.3099 (w)

-Original Message-
From: Prather, Wanda [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, May 28, 2002 2:38 PM
To: [EMAIL PROTECTED]
Subject: Re: allocating disk volumes on RAID5 array

Well yes, it does matter.
Trouble is, there is no right answer.
It's a performance thing.

Assuming you have multiple clients backing up concurrently to your disk
pool, TSM will start as many I/Os as there are clients sending data, up to
the number of  TSM volumes in your disk pool.

If you have more volumes, then you get more I/O's in flight concurrently.
That's a good thing and will improve performance, until you get too many
in flight, then the effect of yanking the heads around degrades performance.

It's even harder to figure out what is optimal in a RAID situation, since
you don't have a 1-to-1 correspondendence between your TSM volumes and
physical disks.  And most RAID setups have some cache that acts as a buffer,
and that helps improve performance but further disassociates the number of
concurrent writes from the number of physical disks.

So think about it this way:  How many concurrent WRITES do you want to occur
in that RAID pool?  Pick a number, and create that many TSM volumes.
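
To make that concrete: say you settle on six volumes of 15 GB each on the
array, the sequence is roughly (pool name, paths and sizes below are only an
example, and the dsmfmt flags are worth double-checking against the server
documentation for your level)

   dsmfmt -m -data /tsmstg/vol01.dsm 15360
   define volume diskpool /tsmstg/vol01.dsm

repeated for vol02 through vol06.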



-Original Message-
From: Gerald Wichmann [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, May 28, 2002 5:01 PM
To: [EMAIL PROTECTED]
Subject: Re: allocating disk volumes on RAID5 array


Yes I'm well aware of the different pro's and con's to using various levels
of RAID vs non-RAID. In my application protection is paramount and mirroring
is simply too wasteful to use. I've used RAID5 repeatedly in the past in
regards to TSM and have always been very happy with the results. So the
issue here isn't really what to use or not but rather whether there's any
pro's or con's on the way you go about creating volumes on a RAID5 array.
E.g. lots of smaller volumes or fewer large volumes? Having lots of RAID5
arrays as was also suggested isn't really practical because these days it's
rare you don't have fairly large disks (18 or 36GB each) so in that example
of 100GB you're really only talking 1 RAID5 array of 4-5 disks.

Bottom line is I was just speculating out loud perhaps on whether there were
any pro's or con's to how many volumes and what size one would make the
volumes on a RAID5 array. Say you had a 100GB RAID5 array. Would you create
10 10GB volumes or 2 50GB volumes? Does it matter since it's all just going
into a big array?

Regards,

Gerald Wichmann
Senior Systems Development Engineer
Zantaz, Inc.
925.598.3099 (w)

-Original Message-
From: Gianluca Perilli [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, May 28, 2002 1:35 PM
To: [EMAIL PROTECTED]
Subject: Re: allocating disk volumes on RAID5 array

Hi Gerald,

I think you have to consider that if you use RAID5 logical drives, you have
to calculate and write a parity every time you write any data on the disk:
so if you have a write intensive application, RAID5 is not so efficient as
other RAID protections (1,10, etc); if you have instead read-intensive
applications RAID 5 is a good choice because it gives you the possibility
to use more physical drives concurrently.
Furthermore RAID 5 is the most efficient protection regarding the optimal
usage of the available phisical capacity, and it is more and more efficient
as the number of physical drives in the array increase; but at the same
time as the number of physical drives increase, the performance goes down
(because you have to calculate the parity on a larger number of blocks):
probably the best compromise is a number of 7/8 disk drives/array.
I hope this helps.



Cordiali saluti / Best regards

Gianluca Perilli



Gianluca Perilli
Tivoli Customer Support
Via Sciangai n° 53 - 00144 Roma (Italy)
Tel. 06/5966 - 4581
Cell. 335/7840985




  Gerald Wichmann <gwichman@ZANTAZ.COM>
  To: [EMAIL PROTECTED]
  cc:

Re: co question

2002-05-23 Thread Gerald Wichmann

Perhaps I don't understand your requirements. If in your example, a file is
backed up on the 1st of the month. Then the file is updated and backed up on
the 1st of the 2nd month, the 1st copy will become an inactive copy and the
newly backed up file will become the active copy. Due to the RETE parameter,
the inactive version will be kept an additional 30 days and then expired
since it's now been retained for longer than the RETE parameter. So actually
in your scenario an individual file will have technically been kept for 61
days - first for 31 days because it never changed, then kept 30 days once it
became inactive because it did change (the inactive copy.. active always
sticks around because it's the most recent version). However the settings I
provided would allow the user to be able to ALWAYS go back 30 days
guaranteed, and no file would really be older than 30 days.

You couldn't have files that are 30 years old as you suggest..
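
For reference, those four settings map onto the backup copy group like this
(the domain/policy set/management class names here are placeholders; adjust
them to your own policy structure):

   update copygroup mydomain mypolicyset mymgmtclass standard type=backup verexists=nolimit verdeleted=nolimit retextra=30 retonly=30
   validate policyset mydomain mypolicyset
   activate policyset mydomain mypolicyset

VDE/VDD/REV/ROV in the earlier note are just the usual shorthand for
VERExists, VERDeleted, RETExtra and RETOnly.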

Regards,

Gerald Wichmann
Senior Systems Development Engineer
Zantaz, Inc.
925.598.3099 (w)

-Original Message-
From: Wholey, Joseph (TGA\MLOL) [mailto:[EMAIL PROTECTED]]
Sent: Thursday, May 23, 2002 9:37 AM
To: [EMAIL PROTECTED]
Subject: Re: co question

Yes, but in theory, you could have files that are 30 years old.  Or for that
matter, 300, or 3000 or nn years old.  e.g. if the user creates a file
on the first of the month, and updates on the
first of every month, on month 31, you'll have copies that are 30 months
old.  User doesn't want that.

-Original Message-
From: Gerald Wichmann [mailto:[EMAIL PROTECTED]]
Sent: Thursday, May 23, 2002 12:26 PM
To: [EMAIL PROTECTED]
Subject: Re: co question


I would do:

VDE nolimit
VDD nolimit
REV 30
ROV 30

That way you guarantee anything he backs up, be it via scheduled backup or
adhoc backup, will always be retained for 30 days..

Regards,

Gerald Wichmann
Senior Systems Development Engineer
Zantaz, Inc.
925.598.3099 (w)

-Original Message-
From: Wholey, Joseph (TGA\MLOL) [mailto:[EMAIL PROTECTED]]
Sent: Thursday, May 23, 2002 8:29 AM
To: [EMAIL PROTECTED]
Subject: co question

User wants his data for 30 days, no more, no less.  Don't want to Archive
due to capacity issues... tape, disk, network...

Will this due?

VDE 30
VDD 30
REV 30
ROV 30



Re: problem with tape library on solaris

2002-05-22 Thread Gerald Wichmann

Yep.. I should've posted that output as well. I've confirmed that other
processes also cause the same error (e.g. migrations). Guess I'll call IBM
this morning and see what they have to say. I probably forgot something
somewhere..

tsm: SERVER1> q devc f=d

 Device Class Name: DC.TAPE.LIB_TAPE_1
Device Access Strategy: Sequential
Storage Pool Count: 1
   Device Type: LTO
Format: ULTRIUMC
 Est/Max Capacity (MB): 102,400.0
   Mount Limit: DRIVES
  Mount Wait (min): 60
 Mount Retention (min): 60
  Label Prefix: ADSM
   Library: LIB_TAPE_1
 Directory:
   Server Name:
  Retry Period:
Retry Interval:
Shared:
Last Update by (administrator): ADMIN
 Last Update Date/Time: 05/12/02   01:14:47

Regards,

Gerald Wichmann
Senior Systems Development Engineer
Zantaz, Inc.
925.598.3099 (w)

-Original Message-
From: Samiran Das [mailto:[EMAIL PROTECTED]]
Sent: Monday, May 20, 2002 10:26 PM
To: [EMAIL PROTECTED]
Subject: Re: problem with tape library on solaris

did you check output of q devc f=d? What is the mount limit and do you
have sufficient mount point?

Samiran Das




Gerald Wichmann <gwichman@ZANTAZ.COM>
Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>
05/20/2002 09:47 PM
Please respond to "ADSM: Dist Stor Manager"
To: [EMAIL PROTECTED]
Subject: problem with tape library on solaris





Solaris 8, Sun E250, ATL500 tape library (2 drives), TSM 4.2.2

First time getting a tape library working on solaris so I could be doing
something wrong.. this is a test system. anyways it all seemed to work
until
I tried using the library:

05/21/02   17:01:26ANR2017I Administrator ADMIN issued command: BACKUP
DB
dev=dc.tape.lib_tape_1 type=f

05/21/02   17:01:27ANR0984I Process 2 for DATABASE BACKUP started in
the

BACKGROUND at 05:01:27 PM.

05/21/02   17:01:27ANR2280I Full database backup started as process 2.

05/21/02   17:01:47ANR4571E Database backup/restore terminated -
insufficient
number of mount points available for removable
media.
05/21/02   17:01:47ANR0985I Process 2 for DATABASE BACKUP running in
the

BACKGROUND completed with completion state FAILURE
at
05:01:47 PM.

tsm: SERVER1> q libr

  Library Name: LIB_TAPE_1
  Library Type: SCSI
Device: /dev/rmt/7lb
  Private Category:
  Scratch Category:
  External Manager:
Shared: No
   LanFree:
ObeyMountRetention:

tsm: SERVER1> q dr f=d

Library Name: LIB_TAPE_1
  Drive Name: LIB_DRIVE_1
 Device Type: DLT
 On-Line: Yes
  Device: /dev/rmt/0mt
 Element: 16
Allocated to:
  Last Update by (administrator): ADMIN
   Last Update Date/Time: 05/21/02   17:00:43
Cleaning Frequency (Gigabytes/ASNEEDED/NONE): NONE

Library Name: LIB_TAPE_1
  Drive Name: LIB_DRIVE_2
 Device Type: DLT
 On-Line: Yes
  Device: /dev/rmt/6mt
 Element: 17
Allocated to:
  Last Update by (administrator): ADMIN
   Last Update Date/Time: 05/21/02   17:00:58
Cleaning Frequency (Gigabytes/ASNEEDED/NONE): NONE

bash-2.05# ls -l /dev/rmt
total 36
lrwxrwxrwx   1 root root  44 May 12 01:13 0mt -
../../devices/pci@1f,4000/scsi@2,1/mt@1,0:mt
lrwxrwxrwx   1 root root  45 May 12 01:13 0mtn -
../../devices/pci@1f,4000/scsi@2,1/mt@1,0:mtn
lrwxrwxrwx   1 root root  45 May 12 01:13 0mtt -
../../devices/pci@1f,4000/scsi@2,1/mt@1,0:mtt
lrwxrwxrwx   1 root root  42 May 12 01:14 1op -
../../devices/pci@1f,4000/scsi@5/op@2,0:op
lrwxrwxrwx   1 root root  43 May 12 01:14 1opt -
../../devices/pci@1f,4000/scsi@5/op@2,0:opt
lrwxrwxrwx   1 root root  42 May 12 01:14 2op -
../../devices/pci@1f,4000/scsi@5/op@3,0:op
lrwxrwxrwx   1 root root  43 May 12 01:14 2opt -
../../devices/pci@1f,4000/scsi@5/op@3,0:opt
lrwxrwxrwx   1 root root  42 May 12

Re: problem with tape library on solaris

2002-05-22 Thread Gerald Wichmann

The devc mount limit is drives.. there's no stuck tapes and I've even
stop/started the tsm server.. funny thing is it checks in tapes fine and
does an audit library fine and checks out tapes fine. That indicates the
drives work fine. It just kicks out that msg when you try to migrate or do
db backups or somehow use the tapes. I've got a ticket open with Tivoli
support and level 2 is having me do some traces. Doesn't look like there's
anything wrong with the config itself.

Regards,

Gerald Wichmann
Senior Systems Development Engineer
Zantaz, Inc.
925.598.3099 (w)

-Original Message-
From: Scott, Brian [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, May 21, 2002 2:02 PM
To: [EMAIL PROTECTED]
Subject: Re: problem with tape library on solaris

Gerald,

Take a look at your device class (q devc f=d) and check the mount limit
setting. Is it set to anything other than Drives?  Also, was anything else
mounted at the time that wasn't being registered by TSM? Stuck tape??

Hope this helps...

Brian Scott
EDS - Enterprise Distributed Capabilities
MS 3278
Troy, MI 48098

* phone: 248-265-4596 (8-365)
* mailto:[EMAIL PROTECTED]



-Original Message-
From: Gerald Wichmann [mailto:[EMAIL PROTECTED]]
Sent: Monday, May 20, 2002 12:18 PM
To: [EMAIL PROTECTED]
Subject: problem with tape library on solaris


Solaris 8, Sun E250, ATL500 tape library (2 drives), TSM 4.2.2

First time getting a tape library working on solaris so I could be doing
something wrong.. this is a test system. anyways it all seemed to work until
I tried using the library:

05/21/02   17:01:26ANR2017I Administrator ADMIN issued command: BACKUP
DB
dev=dc.tape.lib_tape_1 type=f

05/21/02   17:01:27ANR0984I Process 2 for DATABASE BACKUP started in the

BACKGROUND at 05:01:27 PM.

05/21/02   17:01:27ANR2280I Full database backup started as process 2.

05/21/02   17:01:47ANR4571E Database backup/restore terminated -
insufficient
number of mount points available for removable
media.
05/21/02   17:01:47ANR0985I Process 2 for DATABASE BACKUP running in the

BACKGROUND completed with completion state FAILURE
at
05:01:47 PM.

tsm: SERVER1> q libr

  Library Name: LIB_TAPE_1
  Library Type: SCSI
Device: /dev/rmt/7lb
  Private Category:
  Scratch Category:
  External Manager:
Shared: No
   LanFree:
ObeyMountRetention:

tsm: SERVER1> q dr f=d

Library Name: LIB_TAPE_1
  Drive Name: LIB_DRIVE_1
 Device Type: DLT
 On-Line: Yes
  Device: /dev/rmt/0mt
 Element: 16
Allocated to:
  Last Update by (administrator): ADMIN
   Last Update Date/Time: 05/21/02   17:00:43
Cleaning Frequency (Gigabytes/ASNEEDED/NONE): NONE

Library Name: LIB_TAPE_1
  Drive Name: LIB_DRIVE_2
 Device Type: DLT
 On-Line: Yes
  Device: /dev/rmt/6mt
 Element: 17
Allocated to:
  Last Update by (administrator): ADMIN
   Last Update Date/Time: 05/21/02   17:00:58
Cleaning Frequency (Gigabytes/ASNEEDED/NONE): NONE

bash-2.05# ls -l /dev/rmt
total 36
lrwxrwxrwx   1 root root  44 May 12 01:13 0mt -
../../devices/pci@1f,4000/scsi@2,1/mt@1,0:mt
lrwxrwxrwx   1 root root  45 May 12 01:13 0mtn -
../../devices/pci@1f,4000/scsi@2,1/mt@1,0:mtn
lrwxrwxrwx   1 root root  45 May 12 01:13 0mtt -
../../devices/pci@1f,4000/scsi@2,1/mt@1,0:mtt
lrwxrwxrwx   1 root root  42 May 12 01:14 1op -
../../devices/pci@1f,4000/scsi@5/op@2,0:op
lrwxrwxrwx   1 root root  43 May 12 01:14 1opt -
../../devices/pci@1f,4000/scsi@5/op@2,0:opt
lrwxrwxrwx   1 root root  42 May 12 01:14 2op -
../../devices/pci@1f,4000/scsi@5/op@3,0:op
lrwxrwxrwx   1 root root  43 May 12 01:14 2opt -
../../devices/pci@1f,4000/scsi@5/op@3,0:opt
lrwxrwxrwx   1 root root  42 May 12 01:14 3op -
../../devices/pci@1f,4000/scsi@5/op@4,0:op
lrwxrwxrwx   1 root root  43 May 12 01:14 3opt -
../../devices/pci@1f,4000/scsi@5/op@4,0:opt
lrwxrwxrwx   1 root root  42 May 12 01:14 4op -
../../devices/pci@1f,4000/scsi@5/op@5,0:op
lrwxrwxrwx   1 root root  43 May 12 01:14 4opt -
../../devices/pci@1f,4000/scsi@5/op@5,0:opt
lrwxrwxrwx   1 root root  42 May 12 01:14 5lb -
../../devices/pci@1f,4000/scsi@5/lb@6,0:lb
lrwxrwxrwx   1 root root  43 May 12 01:14 5lbt -
../../devices/pci@1f,4000/scsi@5/lb@6,0:lbt
lrwxrwxrwx   1

Re: AIX Backups on TSM

2002-05-16 Thread Gerald Wichmann

If it's truly a filesystem you'll want to use exclude.fs (e.g. /usr /tmp
/home etc)

If it's not a filesystem (it's a directory) you'll want to use
VIRTUALMOUNTPOINT and then exclude.fs

The above methods will make it so the scheduler doesn't even scan the
directories and their subdir's. This may or may not make your backups run
faster if there are a lot of files in those directories. Since it's db2 I
suspect there aren't and in this case it probably won't have a huge impact
on speed.
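
A rough sketch of what that combination looks like in dsm.sys (the sapdata
directory names here are just placeholders for whatever you want skipped):

   VIRTUALMountpoint /db2/PRD/sapdata1
   VIRTUALMountpoint /db2/PRD/sapdata2
   exclude.fs        /db2/PRD/sapdata1
   exclude.fs        /db2/PRD/sapdata2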

Using normal excludes what you're probably after is:

exclude /db2/PRD/sapdata/.../*
exclude /db2/PRD/sapdata/*

The scheduler will still scan the directories and files since you're not
using an exclude.fs and it will still backup the directory structure however
it will NOT back up any files in the directory structure and that's what
most people are usually after.

Regards,

Gerald Wichmann
Senior Systems Development Engineer
Zantaz, Inc.
925.598.3099 (w)

-Original Message-
From: Crawford, Lindy [mailto:[EMAIL PROTECTED]]
Sent: Thursday, May 16, 2002 4:04 AM
To: [EMAIL PROTECTED]
Subject: AIX Backups on TSM

HI TSMers

I have put an exclude statement in my dsm.sys file on AIX 4.3.3, TSM 4.2.1.
It is supposed to exclude filesystems, but it is still backing them
up... Please check below and verify the dsm.sys option for me:-


SErvername  boetsm1n



   COMMmethod TCPip

   TCPPort            1500

   TCPServeraddress 163.199.130.6

 PASSWORDACCESS GENERATE



tcpwin 64

tcpbuf 32

txnbytel 2097152

errorlogret 14

schedlogret 14

exclude /db2/PRD/sapdata*

As you can see above, I want to exclude all the sapdata filesystems... I've
even tried listing them individually, but for some reason it is still backing
them up.


HELP


 Lindy Crawford
 Business Solutions: IT
 BoE Corporate

 * +27-31-3642185
 +27-31-3642946
 [EMAIL PROTECTED] mailto:[EMAIL PROTECTED]




WARNING:
Any unauthorised use or interception of this email is illegal. If this email
is not intended for you, you may not copy, distribute nor disclose the
contents to anyone. Save for bona fide company matters, the BoE Group does
not accept any responsibility for the opinions expressed in this email.
For further details please see: http://www.nbs.co.za/emaildisclaim.htm



Re: Shutting down TSM on AIX

2002-05-16 Thread Gerald Wichmann

Kill command is ok however keep in mind that you'll want to check what the
TSM server is doing first before you do it. Make sure any backups/restores
can be cancelled (and the appropriate people informed) as well as any other
processes like migration and so forth. According to the TSM manual you
should do:

Disable sessions - prevents new clients from accessing TSM but permits
existing sessions to continue
Query sessions - check if any sessions are currently running
Cancel session - cancel above sessions as appropriate
Query process - check for any running processes
Cancel process - cancel them as appropriate
Then when the TSM server is quiet and you're ready to shut it down:
Halt

When you bring it back up use enable sessions to allow clients to access
TSM again..
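
In dsmadmc that whole sequence is just (the session/process numbers are
whatever q sess and q process show at the time):

   disable sessions
   q sess
   cancel session <session number>
   q process
   cancel process <process number>
   halt

and after the reboot, once the server is back up:

   enable sessions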

Are the above necessary? No not really.. doing a halt outright or sending a
kill signal to the process is going to cause the TSM server to cancel
everything anyways.. So if you know it's ok to do so then go for it. But I
usually do a quick check and quiet the TSM server personally..

Regards,

Gerald Wichmann
Senior Systems Development Engineer
Zantaz, Inc.
925.598.3099 (w)

-Original Message-
From: David E Ehresman [mailto:[EMAIL PROTECTED]]
Sent: Thursday, May 16, 2002 7:01 AM
To: [EMAIL PROTECTED]
Subject: Shutting down TSM on AIX

What is the preferred method of shutting down the TSM server on a AIX
box when AIX is being rebooted?  Is a kill command ok or is there a
kinder way to shutdown TSM without using dsmadmc?

David



Re: dsmc sched as another user

2002-05-16 Thread Gerald Wichmann

Ya good point and I thought of that. Fortunately it's not a big issue here.
The latter suggestion about creating a program and setting SUID doesn't work.
At least not as a ksh script.. That was the first thing I tried. So far only
sudo works..

Regards,

Gerald Wichmann
Senior Systems Development Engineer
Zantaz, Inc.
925.598.3099 (w)

-Original Message-
From: Thomas Denier [mailto:[EMAIL PROTECTED]]
Sent: Thursday, May 16, 2002 8:34 AM
To: [EMAIL PROTECTED]
Subject: Re: dsmc sched as another user

 Try using sudo.
 You can allow your non-root user execute only the dsmc command as root.

I think this would allow the non-root user to execute dsmc as root with
any operands, not just the 'sched' operand. This would be a serious
security exposure. The non-root user could replace any file on the system
with a copy of a different file or with an older version of the same file.
If the non-root user had root permission on any other Unix client system
the user could back up an arbitrary file there and restore it on the
system where he or she was a non-root user.

As far as I know, the only really safe way to do this is to write a
program specifically to start the scheduler and make that program
root owned, SUID, and executable by the user who needs to start the
scheduler. Many Unix systems even today have a bug that makes SUID
scripts dangerous. Unless you are certain that this bug is fixed on
your system you will need to write the program in C or some other
compiled language.
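
(If one does go the sudo route, the exposure can at least be narrowed by
restricting the sudoers entry to the exact command and argument, for example
a line along the lines of

   backupuser ALL = (root) NOPASSWD: /usr/bin/dsmc sched

where backupuser and the dsmc path are placeholders for your own setup; sudo
then refuses any other dsmc operands. Whether that is acceptable is a local
security call.)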



Re: Client scheduler

2002-05-15 Thread Gerald Wichmann

I would recommend doing basic troubleshooting.. check the TSM server
activity log around the time the backup window occurred (e.g. - q act
startt=01:00). What was the TSM server doing if not backing anyone up? Was
it trying to contact the clients and failing to (there would be repeated
"contacting client blahblah" msgs if it was)? Is there nothing there except
a "client XX missed backup window" at the end of the window?

Ditto check the dsmsched.log on some of the clients. What were they doing
around the supposed backup time? It should say clearly when they got their
last schedule and what it was.

Are your clients in prompted or polling mode? The reason for the missed
backups could vary depending on what mode you use. If polling, what was the
interval the clients poll at (default is 12 hours)?

By looking at the logs you should be easily able to determine if it was a
connectivity problem or if the clients simply missed their schedules due to
an error in timing. You should also be able to see if the clients are
receiving the schedules and what time they individually think they're
supposed to begin their backups. Some basic troubleshooting goes a long
way..
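
A few commands that make that checking quick on the server side:

   q event * * begind=today-1 begint=00:00
   q actlog begind=today-1 begint=01:00

The first shows whether the server recorded each schedule as completed,
missed, or failed; the second shows what the server itself was doing during
the window.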

Regards,

Gerald Wichmann
Senior Systems Development Engineer
Zantaz, Inc.
925.598.3099 (w)

-Original Message-
From: Jin Bae Chi [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, May 15, 2002 2:29 PM
To: [EMAIL PROTECTED]
Subject: Re: Client scheduler

TSM server had MST and I changed to EST, CUT -5, but the result was
wrong one. So, I went to SMIT on AIX and changed time only without
rebooting again. It looked fine and TSM was getting the right time.
You're probably right on that, but I don't know how to calculate the
difference in time. Do baclients need to be restarted after TSM server
having been rebooted? If not, as you said, I'll wait tonight to see if
client schedules work. I just want to make sure that I don't need to do
anything else. Thanks for your help.



Jin Bae Chi (Gus)
Data Center
614-287-2496
614-287-5488 Fax
e-mail: [EMAIL PROTECTED]


 [EMAIL PROTECTED] 05/15/02 05:07PM 
Did the resulting change in time on the TSM server mean that the
schedules
were now past their startup window?

Eg.
01:00 change time to 04:00
Client schedule is set to start at 02:00 - has 1 hour startup window

In this case, the schedules would be misssed because TSM server was
down
(sort of) between 01:00 and 04:00

-Original Message-
From: Jin Bae Chi [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, May 15, 2002 3:35 PM
To: [EMAIL PROTECTED]
Subject: Client scheduler


Hi, all,

Last night I had to reboot the AIX on which TSM was running because I
changed time zone on AIX.

I made sure of no session and no process running and disabled sessions
and 'halted' TSM. and shutdown and reboot AIX box. TSM came up
automatically. I enabled sessions and accepted date. Well, everything
looked normal.

When I checked this morning, all client scheduler had been MISSED.
What
did I do wrong? Do I need to start admin scheduler from server
manually?
how? Any comment will be appreciated. Thanks





Jin Bae Chi (Gus)
Data Center
614-287-2496
614-287-5488 Fax
e-mail: [EMAIL PROTECTED]



Re: Client scheduler

2002-05-15 Thread Gerald Wichmann

There isn't any reason to reboot the clients.. At worst all you need to do
is stop/start the dsmc schedule process. How you go about that is dependent
on the operating system. On NT/2000 it's a service you just click stop and
start. On Unix you kill the dsmc sched process and restart it. Regardless
it's highly unlikely that needs to be done for something like a time change.

The output sent is relatively useless. It shows clients connecting to the
server successfully and doing something. What are they doing? Who knows..
Maybe they're polling the server for a schedule. Maybe they're actually
performing a backup. I don't know because I only see a 17 minute timeframe
of your activity log. You need to look at the entire activity log for your
backup window. Since I don't know how big your scheduled backup window is I
don't know how much that would be either. Bottom line is it'd be
inappropriate to post it on this list anyways so don't do it.

In order to really know whats going on between a given client and the TSM
server you also need to look at the client's dsmsched.log file. Comparing
that with the same window of time on the TSM server's activity log should
paint a real clear picture of what the two were doing and why the schedule
was missed. Do not post any more logs here. You need to look at them
yourself and troubleshoot.

It's highly likely the backups will run fine tonight and it was simply a
discrepancy in time, as was mentioned previously. There are a number of ways
this can occur but the fact that you are in polling mode with the interval
set to 12 hours makes it even more likely.

If this is not something you feel comfortable solving yourself then pick up
the phone and call Tivoli support. That's what they're there for and they'll
figure it out quicker than doing so here.

Regards,

Gerald Wichmann
Senior Systems Development Engineer
Zantaz, Inc.
925.598.3099 (w)

-Original Message-
From: Jin Bae Chi [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, May 15, 2002 5:03 PM
To: [EMAIL PROTECTED]
Subject: Re: Client scheduler

Thanks for giving me a good insight. I'm attaching the actlog from
00:00.(I'm sorry if it's too long). We have client polling every 12
hours. It looks like the client started and couldn't connect with the
server and ended immediately.
Today I did a test on one of client node. I rebooted the client node
and ran its schedule and it went OK. I suspect all other nodes need to
be restarted or rebooted to able to connect to the server with new time.
If that's the case, there are too many nodes that need to be rebooted.
Please correct me if I'm wrong. Thanks again.


05/15/02 00:00:12 ANR0403I Session 414 ended for node COMPAQ2
(Win95).
05/15/02 00:00:15 ANR0407I Session 422 started for administrator
JBAECHI
   (WebBrowser) (HTTP 10.2.11.119(1251)).

05/15/02 00:00:15 ANR2017I Administrator JBAECHI issued command:
QUERY
   ACTLOG

05/15/02 00:00:15 ANR0405I Session 422 ended for administrator
JBAECHI
   (WebBrowser).

05/15/02 00:01:22 ANR0406I Session 434 started for node COMPAQ2
(Win95)
   (Tcp/Ip 10.2.11.119(1263)).

05/15/02 00:01:24 ANR0406I Session 435 started for node COMPAQ2
(Win95)
   (Tcp/Ip 10.2.11.119(1264)).

05/15/02 00:01:30 ANR2562I Automatic event record deletion started.

05/15/02 00:01:30 ANR2565I 0 schedules for immediate client actions
have
   been deleted.

05/15/02 00:01:30 ANR2563I Removing event records dated prior to
05/08/02
   00:00:00.

05/15/02 00:01:30 ANR2564I Automatic event record deletion ended -
40
   records deleted.

05/15/02 00:02:32 ANR0403I Session 434 ended for node COMPAQ2
(Win95).
05/15/02 00:02:33 ANR0403I Session 435 ended for node COMPAQ2
(Win95).
05/15/02 00:02:47 ANR0406I Session 436 started for node COMPAQ2
(Win95)
   (Tcp/Ip 10.2.11.119(1265)).

05/15/02 00:02:51 ANR0403I Session 436 ended for node COMPAQ2
(Win95).
05/15/02 00:05:33 ANR0406I Session 437 started for node DUBLIN
(NetWare)
   (Tcp/Ip 10.15.1.5(3003)).

05/15/02 00:05:33 ANR0403I Session 437 ended for node DUBLIN
(NetWare).
05/15/02 00:05:33 ANR0406I Session 438 started for node DUBLIN
(NetWare)
   (Tcp/Ip 10.15.1.5(3004)).

05/15/02 00:05:33 ANR0403I Session 438 ended for node DUBLIN
(NetWare).
05/15/02 00:06:38 ANR0406I Session 439 started for node COMPAQ2
(Win95)
   (Tcp/Ip 10.2.11.119(1266)).

05/15/02 00:06:40 ANR0403I Session 439 ended for node COMPAQ2
(Win95).
05/15/02 00:10:59 ANR0406I Session 440 started for node BOLTON
(NetWare)
   (Tcp/Ip 10.13.1.5(1842)).

05/15/02 00:11:00 ANR0403I Session 440 ended for node BOLTON
(NetWare).
05/15/02 00:11:00 ANR0406I Session 441 started for node BOLTON
(NetWare

dsmc sched as another user

2002-05-15 Thread Gerald Wichmann

On linux when starting the dsmc sched process you need to be root. Otherwise
it says "ANS1817E Schedule function can only be run by a TSM authorized
user".

I'm trying to write a script that gets run by a non-root user to start the
scheduler. Is it possible to get around this limitation somehow? I've tried
chmod 4755 on dsmc and even that won't work. Looking up ANS1817E in the
messages guide doesn't yield any useful information either.

Regards,

Gerald Wichmann
Senior Systems Development Engineer
Zantaz, Inc.
925.598.3099 (w)



solaris tsm server

2002-05-13 Thread Gerald Wichmann

I'm not all that familiar with running TSM on solaris.. trying to configure
an ATL tape library on a test box I have here. When I define the library it
crashes the TSM server:

05/14/02 17:58:01  ANR2017I Administrator ADMIN issued command: DEFINE
LIBR
lib_tape_1 libt=scsi devi=/dev/rmt/0mt

05/14/02 18:09:14  ANR9999D mmstxn.c(219): ThreadId42 Lock acquisition

(sLock) failed for MMS universe lock.

05/14/02 18:09:14  ANR2033E QUERY LIBRARY: Command failed - lock
conflict.
05/14/02 18:09:14  ANR9999D mmstxn.c(219): ThreadId40 Lock acquisition

(sLock) failed for MMS universe lock.

05/14/02 18:09:14  ANR2033E QUERY LIBRARY: Command failed - lock
conflict.

05/14/2002 18:12:57  ANR7824S Server operation terminated.
05/14/2002 18:12:57  ANR7823S Internal error LOCKCYCLE02 detected.
05/14/2002 18:12:57  ANR9999D Trace-back of called functions:
05/14/2002 18:12:57  ANR9999D   0x0001000837CC  pkAbort
05/14/2002 18:12:57  ANR9999D   0x000100083858  pkLogicAbort
05/14/2002 18:12:57  ANR9999D   0x0001009865EC  CheckLockCycles
05/14/2002 18:12:57  ANR9999D   0x000100982080  TmFindDeadlock
05/14/2002 18:12:57  ANR9999D   0x000100981D60  TmDeadlockDetector
05/14/2002 18:12:57  ANR9999D   0x000100087818  StartThread
05/14/2002 18:12:57  ANR9999D   0x7EC1F844  *UNKNOWN*
05/14/2002 18:12:57  ANR9999D   0x000100087710  StartThread

My question is: what's going on here? It's possible, I suppose, that I might
have the device wrong.. It's also fairly likely I don't have the ATL library
device driver loaded. How would I check this on solaris? How do I determine
what devices the library and its drives are? I'm guessing that's probably my
problem here (a lack of solaris knowledge). Appreciate any feedback.



Regards,

Gerald Wichmann
Senior Systems Development Engineer
Zantaz, Inc.
925.598.3099 (w)



starting tsm server on solaris

2002-05-13 Thread Gerald Wichmann

dsmserver:234:once:/opt/tivoli/tsm/server/bin/dsmserv >/dev/null 2>&1

I put that line into /etc/inittab, but on reboot the TSM server doesn't come
up.. I have to cd into the bin directory and manually start it.. I notice
you have to set an environment variable to start the tsm server from a
directory other than its installed directory, but in order for it to
work in inittab, I'm not sure where to put the environment variable, nor
even if that's why it isn't starting (though I suspect it is why).

Appreciate any help. Not used to solaris..
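
One common way around this is to point inittab at a small wrapper script
instead of dsmserv itself. A sketch: the DSMSERV_DIR and DSMSERV_CONFIG
variable names are the ones I believe the server honors, but verify them
against the Quick Start for your level, and the paths are just examples.

   #!/bin/sh
   # /opt/tivoli/tsm/server/bin/rc.dsmserv - start TSM server at boot
   # (paths below are examples; adjust to the actual install location)
   DSMSERV_DIR=/opt/tivoli/tsm/server/bin
   DSMSERV_CONFIG=/opt/tivoli/tsm/server/bin/dsmserv.opt
   export DSMSERV_DIR DSMSERV_CONFIG
   cd $DSMSERV_DIR
   ./dsmserv quiet >/dev/null 2>&1 &

and then in /etc/inittab:

   dsmserver:234:once:/opt/tivoli/tsm/server/bin/rc.dsmserv >/dev/null 2>&1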

Regards,

Gerald Wichmann
Senior Systems Development Engineer
Zantaz, Inc.
925.598.3099 (w)



Re: JDBC-ODBC Bridge and TSM ODBC driver

2002-05-10 Thread Gerald Wichmann

With IBM's pro-java stance I've always found it odd that TSM doesn't have
better JDBC support. It really makes it difficult for a company to write in
house code to query the TSM server DB for info and use it. Outside of using
SNMP software and decision support to monitor/report a TSM server why
doesn't IBM support some more direct means of doing so like via the use of
JDBC support? I've always found that odd...

Regards,

Gerald Wichmann
Senior Systems Development Engineer
Zantaz, Inc.
925.598.3099 (w)

-Original Message-
From: Andrew Raibeck [mailto:[EMAIL PROTECTED]]
Sent: Friday, May 10, 2002 12:40 PM
To: [EMAIL PROTECTED]
Subject: Re: JDBC-ODBC Bridge and TSM ODBC driver

You need to use the JDBC -- ODBC bridge. There is no JDBC driver for
TSM, nor do we plan on implementing one at this time.

Which version of the ODBC driver are you using? The only ones that I know
of that work (to at least some minimal degree) are 5.1.0.1 and now
4.2.2.0. Any future versions should also work. Note that we do not perform
extensive testing with the bridge, so we do not formally support it, and I
can not guarantee results. However, I can at least issue a select
statement and get results back.

I haven't worked with Java in at least 2 years (and am extremely rusty),
and I have almost zero experience with the JDBC-ODBC bridge. However, I
did pick up a book by Gregory D. Speegle called "JDBC Practical Guide for
Java Programmers" which gives information on how to do this.

Adapting an example from that book, I came up with some code that displays
the DATE_TIME and MESSAGE field from the ACTLOG table:

TSMConnect.java:

import java.sql.*;

public class TSMConnect
{
public Connection connect()
   throws SQLException
{
try
{
Class.forName("sun.jdbc.odbc.JdbcOdbcDriver");
}
catch (ClassNotFoundException e)
{
throw new SQLException("Unable to load JdbcOdbcDriver class");
}

// arguments are "jdbc:odbc:yourdsn", "youradmin", "yourpw"
return DriverManager.getConnection("jdbc:odbc:amr_odbc",
   "raibeck",
   "tsm0dbc");
}

public void close(Connection dbc, Statement stmt)
{
try
{
if (stmt != null)
stmt.close();
if (dbc != null)
dbc.close();
}
catch (SQLException sqlex) {}
}

public static void main(String args[])
{
TSMConnect TC = new TSMConnect();
Connection dbc = null;
Statement stmt = null;
try
{
dbc = TC.connect();
System.out.println("Connection opened.");
stmt = dbc.createStatement();
System.out.println("Created a statement.");
}
catch (SQLException sqlex)
{
System.out.println(sqlex.getMessage());
}
finally
{
TC.close(dbc, stmt);
System.out.println("Connection closed.");
}
}
}



TSM.java:

import java.sql.*;

public class TSM extends TSMConnect
{
public static void main(String args[])
{
if (args.length != 0)
{
System.out.println("Usage: java TSM");
System.exit(1);
}

String query = "SELECT * FROM ACTLOG";

TSM tsmObj = new TSM();
Connection dbc = null;
Statement stmt = null;
ResultSet resultSet = null;

try
{
dbc = tsmObj.connect();
stmt = dbc.createStatement();
resultSet = stmt.executeQuery(query);
tsmObj.presentResultSet(resultSet);
}
catch (SQLException sqlex)
{
System.out.println(sqlex.getMessage());
}
finally
{
tsmObj.close(dbc, stmt);
}
}

public void presentResultSet(ResultSet rs)
   throws SQLException
{
if (!rs.next())
System.out.println("No records to display");
else
{
do
{
System.out.println(rs.getString("DATE_TIME") + ": " +
rs.getString("MESSAGE"));
}
while (rs.next());
}
}
}


Note that you need to put your DSN, admin ID, and admin password in the
TSMConnect.java file.

To build the code, run

   javac TSM.java

To run the code

   java TSM

Regards,

Andy

Andy Raibeck
IBM Software Group
Tivoli Storage Manager Client Development
Internal Notes e-mail: Andrew Raibeck/Tucson/IBM@IBMUS
Internet e-mail: [EMAIL PROTECTED] (change eye to i to reply)

The only dumb question is the one that goes unasked.
The command line is your friend.
Good enough is the enemy

DB2 hot backups

2002-05-10 Thread Gerald Wichmann

Do all DB2 versions allow for hot backups to Tivoli? Or do you need
enterprise edition or something? Specifically I'm curious about DB2 for
workgroups on linux..

Regards,

Gerald Wichmann
Senior Systems Development Engineer
Zantaz, Inc.
925.598.3099 (w)



Re: comments on disk storage plan

2002-05-09 Thread Gerald Wichmann

With only 6 nodes pushing data to the TSM server at a time, what kind of
throughput are you expecting the TSM server to have to handle (is the
network going to limit throughput? How much data do you expect the nodes
themselves to push?) I.e. do you really think with that many disks in your
disk pool you're going to see some discernable difference between jfs and
raw logical volumes? You'll see terrific performance regardless of which
route you go but you need to leverage the advantage of having so many
disks...

Keep in mind how TSM uses disk storage pool volumes. In your below scenario
you suggest having 32 disk pool volumes and yet you also mention only approx
6 nodes will be backing up at once. Assuming only 1 thread per node, you'll
only be accessing 6 of those 32 disk volumes at any one time. The others
will sit idle while each node essentially backs up to only 1 disk at a time.
You wouldn't be leveraging the advantage of having so many disks..

Using striping or even RAID-5 makes much more sense. E.g. you could do as
Rich mentioned below. Or perhaps 6 RAID-5 arrays of 5 disks each with 2 hot
spares? Certainly makes admining easier since you'll never lose data. You're
going to have to take all these variables into consideration. How is TSM
going to hit those volumes you create? How is AIX going to write to the
disks given the configuration you're considering? How are the SSA adapters
and pathways going to handle the given configuration?

Gerald Wichmann
Sr. Systems Development Engineer
Zantaz, Inc.
925.598.3099 w
408.836.9062 c

-Original Message-
From: Andrew Carlson [mailto:[EMAIL PROTECTED]]
Sent: Thursday, May 09, 2002 11:39 AM
To: [EMAIL PROTECTED]
Subject: Re: comments on disk storage plan

I haven't seen the same thing as you. I was using raw logical volumes,
and converted to JFS.  I saw little if no penalty during write, but a
big win during the sequential reads.  As always, your mileage may vary,
but I would not rule out JFS if I was you.


Andy Carlson|\  _,,,---,,_
Senior Technical Specialist   ZZZzz /,`.-'`'-.  ;-;;,_
BJC Health Care|,4-  ) )-,_. ,\ (  `'-'
St. Louis, Missouri   '---''(_/--'  `-'\_)
Cat Pics: http://andyc.dyndns.org/animal.html


On Thu, 9 May 2002, Rich Brohl wrote:

 Chris,

 I would create a striped logical volume using as many disks as possible
 (the more the better) in the Volume Group.  Also, do not create a
 filesystem; just leave the Logical Volume in the Volume Group.  This will
 keep you from having to deal with the AIX JFS overhead.  When you define
 your volumes to the disk storage pool point it to the (r)logical volume
 name.  For instance if you create a logical volume in the volume group
  named tsmstg1lv, then in TSM define your volume /dev/rtsmstg1lv to the
disk
 storage pool. This will give you some awesome throughput.

 On the flip side when you do your migration to tape you will be doing
  mostly sequential reads from the disk storage pools, so they can keep the
 tape drive(s)  running at max write speed.

 Rich Brohl
 ISM Support
 Tie Line  547-9317
 Direct  303-693-4969
 Pager 888-524-9030


 chris rees [EMAIL PROTECTED]@VM.MARIST.EDU on 05/09/2002 08:46:02
 AM

 Please respond to ADSM: Dist Stor Manager [EMAIL PROTECTED]

 Sent by:ADSM: Dist Stor Manager [EMAIL PROTECTED]


 To:[EMAIL PROTECTED]
 cc:
 Subject:comments on disk storage plan


 Hi all,

 Just wanted to sanity check my disk storage pool layout I'm planning.

 AIX 4.3.3
 Server H80
 2 x D40 SSA drawers, two loops over two SSA 160 adapters
 32 x 9Gb SSA disks
 Max nodes backing up at any one time is approx 6

 I don't want to mirror the volumes

 32 x 9Gb logical volumes, each logical volume on one SSA disk. These lvs
 will then translate into 32 disk storage pool volumes. This way if I have
a
 disk failure I only lose one disk pool volume.

 Anyone got any ideas what kind of read/write performance I should expect
to
 see from this config?

 Any other options considered.

 Regards

 Chris



 _
 Get your FREE download of MSN Explorer at
http://explorer.msn.com/intl.asp.




dsmc via cmdline

2002-05-09 Thread Gerald Wichmann

If I have a client that is not in passwordaccess generate mode, how do I run
dsmc from the command line and specify a login and password similar to doing
it with dsmadmc:

dsmadmc -id=admin -pa=admin q sess

I'd like to do the same but with dsmc

dsmc ? ? q sess

if I remember correctly this can be done but I'm having trouble finding it
in the redbooks..

Gerald Wichmann
Sr. Systems Development Engineer
Zantaz, Inc.
925.598.3099 w
408.836.9062 c



Re: dsmc via cmdline

2002-05-09 Thread Gerald Wichmann

Never mind, I figured it out... I was putting the -pas before the q sess and
it didn't like that.

[root@sc-s1-172-1 /root]# dsmc q sess -pas=blah
Tivoli Storage Manager
Command Line Backup Client Interface - Version 4, Release 2, Level 1.0
(C) Copyright IBM Corporation, 1990, 2001, All Rights Reserved.

Node Name: SC-S1-172-1
Session established with server SERVER1: Solaris 7/8
  Server Version 4, Release 2, Level 1.0
  Server date/time: 05/10/2002 23:13:05  Last access: 05/10/2002 22:11:35

TSM Server Connection Information

Server Name.: SERVER1
Server Type.: Solaris 7/8
Server Version..: Ver. 4, Rel. 2, Lev. 1.0
Last Access Date: 05/10/2002 22:11:35
Delete Backup Files.: No
Delete Archive Files: Yes

Node Name...: SC-S1-172-1
User Name...: root

[root@sc-s1-172-1 /root]#

Regards,

Gerald Wichmann
Senior Systems Development Engineer
Zantaz, Inc.
925.598.3099 (w)

-Original Message-
From: Malbrough, Demetrius [mailto:[EMAIL PROTECTED]]
Sent: Thursday, May 09, 2002 2:27 PM
To: [EMAIL PROTECTED]
Subject: Re: dsmc via cmdline

Gerald,

Are you speaking of having separate stanzas in the dsm.sys file with
different SErvername options?

dsmc -se=SERVER1

Regards,

Demetrius

-Original Message-
From: Gerald Wichmann [mailto:[EMAIL PROTECTED]]
Sent: Thursday, May 09, 2002 3:53 PM
To: [EMAIL PROTECTED]
Subject: dsmc via cmdline


If I have a client that is not in passwordaccess generate mode, how do I run
dsmc from the command line and specify a login and password similar to doing
it with dsmadmc:

dsmadmc -id=admin -pa=admin q sess

I'd like to do the same but with dsmc

dsmc ? ? q sess

if I remember correctly this can be done but I'm having trouble finding it
in the redbooks..

Gerald Wichmann
Sr. Systems Development Engineer
Zantaz, Inc.
925.598.3099 w
408.836.9062 c



Re: dsmc via cmdline

2002-05-09 Thread Gerald Wichmann

No.. normally when you first install the client on Linux, even with
passwordaccess generate, you must initially run dsmc and perform a query or
something so that it asks for your login and password. Once done, it saves
the password in an encrypted file and won't ask you again in the future.

I'm trying to automate the install and start the dsmc process without having
to have some user start it once to set the password. Isn't there a way to
run dsmc and specify the password from the command line, e.g. from a
ksh/perl script that calls dsmc, so that you can pass the password as an
argument to perform an operation? E.g. I want a script to do a dsmc
incremental, but I also don't want my client to be in passwordaccess
generate mode.

With dsmadmc you can specify a login and password as argument (e.g. dsmadmc
-id=admin -pa=admin 'q stg').. I want to do something similar with dsmc..
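
For illustration, the kind of invocation I have in mind (the password is
obviously a placeholder):

#!/bin/ksh
# Run one throwaway query with the password supplied on the command line so
# nobody has to type it interactively after the install.
dsmc query session -password=blah >/dev/null 2>&1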

Regards,

Gerald Wichmann
Senior Systems Development Engineer
Zantaz, Inc.
925.598.3099 (w)

-Original Message-
From: Malbrough, Demetrius [mailto:[EMAIL PROTECTED]]
Sent: Thursday, May 09, 2002 2:27 PM
To: [EMAIL PROTECTED]
Subject: Re: dsmc via cmdline

Gerald,

Are you speaking of having separate stanzas in the dsm.sys file with
different SErvername options?

dsmc -se=SERVER1

Regards,

Demetrius

-Original Message-
From: Gerald Wichmann [mailto:[EMAIL PROTECTED]]
Sent: Thursday, May 09, 2002 3:53 PM
To: [EMAIL PROTECTED]
Subject: dsmc via cmdline


If I have a client that is not in passwordaccess generate mode, how do I run
dsmc from the command line and specify a login and password similar to doing
it with dsmadmc:

dsmadmc -id=admin -pa=admin q sess

I'd like to do the same but with dsmc

dsmc ? ? q sess

if I remember correctly this can be done but I'm having trouble finding it
in the redbooks..

Gerald Wichmann
Sr. Systems Development Engineer
Zantaz, Inc.
925.598.3099 w
408.836.9062 c



Re: Problem with admin account?

2002-05-09 Thread Gerald Wichmann

You need to use GRANT AUTHORITY on the admin account after you create it to
give it whatever privileges you want it to have. Note the true admin
account's details below, compared to your own (which has no system privilege):

tsm: SERVER1> q admin admin f=d

             Administrator Name: ADMIN
          Last Access Date/Time: 05/10/02   23:51:18
         Days Since Last Access: 1
         Password Set Date/Time: 04/19/02   02:51:41
        Days Since Password Set: 21
          Invalid Sign-on Count: 0
                        Locked?: No
                        Contact:
               System Privilege: Yes
               Policy Privilege: ** Included with system privilege **
              Storage Privilege: ** Included with system privilege **
              Analyst Privilege: ** Included with system privilege **
             Operator Privilege: ** Included with system privilege **
        Client Access Privilege: ** Included with system privilege **
         Client Owner Privilege: ** Included with system privilege **
         Registration Date/Time: 04/19/02   02:51:41
      Registering Administrator: SERVER_CONSOLE
               Managing profile:
     Password Expiration Period:


tsm: SERVER1
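
For the checkin/checkout case below, granting broader authority would look
something like this (unrestricted storage or system privilege is what those
library commands generally want; see 'help grant authority' for the classes
your level supports):

grant authority julie classes=storage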

Regards,

Gerald Wichmann
Senior Systems Development Engineer
Zantaz, Inc.
925.598.3099 (w)

-Original Message-
From: Julie Xu [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, May 08, 2002 11:27 PM
To: [EMAIL PROTECTED]
Subject: Problem with admin account?

Dear all,

I have setup a new admin account as:
adsm> q admin julie f=d

           Administrator Name: JULIE
        Last Access Date/Time: 05/09/2002 15:53:04
       Days Since Last Access: 1
       Password Set Date/Time: 10/12/1998 11:58:17
      Days Since Password Set: 1,305
        Invalid Sign-on Count: 0
                      Locked?: No
                      Contact: julie xu
             System Privilege:
             Policy Privilege:
            Storage Privilege: BACKUPTAPE
            Analyst Privilege:
           Operator Privilege: Yes
       Registration Date/Time: 10/12/1998 11:58:17
    Registering Administrator: ADMINISTRATOR
             Managing profile:

I thought it would allow me to do checkin/checkout for the BACKUPTAPE
storage, but it does not. Did I do something wrong?

Any comments will be appreciated

Thanks in advance


Julie Xu

Unix/Network Administrator
Information Technology Directorate
University of Westen Sydney, Campbelltown
Campbelltown NSW 2560

Phone: 61 02 4620-3098
Mobile: 0416 179 868
Email: [EMAIL PROTECTED]



Re: TSM 5.1 - RH 7.2 Linux client

2002-05-06 Thread Gerald Wichmann

Nope. On all other platforms the API isn't necessary. I agree with your
customer. You could verify it by installing regardless of the dependency
(--nodeps flag). I'm guessing Tivoli made a mistake including the API as a
dependency, unless someone from Tivoli wants to clarify why in this case
it's suddenly necessary..
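
E.g., to test that theory at your own risk:

rpm -ivh --nodeps TIVsm-BA.i386.rpm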

Gerald Wichmann
Sr. Systems Development Engineer
Zantaz, Inc.
925.598.3099 w
408.836.9062 c

-Original Message-
From: John Bremer [mailto:[EMAIL PROTECTED]]
Sent: Monday, May 06, 2002 7:31 AM
To: [EMAIL PROTECTED]
Subject: TSM 5.1 - RH 7.2 Linux client

Greetings,

First impression feedback from one of my customers  ... requiring the
installation of the API to get the baseline client to install is
wrong.  It's likely that I will never, ever, need anything else from the
API package.

[root@kodiak ~]# rpm -Fvh ~jgd/TIVsm-BA.i386.rpm
error: failed dependencies:
 libApiDS.so is needed by TIVsm-BA-5.1.0-1

Can anyone out there supply a reason for this dependency?

Thank you.  John Bremer



Re: dsmc scheduler problems

2002-05-06 Thread Gerald Wichmann

Hmm are both dsmc processes listening on the same port? It's been a while
since I've done 2 dsmc processes but I believe one has to run on a different
port from the other. Or alternatively perhaps run in polling mode instead of
prompted.
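
For example, something along these lines in each server stanza of dsm.sys
(option values are illustrative; 1501 is the default client port):

* Give each scheduler its own listening port so the two prompted
* sessions don't collide.
SErvername  TSM1
   TCPServeraddress   12.7.7.7
   TCPCLIENTPort      1501

SErvername  TSM2
   TCPServeraddress   11.5.5.5
   TCPCLIENTPort      1502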

Gerald Wichmann
Sr. Systems Development Engineer
Zantaz, Inc.
925.598.3099 w
408.836.9062 c

-Original Message-
From: Eduardo Martinez [mailto:[EMAIL PROTECTED]]
Sent: Monday, May 06, 2002 9:06 AM
To: [EMAIL PROTECTED]
Subject: dsmc scheduler problems

Hello *SMers.

I have the following problem:

I have 2 TSM servers (TSM1 and TSM2) on different networks, and I'm trying
to back up one AIX box to both of them. I have something like this:

                          +--------+
                      +---| TSM1   |
+--------+  +------+  |   | NIC    |  12.7.7.7
| AIX    |--|Router|--+   +--------+
| NIC    |  +------+  |   +--------+
+--------+            +---| TSM2   |
 10.1.1.1                 | NIC    |  11.5.5.5
                          +--------+

I run 2 dsmc processes on AIX:

dsmc sched
dsmc sched -se=TSM2

When I first run them, everything works fine for both scheduled backups, but
the next day (I have scheduled daily backups), only the backup for TSM1
works, and the backup for TSM2 is missed. I have to kill the dsmc sched
-se=TSM2 process and run it again and it starts backing up, but, again, it
fails for the next scheduled operation.

Does anyone know what could be happening here?
By the way, the schedule on TSM1 begins at 7:00 PM and on TSM2 at 1:00 PM.

Thanks in advance.


=
Do or Do Not, there is no try
-Yoda. The Empire Strikes Back




postschedul command

2002-05-06 Thread Gerald Wichmann

Is it possible to run a post-schedule command only if the backup was
successful?

E.g. I want it to back up some files in a directory, then erase the files in
that directory - but only if the backup was successful. If it wasn't, I'd
rather it leave the files there so it can back them up the next time around.
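
One workaround sketch, since the post-schedule command itself runs either
way: have it check the tail of dsmsched.log before cleaning up. The paths,
directory name, and log text matched below are assumptions to verify against
your client level.

#!/bin/ksh
# Hypothetical POSTSCHEDULECMD wrapper: only purge the staging directory if
# the just-finished scheduled backup did not log a failure.
LOG=/opt/tivoli/tsm/client/ba/bin/dsmsched.log
STAGEDIR=/data/staging

if tail -200 "$LOG" | grep "Scheduled event" | grep -q "failed"; then
    echo "Last scheduled backup reported a failure; leaving $STAGEDIR alone"
else
    rm -f "$STAGEDIR"/*
fi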

Gerald Wichmann
Sr. Systems Development Engineer
Zantaz, Inc.
925.598.3099 w
408.836.9062 c



preschedulecmd

2002-05-06 Thread Gerald Wichmann

If your preschedule command is a script that returns a non-zero code on
failure, will TSM still start the backup? My understanding is no, but I'm
looking at the admin guide to verify and it doesn't mention return codes. It
just says it'll run your command before it does the backup.
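
For illustration, the sort of wrapper I mean; whether a non-zero exit really
suppresses the scheduled backup is exactly what I'm trying to confirm:

#!/bin/ksh
# Hypothetical PRESCHEDULECMD wrapper: pass any failure back to the
# scheduler as a non-zero exit code (script name is a placeholder).
/usr/local/bin/prep_backup.sh
rc=$?
if [ $rc -ne 0 ]; then
    echo "prep_backup.sh failed with rc=$rc"
    exit $rc
fi
exit 0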

Gerald Wichmann
Sr. Systems Development Engineer
Zantaz, Inc.
925.598.3099 w
408.836.9062 c



open file

2002-05-06 Thread Gerald Wichmann

How do you simulate an open file on Linux? I've tried creating a simple
script and then, in a 2nd ssh session, loading it in via vi so it's open, but
TSM comes right along and backs it up no problem. I've also tried making an
executable script that just loops infinitely, and it backs that up no
problem too.
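
For what it's worth, the closest I've gotten is to keep the file actively
changing while the backup runs, along these lines (file name is a
placeholder):

( while true; do date >> /tmp/busyfile; sleep 1; done ) &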

Gerald Wichmann
Sr. Systems Development Engineer
Zantaz, Inc.
925.598.3099 w
408.836.9062 c



schedule prompter

2002-05-06 Thread Gerald Wichmann

My clients all run a preschedule command that can take 15 minutes or even
longer to run. During this time, every 30 seconds, the TSM server tries to
contact each client as follows:

ANR2561I Schedule prompter contacting SC-S1-172-2 (session 281) to start a
scheduled operation.

With lots of clients, this really floods my activity log. Is there a way to
increase the 30 seconds to a longer value? I'm not having much luck finding
anything.

Alternatively I suppose I could put the clients in polling mode..
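
For example, in dsm.sys (QUERYSCHEDPERIOD is in hours; the value is
illustrative):

SCHEDMODe          POLLING
QUERYSCHedperiod   4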

Gerald Wichmann
Sr. Systems Development Engineer
Zantaz, Inc.
925.598.3099 w
408.836.9062 c



Re: compress setup question

2002-05-03 Thread Gerald Wichmann

ADSM 3.1.2? Ouch.. that's old and not even supported anymore..

Anyway, I started with 3.7, so other than that discrepancy...

1. There shouldn't be a problem, although you should be aware that it will
increase the load on all your servers, as they'll now have to also compress
the files when doing backups. I've seen this become an issue on servers that
already experience a significant load.
2. The only time I've seen a "no space on server" type error in regard to
compression is when you have caching enabled on your disk pools. So caching
+ compression would cause problems, but each without the other worked great.

3. Again, the only problem is that all clients will now experience an
increased load due to having to compress the data. For most this isn't an
issue, but some environments may have a client or two that already carry
quite a load under normal operations.
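
If they really do want it forced from the server side, it's a per-node
setting; something like this (node name is a placeholder):

update node WINNT01 compression=yes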

Gerald Wichmann
Sr. Systems Development Engineer
Zantaz, Inc.
925.598.3099 w
408.836.9062 c

-Original Message-
From: Julie Xu [mailto:[EMAIL PROTECTED]]
Sent: Thursday, May 02, 2002 8:31 PM
To: [EMAIL PROTECTED]
Subject: compress setup question

Dear adsmer,

We have set up our ADSM server version 3.1.2 so that compression is the
client's choice:
Compression: Client's Choice

Our clients do not want the choice, and they want me to set up compression
at the server end.

Before I do it, I would like to get advice about:
1. For WinNT servers, will forced compression cause problems at
backup/restore time?
2. The reason they do not like making the decision about compression is that
some machines are OK to compress; some get an error like "no space on
server". Can that be avoided by setting up compression on the server?
3. What are the potential problems related to the ADSM server forcing
compression?

Regards

Julie

Julie Xu

Unix/Network Administrator
Information Technology Directorate
University of Westen Sydney, Campbelltown
Campbelltown NSW 2560

Phone: 61 02 4620-3098
Mobile: 0416 179 868
Email: [EMAIL PROTECTED]



tape media

2002-05-02 Thread Gerald Wichmann

2 questions..

1.  Is it possible to recover the data from a tape without the TSM
server? I.e. I give someone the tape with some data on it but do not give
them a copy of the TSM DB. My understanding is no, but I just wanted to
confirm there was no 3rd-party method of doing so, or some expensive IBM
service that might be able to do it in a worst-case scenario.
2.  Even if #1 isn't possible, I assume it's still possible to at least
read the tape bit by bit. While it may not be possible to reconstruct files
from the data on there, if the data is textual (text files, emails, etc.)
one could possibly read the data by displaying the bits in ASCII form.
Correct? Specifically, I'm speculating on what level of security there is in
an individual tape in terms of what someone could do with it if it has
sensitive data on it.
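
For point 2, the kind of thing I have in mind is that anyone with a
compatible drive could pull the raw blocks off and scan them for readable
text, roughly along these lines (device name and block size are
placeholders):

dd if=/dev/rmt/0 bs=256k | strings | more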

Gerald Wichmann
Sr. Systems Development Engineer
Zantaz, Inc.
925.598.3099 w
408.836.9062 c



TSM 4.2 differences

2002-05-02 Thread Gerald Wichmann

Does anyone have the TSM 4.2 differences PowerPoint presentation on what
changed from 4.1 to 4.2? Or could you point me to the proper place to look
that up? Thanks.

Gerald Wichmann
Sr. Systems Development Engineer
Zantaz, Inc.
925.598.3099 w
408.836.9062 c



dsmadmc -consolemode

2002-05-02 Thread Gerald Wichmann

Normally running dsmadmc -consolemode doesn't display any date/time stamp
with each message. Is it possible to make it do so such as what gets
displayed when you do a q act? I don't see anything in the guide so as far
as I can tell no..

Gerald Wichmann
Sr. Systems Development Engineer
Zantaz, Inc.
925.598.3099 w
408.836.9062 c



number of clients poll

2002-05-02 Thread Gerald Wichmann

I'm curious how big some of the TSM servers out there are, in terms of how
many clients your TSM server services (backs up) daily, and what kind of
network configuration you're using (GigE, EtherChannel, etc.). E.g. I've
seen many environments with 100-150 clients going across 100Mbps or
EtherChannel configurations. But in the TSM world, how big is *BIG*?

Gerald Wichmann
Sr. Systems Development Engineer
Zantaz, Inc.
925.598.3099 w
408.836.9062 c



Re: Recovery Log almost 100%

2002-05-01 Thread Gerald Wichmann

For starters, most people will define space triggers to automatically expand
their DB and recovery log before it reaches 100% (do a 'help define
spacetrigger' on the dsmadmc command line). What this does is let you set a
threshold (say 80%) at which a process gets kicked off to create a new
volume for your DB or recovery log. TSM then automatically does an extend on
your DB or recovery log and, voila, TSM has more space to work with. I've
always thought this was kind of a patch personally, because it insinuates
you have unused filesystem space sitting someplace for TSM to create volumes
in. Why not just make your recovery log that large in the first place if you
have the space?
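
For example (threshold, expansion percentage, and prefix path are
illustrative; see 'help define spacetrigger' for the exact parameters at
your server level):

define spacetrigger log fullpct=80 spaceexpansion=25 expansionprefix=/tsm/log/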

Secondly, if your database is in roll forward mode, performing a DB backup
will reset the recovery log to 0 utilization. There is a mechanism you can
implement to cause a DB backup to kick off anytime your recovery log reaches
a certain threshold. This is done by setting a database backup trigger and
is for when your database is in roll-forward mode (you didn't specify
whether it was or wasn't. See 'help dbbackuptrigger' or look up automating
database backups in the admin guide). You need to make sure your threshold
isn't so high that the recovery log fills up before the DB backup completes
(i.e. 98% threshold is unlikely to work).

Now despite all this, the recovery log still has a size limit (13GB), and
TSM only automatically expands it to 12GB. Furthermore, you may have your
recovery log on a filesystem that's even smaller, and TSM can't
automatically expand a filesystem; it can only create volumes on it. So it's
still possible to hit 100% one way or another. The key here is identifying
why it's occurring despite having the above in place, and addressing it. It
shouldn't really happen if you have the above in place and implemented
properly, if you think about it.

Gerald Wichmann
Sr. Systems Development Engineer
Zantaz, Inc.
925.598.3099 w
408.836.9062 c

-Original Message-
From: brian welsh [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, May 01, 2002 11:24 AM
To: [EMAIL PROTECTED]
Subject: Recovery Log almost 100%

Hello,

AIX 4.3.3 and server 4.1.1.0.

Last night two archive schedules had a problem. On the clients there were
files in a kind of loop and TSM tried to archive them. Result: recovery log
almost 100%. This was the first time our log has been that high. The problem
on the client is solved, but now I have the following question.

I was wondering how other people prevent the log from growing to 100%, and
how they handle it after the log has reached 100%.

Any tip is welcome.

Brian.





backup stgpool

2002-05-01 Thread Gerald Wichmann

Given an environment where you pretty much have constant backups occurring
to your TSM server 24x7 (say every 2 hours) and thus to your primary pools,
how does this affect copying the data to a copypool?

I.e. if my TSM server is currently accepting backups from one or more
clients, and that data is initially going to my diskpool (migrated to tape
as the diskpool fills), and now suddenly I kick off a backup stgpool to a
copypool from that diskpool. With files continuously coming into the
diskpool, will that be a problem for my copypool? I understand that perhaps
I may miss some of those incoming files and that's a potential issue, but it
doesn't concern me so long as they're caught in my next backup stgpool
command. I'm just thinking out loud about what potential problems, if any,
there are with doing a backup stgpool from a primary pool that is currently
receiving files. Offhand I can't think of any other than hitting the disks a
little harder and maybe not catching some of those incoming files. Also, I
suppose if your offsite pool has collocation enabled there might be some
interesting mounting/dismounting happening.
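
For reference, the command in question (pool names and process count are
placeholders):

backup stgpool DISKPOOL OFFSITEPOOL maxprocess=2 wait=no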

Gerald Wichmann
Sr. Systems Development Engineer
Zantaz, Inc.
925.598.3099 w
408.836.9062 c


