Re: REPAIR OCCUPANCY

2023-04-22 Thread Josh-Daniel Davis
I mean, it's formally in a whitepaper from 2018,
https://www.ibm.com/support/pages/repair-occupancy-repair-reporting-occupancy-storage-pool

and dsmserv was updated in 2021 to allow this to run on a container pool.
https://www.ibm.com/support/pages/apar/IT15373

It's not normal maintenance, but for a server that's been around a long
time, it should be fine to run it.
Just make sure you're not running expire, replicate, client backups, and
ideally not a DBB at the same time.
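A minimal sketch of that pre-flight check, assuming a configured dsmadmc admin session; the grep patterns and the "dsmadmc -id=admin ..." invocation are illustrative guesses, not an official list of conflicting activities:

```shell
#!/bin/sh
# Sketch only: refuse to run REPAIR OCCUPANCY unless the server looks
# quiet.  The patterns below are illustrative guesses at process and
# session names, not an official list.

# Returns success (0) when the supplied Q PROCESS / Q SESSION text
# shows none of the activities to avoid: expiration, replication,
# client backups, or a database backup.
quiet_enough() {
    ! printf '%s\n' "$1" | grep -Eiq 'expir|replicat|database backup|node.*backup'
}

activity="Space Reclamation"   # stand-in for real dsmadmc query output

if quiet_enough "$activity"; then
    echo "dsmadmc -id=admin ... 'repair occupancy SOMENODE'"   # placeholder node
else
    echo "server busy; try later"
fi
```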

With friendly Regards,
Josh-Daniel S. Davis



On Tue, Mar 21, 2023 at 8:01 AM Zoltan Forray  wrote:

> Linux Server 8.1.14.200
>
> Looking for feedback on this "undocumented" (i.e. not in a book but readily
> available via Google search) command.
>
> We have been seeing lots of wacky occupancy numbers, issues with nodes
> reporting something in stgpools that are empty, etc.  I stumbled upon
> REPAIR OCCUPANCY and it has corrected a lot of occupancy reporting issues.
>
> So far running it against 3-nodes has reduced "reporting occupancy" by over
> 10TB.  This is disconcerting since we do billing via occupancy numbers and
> this will make all of the numbers questionable!
>
> My question is, how safe is this command?  Does anyone else run REPAIR
> OCCUPANCY and if so, how frequently?
>
> --
> *Zoltan Forray*
> Enterprise Data Protection Administrator
> VMware Systems Administrator
> Enterprise Compute & Storage Platforms Team
> VCU Infrastructure Services
> zfor...@vcu.edu - 804-828-4807
> <https://www.credly.com/badges/131d9645-79f0-49ec-9f29-41f15900dca7/public_url>
>


Re: How to empty a deduped FILE DEVclass stgpool

2023-04-22 Thread Josh-Daniel Davis
Did you try with both RECONSTRUCT=YES and RECONSTRUCT=NO ?

Sometimes I have had to do that to get stubborn aggregates to move.

Also, what's the reusedelay on the pool?  Sometimes resetting this can help.

Lastly, I had a lot of stuck data that wouldn't go away, and 8.1.17.100
seemed to allow data to be fully purged.  It was cloud containers, which is
a whole different stack, but who knows.  8.1.17.100 was pretty stable for
us other than replication storage rules.

Worst case, DEL VOL DISCARDDATA=YES is always a sad option.
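If it helps, the whole drain sequence could be scripted roughly like this. The pool and volume names are placeholders, the script only prints the dsmadmc commands rather than running them, and DELETE VOLUME destroys data, so it stays last:

```shell
#!/bin/sh
# Sketch of the drain sequence for one stubborn volume.  TARGETPOOL
# and the arguments are placeholders; nothing is executed here, the
# dsmadmc commands are only printed for review.

drain_cmds() {
    pool="$1"; vol="$2"
    cat <<EOF
update stgpool $pool reusedelay=0
move data $vol stgpool=TARGETPOOL reconstruct=yes
move data $vol stgpool=TARGETPOOL reconstruct=no
delete volume $vol discarddata=yes
EOF
}

drain_cmds POWERVAULT_POOL_2 /powervault_pool_2/00061E69.BFS
```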

With friendly Regards,
Josh-Daniel S. Davis



On Wed, Mar 8, 2023 at 9:30 AM Zoltan Forray  wrote:

> We have an old Powervault that is having too many hardware issues (5-disks
> replaced in the past 2-weeks) and are trying to retire it.  It was being
> used as a low-activity (nextstgpool after offsite copies have been
> created), deduped FILE DEVclass.
>
> We have emptied it down to a few remaining volumes but can not get rid of
> those last 4-volumes.  We have performed dozens of "move data" (both to a
> different stgpool and same stgpool) "move nodedata", reclaim stgpool, etc
> but we always end up with messages like:
>
> 3/8/2023 10:19:15 AM ANR3246W Process 28246 skipped 2 files on volume
> /powervault_pool_2/00061E69.BFS because the files have been *deleted*.
>
> and nothing being moved.  Tried marking all volumes as READONLY but the
> moves simply recreated the existing volumes with the same unmovable/deleted
> objects.
>
> We have tried with both reconstructaggregates YES and NO.
>
> Occupancy by node shows crazy numbers (currently says this stgpool has a
> total *5.8TB* occupancy but only 4-partially used 120GB volumes remain).
>
> Tried running "restore stgpool preview=yes" with nothing to restore.
> Tried audit volume fix=no - again with nothing to fix!
>
> So what is the magic trick to completely empty this stgpool?
> --
> *Zoltan Forray*
> Enterprise Data Protection Administrator
> VMware Systems Administrator
> Enterprise Compute & Storage Platforms Team
> VCU Infrastructure Services
> zfor...@vcu.edu - 804-828-4807
>


dsmserv 8.1.18 connection latency

2023-04-22 Thread Josh-Daniel Davis
Anyone else running into connection latency on 8.1.18?

We were at 8.1.17.008 and 8.1.17.100, and came to 8.1.18.0 due to some
rollup locking patches (vs going to 8.1.17.015).

Now, we have pretty substantial hangs for client and dsmadmc connections.
During our normal backup window, it gets bad enough that SSL initialization
fails, and we end up with a lot of TDP failures (especially TDPO because of
how many sessions they create).

All the way back at 8.1.6 in 2018, we bumped our maxsessions to 1200
because we'd bump up against 800.  We'd been okay since then, until
the 8.1.18 update.  Now, at around 400 sessions, a new dsmadmc session from
localhost can take 2-3 minutes to prompt for password.

Verified it's not a DNS timeout.  Turned off DNSLOOKUP and no change.

Not a server specific thing.  We have a large number of AIX servers in more
than one datacenter, and we have the same issue on both.  We have the same
issue connecting to localhost.  Servers with fewer clients are better off.

AIX 7.2.5.5, 250GB of RAM, 20x POWER8 cores, full system partition.  CPU
peak is 45-55%, and the system is very responsive.  Peak network is now
around 550MB/sec.  I'm pretty sure we were more than double that before.
Lots of 10gbit ports.  Only 2% RAM free, but no paging activity, and
numclient is still 12% with a good number of LRUable pages.

No ssh latency.  No lag at the db2 layer.  DB is 3.6TB, but it's heavily
fragmented, and probably more like 2.8TB used if we did an offline reorg.
No DBB, expire, reclaim, etc. running during the backup window.  I do see
an automatic runstats on SD_CHUNK_LOCATIONS.

Anyway, we have a case with IBM, and are slogging through different trace
attempts.  It's just been an issue long enough, I was hoping to get some
commiseration if anyone else has noticed anything similar.

With friendly Regards,
Josh-Daniel S. Davis


Re: Files not rebinding as expected

2022-08-03 Thread Josh-Daniel Davis
For Windows systems, the filespaces are not stored as C: and D:.  They are
stored as UNC names such as \\hostname\c$.

The drawback here is that if the hostname ever got updated, you could have
C: backed up two or more times.  Q OCC will tell you what to expect.

When you run an incremental, it will not update filespaces which do not
exist at the time of the incremental.  That includes mismatched UNC names.

As to the binding:
* UPD NODE will not change the management class assigned to any backup
objects.
* If an object belongs to a management class that exists in both policy
domains, then EXPIRE INV will work as expected, using the new retention
plan.
* If an object belongs to a management class that does not exist in the
target policy domain's ACTIVE policyset, then EXPIRE will process as
DEFAULT, and the next incremental will rebind the objects to the default
management class.
* If the default management class is missing a copygroup, then the
retention grace period will be used.
* Directories are always bound to the management class with the longest
retention unless you use DIRMC to override it.
* If client-side inclexcl is used, that can override server-side depending
on settings.
* If the node does not have the client option set assigned, it will not be
used.
* ARCHIVE data is never rebound.  If the target is missing, then you get
DEFAULT.  If default is missing you get the grace period.
* TOCDEST is used for NAS (NDMP) and SnapMirror backups.
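As a mental model only (not TSM code verbatim), the binding rules above reduce to something like the following decision table; the inputs are simplified yes/no flags:

```shell
#!/bin/sh
# Rough decision table for the rebinding rules above.  This is a
# simplified mental model, not actual TSM behavior in every corner.

# rebind_outcome TYPE MC_IN_TARGET DEFAULT_HAS_COPYGROUP
#   TYPE is "backup" or "archive"; the other two args are yes/no.
rebind_outcome() {
    type="$1"; mc_present="$2"; default_has_cg="$3"
    if [ "$mc_present" = yes ]; then
        echo "keep class; expire under the new retention plan"
    elif [ "$default_has_cg" = no ]; then
        echo "retention grace period applies"
    elif [ "$type" = archive ]; then
        echo "managed as DEFAULT (archive is never rebound)"
    else
        echo "expires as DEFAULT; next incremental rebinds to default"
    fi
}

rebind_outcome backup no yes
```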

If this does not explain it, then it would help to get Q OCC output and also a
few records out of the BACKUPS table as examples of what you're seeing.

Also, the default line wrap is difficult to read.  Please post -tabdelimit
output or data that is in stanza format (one field per line) for readability.
Please do not omit ACTIVE when doing Q CO.  If you did not, then did you
forget to ACTIVATE POLICYSET?


With friendly Regards,
Josh-Daniel S. Davis



On Wed, Aug 3, 2022 at 3:28 AM Loon, Eric van (ITOP DI) - KLM <
eric-van.l...@klm.com> wrote:

> Hi Martin,
>
> The strange thing is that I also see active files still bound to the
> no-more-existing-in-new-domain management class... That should not be
> possible, I think. We are running full incremental backups only.
> I think I will open a case for it, I'll keep you all posted on the outcome.
>
> Kind regards,
> Eric van Loon
> Air France/KLM Core Infra
>
> -Original Message-
> From: ADSM: Dist Stor Manager  On Behalf Of Martin
> Janosik
> Sent: dinsdag 2 augustus 2022 13:18
> To: ADSM-L@VM.MARIST.EDU
> Subject: Re: Files not rebinding as expected
>
> Hi Eric,
>
> From my observation, updating a node to a new policy domain is instantaneous
> no matter how many files are stored under the node. There is no way that,
> during this instant operation, files are re-bound from a
> no-more-existing-in-new-domain management class to the new domain's default
> management class. That would explain question #3.
>
> Also:
> > if a user changes the binding in an include statement, then instead of
> > running a full incremental backup, only selective or
> > incremental-by-date backups are performed, then only active versions
> > will be rebound, while inactive versions will remain bound to the
> original management class.
>
> If you run anything other than a full incremental (in this case `dsmc incr C:
> D:`), then only newly backed up files are stored under the new management class.
>
> Expire inventory goes through all inactive files and if their MC matches
> with the MC defined in the policy domain, it expires data accordingly.
> If there is no matching MC present, objects will expire based on default
> management class (or grace retention period).
>
> Is the destination by any chance a directory pool? I *think* that deduped
> objects (=those referenced by multiple backup versions) are not necessarily
> re-bound.
>
> Martin Janosik
>
> -Original Message-
> From: ADSM: Dist Stor Manager  On Behalf Of Loon,
> Eric van (ITOP DI) - KLM
> Sent: utorok 2. augusta 2022 12:42
> To: ADSM-L@VM.MARIST.EDU
> Subject: [EXTERNAL] Re: [ADSM-L] Files not rebinding as expected
>
> Hi Skylar,
>
> Yes, this node is backed up every day...
>
> Kind regards,
> Eric van Loon
> Core Infra
>
> -Original Message-
> From: ADSM: Dist Stor Manager  On Behalf Of Skylar
> Thompson
> Sent: maandag 1 augustus 2022 17:43
> To: ADSM-L@VM.MARIST.EDU
> Subject: Re: Files not rebinding as expected
>
> Hi Eric,
>
> Have you run an incremental backup yet? You'll need to do that in order to
> get active versions rebound to the management classes in the nodes
> include/exclude rules.
>
> Inactive versions will stay bound to their old management class but, in
> the event of a domain change, I think will use either a management class
> with the same name in the new domain/policyset, or the domain default
> management class if that management class doesn't exist. I'm not sure how
> that would be reported with SQL or QUERY BACKUP, though.
>
> On Mon, Aug 01, 2022 at 11:51:08

8.1.13.0 storage rules buggy

2022-07-11 Thread Josh-Daniel Davis
Hey, all.  Just to resurrect this old thread, we ran into all of these same
issues, plus more.


POSSIBLY FIXED #1, My 8.1.13.100 servers do not seem to leave hung stgrule
target processes around.

Also, it looks like 8.1.13.012 has some additional fixes that help this.
The 012 patches are not 100% included in 8.1.13.100, which looks to be based
on 8.1.13.010.

These are the ones in 8.1.13.012 but not in 8.1.13.100:
* IT40506 - REPLICATION JOB TRIGGERED FROM A REPLICATION STORAGE RULE
LASTING BEYOND DEFINED DURATION OR APPEARING TO HANG
* IT40121 - REPLICATION STGRULE MAY ENCOUNTER A HANG WHEN THE
SDREPLTCRPHASE CHECKTHREAD THREAD EXITS EARLY
* IT39715 - COPY TO TAPE USING A STORAGE RULE FAILS WITH ERROR ANR0102E
* IT40973 - ANR1652E REPLICATION FAILED MESSAGE APPEARS AT THE END OF A
SUCCESSFUL REPLICATION STORAGE RULE
* IT40995 - A STGRULE OF AN ACTIONTYPE=COPY WON'T HONOR THE CANCEL PROCESS
COMMAND


WORKAROUND #2, The issue where START STGRULE hangs until resourcetimeout
and aborts is not resolved.  Relatedly, ongoing monitoring that does
things like QUERY NODE and QUERY BACKUPSET will also hang if the
STGRULE is hung.  By hung, I mean it never shows up in Q PROC, but if you ran
it from DSMADMC, it just sits there, never starting.

This one is related to client sessions holding table locks on the nodes
view.  So far, support says to either ensure there is a time slot with zero
client activity, or make an admin schedule/script that runs DISABLE SESS
CLIENT, then does some sort of delay to ensure client sessions are gone,
then runs START STGRULE, then ENABLE SESS.  Once the START STGRULE returns,
the hang risk is not really a problem anymore.

Note that REPLICATE NODE does not have this issue.  It's purely the stgrule
based replication that does this.  Still hoping for adjustments in lock
handling during stgrule start.
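A sketch of that admin-script workaround, with the dsmadmc invocation, the rule name RULE1, and the 300-second settle time all placeholders/guesses; the DRY_RUN guard keeps it from touching anything:

```shell
#!/bin/sh
# Sketch of the support-suggested sequence: quiesce client sessions,
# wait for in-flight sessions to drain, start the rule, re-enable.
# RULE1, the credentials, and the settle time are placeholders.

DRY_RUN=${DRY_RUN:-yes}

run() {
    if [ "$DRY_RUN" = yes ]; then
        echo "would run: $*"
    else
        dsmadmc -id=admin -password=XXX "$*"   # placeholder credentials
    fi
}

run "disable sessions client"
[ "$DRY_RUN" = yes ] || sleep 300   # let in-flight client sessions drain
run "start stgrule RULE1"
run "enable sessions client"
```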


FIXED #3, A key fix was IT40338, but really, most of 8.1.13.010 /
8.1.13.100 is based on stgrule fixes. We have a couple of issues fixed in
8.1.13.012 for target-side termination hangs, and those fixes are not in
8.1.13.100.  The issues were mostly related to busy servers where
Oracle logs are backed up throughout the day for shorter RPO.  This also
improved the issue where external monitoring sessions would hang and
accumulate in Q SES.  5 weeks in, and we have not had client hangs / slow
backups when replication has run or is running.


POSSIBLY FIXED #4, As to the tiering by filespace issue, once we put on
8.1.13.100, our draining pools have begun moving data again.  I still have
a lot of unmigrated data, but TIER STGPOOL counts are incrementing.


For context, the issues mostly seem related to our servers with DBs over
3TB.  This environment is around 4PB after dedupe, 15 ingest servers, plus
an old set of replicas, and a new set of replicas (in transition).

Some of our servers are active all the time, and some are particularly
large.  We ran into expiration issues and chunk deletion issues that left
our DBs pretty large, and fragmented.  Offline reorg takes too long.
70kiops SSD available for the DB and we still struggle with admin jobs.


With friendly Regards,
Josh-Daniel S. Davis

On 1/21/22 04:06:36 -0800 Michael Prix wrote:


Hello Eric,

customers of mine are seeing issues 1 and 2 also after applying 8.1.13 and have
tickets open. Issue 3 with some earlier version, but not presently with 8.1.13.

As for storage rule tiering, we have an interesting problem open, and IBM is,
after weeks, not denying nor approving the fact that there might be a problem.

We want to tier only specific filespaces of some nodes. This should be possible
by applying a notier rule with some tier subrules, but there is no way of
defining a subrule for a filespace, if it isn't a filespace containing a backup
of a VM. In the description of the stgrule definition, there is only one
sentence pointing to this possibility, but until now no confirmation from IBM,
that this might be the source of our problem.


--
Michael Prix

On 1/17/22 10:16 AM, Loon, Eric van (ITOP NS) - KLM wrote:

Hi everybody,

I have recently upgraded my servers to 8.1.13.0 so that I could replace the
(bad performing) protect stgpool and replicate nodes with the new storage rule
replication. I found it to be very buggy. I ran into several very weird issues:

1) When a replication is canceled on the source server, the inbound
replication process on the target server isn't ending, which doesn't allow one
to start a new replication. Every new replication results in the error:
"ANR3875E START STGRULE: A previous replication storage rule is processing on
QVIP6, wait the process to complete". The only way out of this state is
bouncing one of the servers.

2) Replication sometimes hangs without doing anything. A cancel
replication results in the above.

3) I also have been called twice with complaints from customers that
their backups were not running. The

Re: Tru64 BAC

2006-04-25 Thread Josh-Daniel Davis

Mario,
Google and the IBM FTP site both have info.
Here's the last release of the Tru64 client:
ftp://ftp.software.ibm.com/storage/tivoli-storage-management/maintenance/client/v5r1/Tru64UNIX/v517

This client is no longer under development.


From what I understand, you'll still get howto/usage support on it as long
as you're using a supported level of TSM Server; however, if defects or
compatibility problems arise, no further fixes would be provided.

-Josh

On 06.04.05 at 07:26 [EMAIL PROTECTED] wrote:


Date: Wed, 5 Apr 2006 07:26:27 -0700
From: Mario Behring <[EMAIL PROTECTED]>
To: ADSM-L@VM.MARIST.EDU
Subject: Tru64 BAC

Hi list,

Does TSM 5.3.X support any agent for a Tru64 OS running on a Digital machine?
I did not find any information about it in the readmes.

Thanks.

Mario



Re: Rebinding image snapshots

2006-04-25 Thread Josh-Daniel Davis

Incrementals and Images are not the same type of objects inside TSM.

I don't see anything in the docs that indicates whether rebinding can occur
for image backups.

If rebinding an image snapshot is possible, it will only happen when you
take a new snapshot.


-Josh
On 06.04.05 at 09:44 [EMAIL PROTECTED] wrote:


Date: Wed, 5 Apr 2006 09:44:07 +0100
From: "Large, M (Matthew)" <[EMAIL PROTECTED]>
Reply-To: "ADSM: Dist Stor Manager" 
To: ADSM-L@VM.MARIST.EDU
Subject: Rebinding image snapshots

Hi All,

After some internal TSM reorg I need to rebind some images to another
management class, and after setting these in the options file

  INCLUDE.IMAGE F: KVAULT imagetype=snapshot
  INCLUDE.IMAGE G: KVAULT imagetype=snapshot
  INCLUDE.IMAGE H: KVAULT imagetype=snapshot
  INCLUDE.IMAGE I: KVAULT imagetype=snapshot
  INCLUDE.IMAGE J: KVAULT imagetype=snapshot
  INCLUDE.IMAGE K: KVAULT imagetype=snapshot

To force the g drive below:

  Image Size  Stored Size  FSType  Backup Date          Mgmt Class  A/I  Image Name
  ----------  -----------  ------  -------------------  ----------  ---  ---------------
1  279.38 GB    193.67 GB  NTFS    04/05/2006 00:39:16  KVAULT      A    \\zsts127001\f$
2  351.56 GB    334.44 GB  NTFS    09/30/2005 18:41:04  DEFAULT     A    \\zsts127001\g$  <---
3  449.21 GB    432.99 GB  NTFS    09/30/2005 22:39:07  DEFAULT     A    \\zsts127001\h$
4  500.00 GB    461.00 GB  NTFS    12/05/2005 23:41:29              A    \\zsts127001\i$
5  363.49 GB    350.76 GB  NTFS    03/08/2006 00:42:02              A    \\zsts127001\j$
6  664.27 GB    194.27 GB  NTFS    04/05/2006 00:39:33  KVAULT      A    \\zsts127001\k$

To rebind to the KVAULT mgmtclass, I had expected a manual backup of the
drive to force a rebind, but having examined the output above after the
operation I discover nothing has changed. I'm sure I asked the machine
owner to restart the services, but I would have thought the manual
backup process rereads the options file anyway before processing any
files.

Any normal file rebind would have worked this way I'm sure of it.

Can anyone see what's gone wrong, or (in the words of BBC2 star Terry
Wogan) is it me?

Many Thanks,
Matthew

TSM Consultant
ADMIN ITI
Rabobank International
1 Queenhithe, London
EC4V 3RL

0044 207 809 3665





Re: 5.3.2 server

2006-04-25 Thread Josh-Daniel Davis

Matt,
I didn't see further replies to this so I thought I'd add my experience
recently.

I have two servers I've been chasing locks/hangs on, mostly related to
REPAIR STGVOL.

One server would hang up every other day.

I waited for a month to get a patch, which was against 5.3.2.4; however,
in haste, I went to 5.3.3.0 which disables the automatic execution of
REPAIR STGVOL by reclamation.

That server has been stable, without hangs, for 2 weeks.

Now, when things are slow/idle, I can manually run REPAIR STGVOL. I get
about 100 volumes per hour processed. Eventually it will abort due to some
other process holding a lock. When I restart it, it picks up where it left
off. I only have 1000 volumes left to go.

My other server is still at 5.3.2.3. I will skip to 5.3.3 and manually get
the REPAIRs done during known idle times.

I have the eFix for 5.3.2.4. It prevents REPAIR STGVOL from running when
other processes are running.

If you really need the fix, then TSM support could get you to L2 who can
hook you up with the eFix, but I would recommend 5.3.3 and manual runs
during idle times.

Once you're caught up, you really shouldn't have to worry about it for a
while since it's repairing orphans created at 5.1.5 through 5.1.7.1 during
multithreaded diskpool operations.


-Josh

On 06.04.03 at 11:38 [EMAIL PROTECTED] wrote:


Date: Mon, 3 Apr 2006 11:38:57 -0400
From: Matthew Glanville <[EMAIL PROTECTED]>
Reply-To: "ADSM: Dist Stor Manager" 
To: ADSM-L@VM.MARIST.EDU
Subject: Re: 5.3.2 server

I'm still having 5.3.2.3 repair volume hanging problems when it repairs
offsite volumes during reclamation.

Matthew Glanville
Eastman Kodak Company
Worldwide Information Systems (WWIS)
343 State Street
Rochester NY 14650
585-477-9371




Re: Schedule start delayed

2006-04-21 Thread Josh-Daniel Davis

Schedule randomization.
Run Q STATUS; the setting is in the middle of the output.

On 06.04.18 at 09:19 [EMAIL PROTECTED] wrote:


Date: Tue, 18 Apr 2006 09:19:56 -0600
From: Andrew Raibeck <[EMAIL PROTECTED]>
Reply-To: "ADSM: Dist Stor Manager" 
To: ADSM-L@VM.MARIST.EDU
Subject: Re: Schedule start delayed

It would certainly help to see some more specifics. For example, the
dsmsched.log file for one of the clients in question that did not kick off
when expected, just to verify that, for sure, it is using prompted
scheduling; and the activity log from the scheduled start time and on, to
see what the server was doing, as well as other things like the client
dsm.opt file for the aforementioned client, and the client option set
definition for that client.

Server-side randomization has no effect on prompted scheduling.

Regards,

Andy

Andy Raibeck
IBM Software Group
Tivoli Storage Manager Client Development
Internal Notes e-mail: Andrew Raibeck/Tucson/[EMAIL PROTECTED]
Internet e-mail: [EMAIL PROTECTED]

IBM Tivoli Storage Manager support web page:
http://www-306.ibm.com/software/sysmgmt/products/support/IBMTivoliStorageManager.html

The only dumb question is the one that goes unasked.
The command line is your friend.
"Good enough" is the enemy of excellence.

"ADSM: Dist Stor Manager"  wrote on 2006-04-17
22:13:06:


I'm trying to remember what it is about a backup schedule that, even though
scheduled for say 8PM, does not kick off right away; nodes show pending for
some time. I've been comparing the 5.2 and 5.3 server settings but can't
find any obvious differences, yet the 5.2 server kicks off each schedule
right at the specific time whereas the 5.3 server I'm putting up does not.

I have a clopt set that has schedmode as prompted, which is the same on both
servers, and scheduling modes is set to any, same on both servers.

What am I missing that I thought I'd taken care of? I'd actually prefer to
have these guys start right away.



Thanks,



Geoff Gill

TSM Administrator

PeopleSoft Sr. Systems Administrator

SAIC M/S-G1b

(858)826-4062

Email:   [EMAIL PROTECTED]




Re: collocation groups

2006-04-21 Thread Josh-Daniel Davis

Directories are owned by the nodes they belong to.
So, if you're using a DIRMC pool with long retention,
and you implement collocation or colloc groups,
then the directories will be collocated the same as files.


On 06.04.20 at 13:29 [EMAIL PROTECTED] wrote:


Date: Thu, 20 Apr 2006 13:29:04 -0400
From: Allen S. Rout <[EMAIL PROTECTED]>
Reply-To: "ADSM: Dist Stor Manager" 
To: ADSM-L@VM.MARIST.EDU
Subject: Re: collocation groups


On Wed, 19 Apr 2006 16:50:56 -0700, "Gill, Geoffrey L." <[EMAIL PROTECTED]> 
said:



If 10 nodes are in one group will it only mount one tape even if no other
tape drives are busy?


Yes; one stream per collocgroup, just like you would get one stream
per node. (even if you're collocating by filespace, he grumpily
grumped)



If I have a directory disk pool that is cached will it still go to
one tape after migration or will collocation groups affect that?
What I see leads me to believe the answer is yes. I am seeing
multiple tapes for the directory tape pool with little to no
data. Do folks still use this option?


[no clue]


Why do tapes that are not full show capacity of 381GB yet when they
go full only show about 200GB? Do I need to force compression on
with the drives or something? (LTO2's, 3584 Lib)


The unfull tapes just parrot back to you the Estimated Capacity you
set on the device class.

If you're getting 200G on a LTO2, might you have set the devclass
FORMAT to be something other than DRIVE? It sure sounds like you're
writing uncompressed.



- Allen S. Rout



Re: 3584 library

2006-04-21 Thread Josh-Daniel Davis

This could cause a call-home.


From the front panel, or from the web GUI of the library, you should be
able to have it do a library inventory.  It takes about 60 seconds per
frame.

On 06.04.20 at 15:11 [EMAIL PROTECTED] wrote:


Date: Thu, 20 Apr 2006 15:11:21 -0400
From: David E Ehresman <[EMAIL PROTECTED]>
Reply-To: "ADSM: Dist Stor Manager" 
To: ADSM-L@VM.MARIST.EDU
Subject: Re: 3584 library


I don't think I understand how inventory & audit works. When you do

Audit library 3584lib

checkl=barcode should I see the robotics move and read each and every

tape slot and resysnc with

TSM data base ? how do I get these back in sync


On a 3584, an "audit library 3584lib checkl=barcode" does NOT cause the
robotics to move and read the barcodes.  It causes TSM to compare its
inventory with the 3584 library's inventory, which the library has stored
in memory.  So you first need to cause the 3584 to do an audit (opening
& closing the front door will do this) and then run the TSM AUDIT LIBRARY
command.

David



Re: Tivoli DB limit

2006-04-21 Thread Josh-Daniel Davis

I've never heard of a 100million file limitation to TSM.

The limits are 13.5GB for the log and 512GB for the TSM DB.

It's not DB2 Lite; rather, it's more like a port of the 1980s version of DB2
that was part of MVS.

Dave Cannon, in a 2003 TSM symposium, said they were considering decoupling
the database and using a more current DB2 implementation.  Apparently
they're still "considering" it, but it's unknown whether they'll actually do
this or not.  There are technical limitations in modern DB2, specifically
the lack of a bit-vector data type or equivalent.

Even so, the DB I'm working with is 105 million files at about 150GB.  The
limit here is CPU and I/O of the box to be able to process that many
objects in daily admin jobs.

For your server, you could look at:
Image backup
Journaling (if it's Windows)
The VIRTUALMOUNTPOINT option (if UNIX)
MEMORYEFFICIENTBACKUP YES and also RESOURCEUTILIZATION 10.

These last two will tell it to process one directory at a time, but split out
into 5 producers and 5 consumers.  This will get things moving faster in
the beginning, and give it a lot of oomph, but at 50 million files on one
client, you're still looking at some serious time for any backup solution.

Another option would be to use an incremental by date.  This is much
faster, as it just compares the mod date of the file to the last backup.
The drawback is that deleted files won't be expired, and management class
rebinding won't occur.  You could still use this daily, then maybe every
10 days use a regular incremental.
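For reference, the client options mentioned above would land in the options file roughly like this (full option names; the values and the mountpoint path are illustrative, not recommendations):

```
* dsm.opt / dsm.sys sketch -- values illustrative
MEMORYEFFICIENTBACKUP YES
RESOURCEUTILIZATION   10
* UNIX only: split a huge subtree into its own filespace
VIRTUALMOUNTPOINT     /bigfs/projects
```

The incremental-by-date pattern would then be `dsmc incremental -incrbydate` daily, with a plain `dsmc incremental` every 10 days or so.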

-Josh


On 06.04.20 at 16:28 [EMAIL PROTECTED] wrote:


Date: Thu, 20 Apr 2006 16:28:20 -0500
From: Gaurav Marwaha <[EMAIL PROTECTED]>
Reply-To: "ADSM: Dist Stor Manager" 
To: ADSM-L@VM.MARIST.EDU
Subject: Tivoli DB limit

Hi,


Problem:
We have a huge file system with about 30 to 50 million files to be
backed up, the incremental backup does the job, but takes too long to
complete, out of these 50 million files, only 2 or so actually change. So
the scanning runs sometimes into 24 hour window and the next scheduled backup
starts without actually completing the previous one.

I found the filelist parameter, where you can specify what to back up; we can
use this, as we know from the database which files were changed.

Someone tells me that the Tivoli DB can take only 100 million objects for
tracking, and that filelist might not be a correct way to do it. He says there
is DB2 Lite running behind TSM and that has this limit?

In this scenario what is the best approach and is there is a limit at all?
Even in normal incremental operation how does TSM scan the include directory
list, I mean even when it runs normal incremental doesn't that 100 million
limit still exist?

Thank you in advance
Gaurav M



Re: 3584 - determining available clean cycles

2006-04-21 Thread Josh-Daniel Davis

If the library is hooked up to ethernet, you could lynx in and pull the
cleaning info from there.

On 06.04.20 at 18:38 [EMAIL PROTECTED] wrote:


Date: Thu, 20 Apr 2006 18:38:52 -0400
From: Jim Zajkowski <[EMAIL PROTECTED]>
Reply-To: "ADSM: Dist Stor Manager" 
To: ADSM-L@VM.MARIST.EDU
Subject: Re: 3584 - determining available clean cycles

On Thu, 20 Apr 2006, Richard Rhodes wrote:


Is there a way to tell from AIX the number of cleaning cycles remaining in
a 3584?


We ended up using TSM-managed, ASNEEDED cleaning, so that it could monitor
our tapes.


The web interface shows that there are 10 tapes, each with 50 cycles
available - total=500.


BTW, in the three years we've had our 3584 I don't think our four drives
have needed to be cleaned more than five times each...

--Jim



Re: 3584 - determining available clean cycles

2006-04-21 Thread Josh-Daniel Davis

for the newer 1/2" tapes (LTO included), cleaning frequency is pretty
rare, except in cases where you have genuinely dirty tapes.

We have 36 drives, and they clean about once every year or two.
I've forced cleaning #2 on several drives because they got I/O errors and
our vault returns tapes with lots of external contaminants (dirt, grass
bits, etc from hauling tubs around in the back of a box-truck, loading
docks, etc).

We have pretty heavy utilization too.


On 06.04.20 at 18:46 [EMAIL PROTECTED] wrote:


Date: Thu, 20 Apr 2006 18:46:59 -0400
From: Richard Rhodes <[EMAIL PROTECTED]>
Reply-To: "ADSM: Dist Stor Manager" 
To: ADSM-L@VM.MARIST.EDU
Subject: Re: 3584 - determining available clean cycles

Our 3494 lib with 3590H drives get cleaned about twice a day each.  The new
3584
has 3592 drives . . . we're not sure what to expect for cleaning frequency.

Rick





Jim Zajkowski <[EMAIL PROTECTED]> wrote to ADSM-L@VM.MARIST.EDU on 04/20/2006 06:38 PM:






On Thu, 20 Apr 2006, Richard Rhodes wrote:


Is there a way to tell from AIX the number of cleaning cycles remaining

in

a 3584?


We ended up using TSM-managed, ASNEEDED cleaning, so that it could monitor
our tapes.


The web interface shows that there are 10 tapes, each with 50 cycles
available - total=500.


BTW, in the three years we've had our 3584 I don't think our four drives
have needed to be cleaned more than five times each...

--Jim



-
The information contained in this message is intended only for the
personal and confidential use of the recipient(s) named above. If
the reader of this message is not the intended recipient or an
agent responsible for delivering it to the intended recipient, you
are hereby notified that you have received this document in error
and that any review, dissemination, distribution, or copying of
this message is strictly prohibited. If you have received this
communication in error, please notify us immediately, and delete
the original message.



Re: hidden flags to EXPIRE INV and CLEANUP EXPTABLE

2006-04-21 Thread Josh-Daniel Davis

Correction:
In the SHOW NODE against the subkeys, the KEY is the NODE_NAME.  Field 1
is still the node number and field2 is PLATFORM_NAME.

Also, beware of using SHOW NODE on wrong or random pages.

On 06.04.20 at 22:58 [EMAIL PROTECTED] wrote:


Date: Thu, 20 Apr 2006 22:58:34 -0500
From: Josh Davis <[EMAIL PROTECTED]>
Reply-To: "ADSM: Dist Stor Manager" 
To: ADSM-L@VM.MARIST.EDU
Subject: hidden flags to EXPIRE INV and CLEANUP EXPTABLE

UNDOCUMENTED OPTIONS FOR EXPIRE INVENTORY:
There are two undocumented/unsupported options for EXPIRE INV;
BEGINNODEID and ENDNODEID.

These accept the decimal node number of a node and can be used to
expire a specific node's filespaces, or a specific range.



WHY WOULD YOU EVER WANT TO USE THESE?
EXPIRE INV won't check for filespace lock before parsing a filespace.

As such, if you're running expiration, and a node is backing up a
filespace when expire inventory gets to it, expire inventory will wait
indefinitely.

When this happens, CANCEL EXPIRATION or CANCEL PROC will register as
"Cancel Pending" but will hang there until the lock is released.

Officially there's supposed to be a resource timeout, but IBM wasn't
able to give details on how long this is.




HOW TO FIND THE NODE NUMBERS:
Node numbers are sequential, starting at 1, and are in REG_TIME order.
Deletions leave gaps.

The short way would be a SELECT statement.  Supposedly this can be
done, but I couldn't figure out the column name.  IBM doesn't like to
give info regarding undocumented/unsupported options since that might
make them liable to support or defend them in the future.

The long way is to use SHOW commands.  Use SHOW OBJDIR to find the
btree node for the Nodes table.  This SHOULD be 38.

SHOW NODE 38 (hopefully) will show the top level of the tree.
On average, there are about 11 second-level leaf nodes per first level
leaf node.

If you do SHOW NODE on each subtree, and save these to a file, you'll
have the raw data for the nodes table.

In the data section, field 1 is the node number in hex, and field 2 is
the node name in all-caps ascii.



OTHER USES FOR THE NODEID:
This can be used with SHOW LOCKS and SHOW THREADS to find out which
node is holding the lock preventing expire inv from continuing.  From
there, you can kill a session so that expire inv can continue or be
cancelled.

This can also be converted to decimal so you can run EXPIRE INV
BEGINNODE=10 ENDNODE=20 or similar to operate only on a specific subset
of nodes.  This could be used to avoid nodes which have long-running
transactions, to quick-expire a huge bunch of data that was just
deleted, or to set up scheduled expirations for heavy-expire nodes.

These same flags work on CLEANUP EXPTABLE.  Since there is no way to
cancel CLEANUP EXPTABLE, running it on a small subset of nodes can
help if you suspect you're not expiring all that you should be, but
don't want to risk having to shut down TSM to abort it when you're 50
million objects in, a week after you started.
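To illustrate the subset idea, a minimal sketch (not an official tool) that splits a node-id range into batches for piecewise runs; the batch size here is arbitrary:

```python
# Sketch: split a node-id range into small batches so CLEANUP
# EXPTABLE (or EXPIRE INV) can be run piecewise with
# BEGINNODEID/ENDNODEID instead of in one uncancellable pass.

def node_batches(first, last, size):
    """Yield (begin, end) node-id pairs covering first..last inclusive."""
    begin = first
    while begin <= last:
        end = min(begin + size - 1, last)
        yield begin, end
        begin = end + 1

cmds = ["CLEANUP EXPTABLE BEGINNODEID=%d ENDNODEID=%d" % (b, e)
        for b, e in node_batches(1, 25, 10)]
# covers ids 1-10, 11-20, 21-25
```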


WHY I'M SHARING THIS INFO:
I've opened a DCR requesting EXPIRE INVENTORY be given an option to
allow detection and skipping of locked filespaces, and that it should
be implemented without killing expiration or the session/process
holding the filespace lock.

The FITS request number is MR0420061821 if you or anyone wants to be
added to the notify/me-too list for this.

If your sales rep doesn't know where/how to get to FITS, it's on
D03DB004.boulder.ibm.com.  I think it's under m_dir (marketing).

This was way longer than I anticipated, but seemed useful enough to
risk sharing.

--
Josh



Re: Reclamation process

2006-04-07 Thread Josh-Daniel Davis

The biggest problem is that TSM doesn't sort reclaimable tapes prior to
assigning them to threads.  You get the best efficiency by reclaiming the
emptiest tapes first.

You could use something like this:
UPD STG FOOBAR1 RECLAIM=95 RECLAIMPR=5
Wait a while
UPD STG FOOBAR1 RECLAIM=90 RECLAIMPR=5
Wait a while
UPD STG FOOBAR1 RECLAIM=85 RECLAIMPR=5
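The stepping above can be scripted; here is a minimal sketch that just generates the command strings (pool name and thresholds are placeholders) to feed to dsmadmc, with the pause between steps left to the caller:

```python
# Sketch of the threshold-stepping idea: emit UPD STG commands that
# walk the reclaim threshold down, so the emptiest tapes get
# reclaimed first.  Pool name and step values are placeholders.

def reclaim_steps(pool, start=95, stop=85, step=5):
    pct = start
    while pct >= stop:
        yield "UPD STG %s RECLAIM=%d RECLAIMPR=5" % (pool, pct)
        pct -= step

for cmd in reclaim_steps("FOOBAR1"):
    print(cmd)  # feed to dsmadmc, waiting a while between steps
```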

Technically, 5.3.0.0 also has a RECLAIM STG command; however, there are
defects in it that can cause hangs and other problems at 5.3 base.

-Josh


On 06.04.07 at 10:47 [EMAIL PROTECTED] wrote:


Date: Fri, 7 Apr 2006 10:47:43 -0700
From: Vats.Ashok <[EMAIL PROTECTED]>
Reply-To: "ADSM: Dist Stor Manager" 
To: ADSM-L@VM.MARIST.EDU
Subject: Re: Reclamation process

We are running TSM 5.3.0.0 server on AIX 5.3.0 with 10 LTO drives in a 3584
library. We have 10 LTO storage pools (sequential). We always have two prime
stg tape pools in the library and we rotate 3 offsite tape pools on a weekly
basis. We are always very low on scratch tapes, and we run reclamation using a TSM
admin schedule. We would like to optimize to take full advantage of the 10 drives, and
would also appreciate it if TSMers would share any BKM (Best Known Methods) for
the reclamation process. Each pool is roughly 100 tapes.

Thanks
Ashok



Re: TSM DB backup question

2006-04-07 Thread Josh-Daniel Davis

Plus, there's reusedelay on a storage pool, which prevents 100% expired
volumes from becoming scratch for X number of days.


On 06.04.07 at 11:44 [EMAIL PROTECTED] wrote:


Date: Fri, 7 Apr 2006 11:44:20 -0500
From: Rajesh Oak <[EMAIL PROTECTED]>
Reply-To: "ADSM: Dist Stor Manager" 
To: ADSM-L@VM.MARIST.EDU
Subject: Re: TSM DB backup question

The data on the offsite volumes does not physically change during reclamation.
The pointers to the data change in the database, which is then backed up as a
new db backup.
There are a number of reasons to keep older database backups, but that's another
discussion.

Rajesh


- Original Message -
From: "Lawrence Clark" <[EMAIL PROTECTED]>
To: ADSM-L@VM.MARIST.EDU
Subject: Re: TSM DB backup question
Date: Thu, 6 Apr 2006 14:31:21 -0400


Is the DB backup of any use after the next db backup is taken?  The data
on the volumes changes, data is expired, and tapes are reclaimed... so
what is the value of older db backups?

We back up the db to disk and ftp a copy over to a remote site.

Our backup data is kept in one tape library and the copypool in a
separate library, both at separate sites. So if our main site has a
fire, the backup data is available from the separate site. If that
backup site has a fire, the copypool in the main site can be used to
recreate the backup storage pools.




[EMAIL PROTECTED] 04/06/2006 2:05:53 PM >>>

For our local sites we vault daily, i.e. we send copypool tapes to the
vault daily. For our remote sites, since we do not have any local vault
person, we have a contract with a company and the copypool tapes are
sent to the vault once a week.
The DR tapes should be sent every day if possible; otherwise, in a real
disaster you would lose over a week's data if you send DR tapes once a
week.

Rajesh


- Original Message -
From: "Vats.Ashok" <[EMAIL PROTECTED]>
To: ADSM-L@VM.MARIST.EDU
Subject: Re: TSM DB backup question
Date: Wed, 5 Apr 2006 14:25:06 -0700


hi Rajesh,
What do you mean by not vaulting daily?  We send DR tapes once a week.

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of Rajesh Oak
Sent: Wednesday, April 05, 2006 2:20 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] TSM DB backup question


Ashok,
For our remote sites where we have a small library and do not vault
every day, we back up the database to disk. You have to be a little
careful here, because with your 161 GB database you would need a lot
of disk space to keep the db backups for a number of days.
The device class has to be FILE and you have to make the size big
enough when you define it.

Let me know if you have any issues.

Rajesh


- Original Message -
From: "Remeta, Mark" <[EMAIL PROTECTED]>
To: ADSM-L@VM.MARIST.EDU
Subject: Re: TSM DB backup question
Date: Wed, 5 Apr 2006 14:05:28 -0400


The other factor to consider is that you should have the delay period for
volume reuse on your tape storage pools set equal to the length of time you
keep your database backups. Depending on how often you perform reclamation
of those pools, these could add up too!

Mark


-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of Prather, Wanda
Sent: Wednesday, April 05, 2006 1:53 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: TSM DB backup question


The factor to decide:

If you somehow had a TSM data base crash, or corruption in the TSM data
base, how far would you be willing to restore your data base backwards
in time?

If you think you would never use that 30 day old TSM DB backup, you
don't need to keep it.
I think a DB retention of 5-10 days is more common.

And no, you can't put the TSM DB backups in a storage pool.


-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of Vats.Ashok
Sent: Wednesday, April 05, 2006 1:39 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: TSM DB backup question


Hi List,
We have tsm 5.3.00 on AIX. We run a TSM database backup once every day,
which consumes media from the scratch pool. We keep these backups for at
least 30 days, consuming almost 30 * 200 GB tape media. Our database is
about 161 GB, so my question is: can we reduce the retention time of the
TSM db backup to 7 days? I am not sure what factors decide how long we
need to keep these volumes around. Also, does anybody have a storage pool
defined for TSM DB backups? Is it possible to do that?

Thanks,
Ashok

Confidentiality Note: The information transmitted is intended only for the
person or entity to whom or which it is addressed and may contain
confidential and/or privileged material. Any review, retransmission,
dissemination or other use of this information by persons or entities other
than the intended recipient is prohibited. If you receive this in error,
please delete this material immediately.
Please be advised that someone other than the intended recipients, including
a third-party in the Seligman organ

Re: CLEANUP EXPTABLE / SHOW VERIFYEXPTABLE

2006-04-07 Thread Josh-Daniel Davis
CLEANUP EXPTABLE cleans up the expiring.objects table (node 78). This 
helps with orphans and other objects which were supposed to be marked as 
expired, but never were, and therefore are skipped by EXPIRE INV.


On a 7026-M80, 4x SSA-160 serving the DB & Log, 4x 750MHz RS64-IV, and 100 
million files in occupancy, runtime would have been 7.5 days.


During the run, it sets the Expiration Running flag to true, which 
prevents EXPIRE INV from running, so we were chewing up tapes.



On 06.04.07 at 06:29 [EMAIL PROTECTED] wrote:


Date: Fri, 7 Apr 2006 06:29:54 -0700
From: Andrew Carlson <[EMAIL PROTECTED]>
Reply-To: "ADSM: Dist Stor Manager" 
To: ADSM-L@VM.MARIST.EDU
Subject: Re: CLEANUP EXPTABLE / SHOW VERIFYEXPTABLE

What do these commands do?

--- Josh-Daniel Davis <[EMAIL PROTECTED]> wrote:


Does anyone know how to tell how big the expiration table is?

The reason is that I ran CLEANUP EXPTABLE on Monday.
On one of my servers, it finished up almost immediately.
On the other, it's been running for almost 3 days.


Because of this, when I try to run EXPIRE INV, I get:

tsm: SERVER>expire inv
ANS8001I Return code 4.


tsm: SERVER>q act begint=-00:01
04/06/06 14:43:34 ANR2017I Administrator OPERATOR issued command:
EXPIRE
INVENTORY (SESSION: 239372)
04/06/06 14:43:34 ANR4298I Expiration thread already processing -
unable
to begin another expiration process. (SESSION: 239372)
04/06/06 14:43:34 ANR2017I Administrator OPERATOR issued command:
ROLLBACK
(SESSION: 239372)


It doesn't show up in Q PROC, and tracing IM* and more only shows
failure
to obtain the lock.


I know it's running because of SHOW THREAD and Q ACT.


SHOW THREAD shows this:

Thread 129: ImVerifyExpTabThread
  tid=33076, ktid=2588793, ptid=0, det=1, zomb=0, join=0, result=0,
sess=0
   Awaiting cond waitP->waiting (0x18d5ffe20), using mutex TMV->mutex

(0x111b091f8), in tmLock (0x100041a08)
   Stack trace:
 0x09382554 _cond_wait_global
 0x09382f64 _cond_wait
 0x09383a2c pthread_cond_wait
 0x0001d91c pkWaitCondition
 0x000100041a10 tmLock
 0x00010016cc5c ImLockFsId
 0x00010016cafc ImLockFileSpace
 0x00010067f168 LockFilespace
 0x000100681710 ImVerifyExpTabThread
 0x0001e9dc StartThread
 0x0936c50c _pthread_body



Q ACT has been showing many many of these message pairs:

04/06/06 13:19:04 CLEANUP EXPTABLE:   resetting 'hasactive',
objId=0:574058714  (SESSION: 134418)

04/06/06 13:19:04 CLEANUP EXPTABLE:   'HasActive' flag set incorrectly
for objId=0:574877220 (\ADSM.SYS\CLUSTERDB), nodeName=NODENAME,
fsName=\\nodename\c$  (SESSION: 134418)

There are 125000 lines of this in the actlog in the last 67 hours.

The object IDs are not in order, so I have no idea how much longer
it's
going to run.

I'm assuming it will parse the entire Expiring.Objects table.


SHOW OBJDIR
... Expiring.Objects(78)


SHOW NODE 78
It's a b-tree root node with 99 subnodes.
It's 4 levels deep, and each node has a different number of children.
MaxCapacity is 1004, so potentially 1004^4.
Manually traversing the tree isn't feasible.


SHOW TREE Expiring.Objects
This just hangs for a long time.


I'm leaving it running, redirected to an outfile, but it's been 10
mins
for both the node that completed the CLEANUP quickly and the node
that
didn't.

When expiration was OK, it would take 6-10 hours with SKIPD=YES and
up to
a day and a half with SKIPD=NO, vs 4 hours on the "good" node.

There's no CANCEL CLEANUP or similar.

I'm hesitant to kill off the server and restart it simply because of
the
number of objects it's correcting.

So, should I just wait for the SHOW TREE to complete, or is there some
other, faster, simpler way to see?


Thanks for any assistance.

-Josh-Daniel Davis



Andy Carlson
---
Gamecube:$150,PSO:$50,Broadband Adapter: $35, Hunters License: $8.95/month,
The feeling of seeing the red box with the item you want in it:Priceless.


Re: CLEANUP EXPTABLE / SHOW VERIFYEXPTABLE

2006-04-06 Thread Josh-Daniel Davis

OK, I finally figured it out.
It's processing by NODES.REG_TIME, then by FSID.
I have 75 of 479 nodes left.
I'll look in my occupancy extracts to see how much more there is.


On 06.04.06 at 16:55 [EMAIL PROTECTED] wrote:


Date: Thu, 6 Apr 2006 16:55:50 -0500
From: Josh-Daniel Davis <[EMAIL PROTECTED]>
Reply-To: "ADSM: Dist Stor Manager" 
To: ADSM-L@VM.MARIST.EDU
Subject: Re: CLEANUP EXPTABLE / SHOW VERIFYEXPTABLE

Turns out, I can't wait for SHOW TREE.
I gave it an hour, but it held a heavy lock (I didn't pull SHOW LOCK)
and it prevented some simple Q commands, specifically, Q FI.

I thought maybe the SHOW IMV output might help, but I don't know what HWM
means:
  Last Object Id : 0 580917567
   HWM Object Id : 0 580918273
HWM Compression Object Id : 0 577239041
...


I guess it's feasible that we have 13 million objects;
however, the actlog doesn't say anything about the OK objects, just the
fixed ones.

So, again, any clue as to how to find out how long to expect this to run
would be helpful.

-Josh

On 06.04.06 at 15:03 [EMAIL PROTECTED] wrote:


Date: Thu, 6 Apr 2006 15:03:01 -0500
From: Josh-Daniel Davis <[EMAIL PROTECTED]>
Reply-To: "ADSM: Dist Stor Manager" 
To: ADSM-L@VM.MARIST.EDU
Subject: CLEANUP EXPTABLE / SHOW VERIFYEXPTABLE

Does anyone know how to tell how big the expiration table is?

The reason is that I ran CLEANUP EXPTABLE on Monday.
On one of my servers, it finished up almost immediately.
On the other, it's been running for almost 3 days.


Because of this, when I try to run EXPIRE INV, I get:

tsm: SERVER>expire inv
ANS8001I Return code 4.


tsm: SERVER>q act begint=-00:01
04/06/06 14:43:34 ANR2017I Administrator OPERATOR issued command: EXPIRE
  INVENTORY (SESSION: 239372)
04/06/06 14:43:34 ANR4298I Expiration thread already processing - unable
  to begin another expiration process. (SESSION: 239372)
04/06/06 14:43:34 ANR2017I Administrator OPERATOR issued command: ROLLBACK
  (SESSION: 239372)


It doesn't show up in Q PROC, and tracing IM* and more only shows failure
to
obtain the lock.


I know it's running because of SHOW THREAD and Q ACT.


SHOW THREAD shows this:

Thread 129: ImVerifyExpTabThread
tid=33076, ktid=2588793, ptid=0, det=1, zomb=0, join=0, result=0, sess=0
 Awaiting cond waitP->waiting (0x18d5ffe20), using mutex TMV->mutex
(0x111b091f8), in tmLock (0x100041a08)
 Stack trace:
   0x09382554 _cond_wait_global
   0x09382f64 _cond_wait
   0x09383a2c pthread_cond_wait
   0x0001d91c pkWaitCondition
   0x000100041a10 tmLock
   0x00010016cc5c ImLockFsId
   0x00010016cafc ImLockFileSpace
   0x00010067f168 LockFilespace
   0x000100681710 ImVerifyExpTabThread
   0x0001e9dc StartThread
   0x0936c50c _pthread_body



Q ACT has been showing many many of these message pairs:

04/06/06 13:19:04 CLEANUP EXPTABLE:   resetting 'hasactive',
objId=0:574058714  (SESSION: 134418)

04/06/06 13:19:04 CLEANUP EXPTABLE:   'HasActive' flag set incorrectly
for objId=0:574877220 (\ADSM.SYS\CLUSTERDB), nodeName=NODENAME,
fsName=\\nodename\c$  (SESSION: 134418)

There are 125000 lines of this in the actlog in the last 67 hours.

The object IDs are not in order, so I have no idea how much longer it's
going
to run.

I'm assuming it will parse the entire Expiring.Objects table.


SHOW OBJDIR
... Expiring.Objects(78)


SHOW NODE 78
It's a b-tree root node with 99 subnodes.
It's 4 levels deep, and each node has a different number of children.
MaxCapacity is 1004, so potentially 1004^4.
Manually traversing the tree isn't feasible.


SHOW TREE Expiring.Objects
This just hangs for a long time.


I'm leaving it running, redirected to an outfile, but it's been 10 mins for
both the node that completed the CLEANUP quickly and the node that didn't.

When expiration was OK, it would take 6-10 hours with SKIPD=YES and up to a
day and a half with SKIPD=NO, vs 4 hours on the "good" node.

There's no CANCEL CLEANUP or similar.

I'm hesitant to kill off the server and restart it simply because of the
number of objects it's correcting.

So, should I just wait for the SHOW TREE to complete, or is there some
other, faster, simpler way to see?


Thanks for any assistance.

-Josh-Daniel Davis




Re: CLEANUP EXPTABLE / SHOW VERIFYEXPTABLE

2006-04-06 Thread Josh-Daniel Davis

Turns out, I can't wait for SHOW TREE.
I gave it an hour, but it held a heavy lock (I didn't pull SHOW LOCK)
and it prevented some simple Q commands, specifically, Q FI.

I thought maybe the SHOW IMV output might help, but I don't know what HWM
means:
   Last Object Id : 0 580917567
HWM Object Id : 0 580918273
HWM Compression Object Id : 0 577239041
...


I guess it's feasible that we have 13 million objects;
however, the actlog doesn't say anything about the OK objects, just the
fixed ones.

So, again, any clue as to how to find out how long to expect this to run
would be helpful.

-Josh

On 06.04.06 at 15:03 [EMAIL PROTECTED] wrote:


Date: Thu, 6 Apr 2006 15:03:01 -0500
From: Josh-Daniel Davis <[EMAIL PROTECTED]>
Reply-To: "ADSM: Dist Stor Manager" 
To: ADSM-L@VM.MARIST.EDU
Subject: CLEANUP EXPTABLE / SHOW VERIFYEXPTABLE

Does anyone know how to tell how big the expiration table is?

The reason is that I ran CLEANUP EXPTABLE on Monday.
On one of my servers, it finished up almost immediately.
On the other, it's been running for almost 3 days.


Because of this, when I try to run EXPIRE INV, I get:

tsm: SERVER>expire inv
ANS8001I Return code 4.


tsm: SERVER>q act begint=-00:01
04/06/06 14:43:34 ANR2017I Administrator OPERATOR issued command: EXPIRE
  INVENTORY (SESSION: 239372)
04/06/06 14:43:34 ANR4298I Expiration thread already processing - unable
  to begin another expiration process. (SESSION: 239372)
04/06/06 14:43:34 ANR2017I Administrator OPERATOR issued command: ROLLBACK
  (SESSION: 239372)


It doesn't show up in Q PROC, and tracing IM* and more only shows failure to
obtain the lock.


I know it's running because of SHOW THREAD and Q ACT.


SHOW THREAD shows this:

Thread 129: ImVerifyExpTabThread
tid=33076, ktid=2588793, ptid=0, det=1, zomb=0, join=0, result=0, sess=0
 Awaiting cond waitP->waiting (0x18d5ffe20), using mutex TMV->mutex
(0x111b091f8), in tmLock (0x100041a08)
 Stack trace:
   0x09382554 _cond_wait_global
   0x09382f64 _cond_wait
   0x09383a2c pthread_cond_wait
   0x0001d91c pkWaitCondition
   0x000100041a10 tmLock
   0x00010016cc5c ImLockFsId
   0x00010016cafc ImLockFileSpace
   0x00010067f168 LockFilespace
   0x000100681710 ImVerifyExpTabThread
   0x0001e9dc StartThread
   0x0936c50c _pthread_body



Q ACT has been showing many many of these message pairs:

04/06/06 13:19:04 CLEANUP EXPTABLE:   resetting 'hasactive',
objId=0:574058714  (SESSION: 134418)

04/06/06 13:19:04 CLEANUP EXPTABLE:   'HasActive' flag set incorrectly
for objId=0:574877220 (\ADSM.SYS\CLUSTERDB), nodeName=NODENAME,
fsName=\\nodename\c$  (SESSION: 134418)

There are 125000 lines of this in the actlog in the last 67 hours.

The object IDs are not in order, so I have no idea how much longer it's going
to run.

I'm assuming it will parse the entire Expiring.Objects table.


SHOW OBJDIR
... Expiring.Objects(78)


SHOW NODE 78
It's a b-tree root node with 99 subnodes.
It's 4 levels deep, and each node has a different number of children.
MaxCapacity is 1004, so potentially 1004^4.
Manually traversing the tree isn't feasible.


SHOW TREE Expiring.Objects
This just hangs for a long time.


I'm leaving it running, redirected to an outfile, but it's been 10 mins for
both the node that completed the CLEANUP quickly and the node that didn't.

When expiration was OK, it would take 6-10 hours with SKIPD=YES and up to a
day and a half with SKIPD=NO, vs 4 hours on the "good" node.

There's no CANCEL CLEANUP or similar.

I'm hesitant to kill off the server and restart it simply because of the
number of objects it's correcting.

So, should I just wait for the SHOW TREE to complete, or is there some
other, faster, simpler way to see?


Thanks for any assistance.

-Josh-Daniel Davis


Re: DSM.OPT file

2006-04-06 Thread Josh-Daniel Davis

If it's by hostname, then it should be vaguely round-robin, if your DNS
has multiple address records.  All connections for the client would use the
IP chosen by that client's resolver libraries.

You could also try using etherchannel.  If set up correctly, you would
simply have two NICs with the same IP.  If you used src/dest port hashing,
you could get a rough semblance of load balancing across them.  This would
require a switch that supports etherchannel, and an OS or device drivers
that support it.

Another option would be to upgrade to gigabit if you're 100mbit, or to
10gigabit if you're already gigabit.

Finally, you could just run 2 networks, and the second network could be
for a small number of large clients, and the first would serve everyone
else.
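A quick way to see what a client-side resolver gets for a multi-homed hostname is sketched below. This is generic Python, not TSM client behavior, and "localhost" stands in for the server's DNS name:

```python
# Illustrative sketch: list every IPv4 address DNS publishes for a
# hostname.  Which one the TSM client actually connects to is up to
# its resolver libraries, so a DNS name gives rough round-robin,
# not deterministic NIC pinning.
import socket

def resolve_all(hostname):
    """Return the distinct IPv4 addresses published for hostname."""
    infos = socket.getaddrinfo(hostname, None, socket.AF_INET,
                               socket.SOCK_STREAM)
    return sorted({info[4][0] for info in infos})

addrs = resolve_all("localhost")  # stand-in for the TSM server name
```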

On 06.04.06 at 15:50 [EMAIL PROTECTED] wrote:


Date: Thu, 6 Apr 2006 15:50:41 -0400
From: "Tyree, David" <[EMAIL PROTECTED]>
Reply-To: "ADSM: Dist Stor Manager" 
To: ADSM-L@VM.MARIST.EDU
Subject: DSM.OPT file

   I've got a question about the usage of the
"TCPSERVERADDRESS" line in the dsm.opt file on the clients.

   Currently I have the IP address of the TSM server listed in
the dsm.opt file on the clients. We will be implementing a new
addressing scheme for all of our servers in a couple of months and that
includes the TSM server itself.

   In our case the DNS name of the server happens to match the
name of the TSM server instance. Since our TSM server only has one NIC
and thus only one IP address I went ahead and changed the line to show
the DNS name of the TSM server instead of the IP on a few of the
clients. The DNS name of the TSM server will not be changing, only the
IP address. I wasn't sure at the time if the TSM client would work with
the DNS name instead, but I have since found in the TSM docs that it would,
and found that it actually did work just fine.

   The next issue is that we are thinking about adding another
NIC in the TSM at some point in the future to help split the load on the
network. At that point the TSM server will then have two IP's. Any
clients that I want to have come into the TSM server on the second NIC
would have to have the IP of the second NIC in its dsm.opt file. That
part makes complete sense.

   But what about any clients that I have with the DNS name of
the TSM server in the dsm.opt file instead of the IP address? Which NIC
would they connect to? If I absolutely wanted to ensure that the clients
came into the TSM server on the right NIC I would make sure I had the
right IP listed in that clients options file. But what would happen if I
left the DNS entry in the options file?





David Tyree
Enterprise Backup Administrator
South Georgia Medical Center
229.333.1155

Confidential Notice:  This e-mail message, including any attachments, is
for the sole use of the intended recipient(s) and may contain
confidential and privileged information.  Any unauthorized review, use,
disclosure or distribution is prohibited.  If you are not the intended
recipient, please contact the sender by reply e-mail and destroy all
copies of the original message.





CLEANUP EXPTABLE / SHOW VERIFYEXPTABLE

2006-04-06 Thread Josh-Daniel Davis

Does anyone know how to tell how big the expiration table is?

The reason is that I ran CLEANUP EXPTABLE on Monday.
On one of my servers, it finished up almost immediately.
On the other, it's been running for almost 3 days.


Because of this, when I try to run EXPIRE INV, I get:

tsm: SERVER>expire inv
ANS8001I Return code 4.


tsm: SERVER>q act begint=-00:01
04/06/06 14:43:34 ANR2017I Administrator OPERATOR issued command: EXPIRE
   INVENTORY (SESSION: 239372)
04/06/06 14:43:34 ANR4298I Expiration thread already processing - unable
   to begin another expiration process. (SESSION: 239372)
04/06/06 14:43:34 ANR2017I Administrator OPERATOR issued command: ROLLBACK
   (SESSION: 239372)


It doesn't show up in Q PROC, and tracing IM* and more only shows failure 
to obtain the lock.



I know it's running because of SHOW THREAD and Q ACT.


SHOW THREAD shows this:

Thread 129: ImVerifyExpTabThread
 tid=33076, ktid=2588793, ptid=0, det=1, zomb=0, join=0, result=0, sess=0
  Awaiting cond waitP->waiting (0x18d5ffe20), using mutex TMV->mutex 
(0x111b091f8), in tmLock (0x100041a08)

  Stack trace:
0x09382554 _cond_wait_global
0x09382f64 _cond_wait
0x09383a2c pthread_cond_wait
0x0001d91c pkWaitCondition
0x000100041a10 tmLock
0x00010016cc5c ImLockFsId
0x00010016cafc ImLockFileSpace
0x00010067f168 LockFilespace
0x000100681710 ImVerifyExpTabThread
0x0001e9dc StartThread
0x0936c50c _pthread_body



Q ACT has been showing many many of these message pairs:

04/06/06 13:19:04 CLEANUP EXPTABLE:   resetting 'hasactive', 
objId=0:574058714  (SESSION: 134418)


04/06/06 13:19:04 CLEANUP EXPTABLE:   'HasActive' flag set incorrectly 
for objId=0:574877220 (\ADSM.SYS\CLUSTERDB),
nodeName=NODENAME, fsName=\\nodename\c$  (SESSION: 134418)


There are 125000 lines of this in the actlog in the last 67 hours.

The object IDs are not in order, so I have no idea how much longer it's 
going to run.


I'm assuming it will parse the entire Expiring.Objects table.


SHOW OBJDIR
... Expiring.Objects(78)


SHOW NODE 78
It's a b-tree root node with 99 subnodes.
It's 4 levels deep, and each node has a different number of children.
MaxCapacity is 1004, so potentially 1004^4.
Manually traversing the tree isn't feasible.


SHOW TREE Expiring.Objects
This just hangs for a long time.


I'm leaving it running, redirected to an outfile, but it's been 10 mins 
for both the node that completed the CLEANUP quickly and the node that 
didn't.


When expiration was OK, it would take 6-10 hours with SKIPD=YES and up to 
a day and a half with SKIPD=NO, vs 4 hours on the "good" node.


There's no CANCEL CLEANUP or similar.

I'm hesitant to kill off the server and restart it simply because of the 
number of objects it's correcting.


So, should I just wait for the SHOW TREE to complete, or is there some
other, faster, simpler way to see?



Thanks for any assistance.

-Josh-Daniel Davis

Re: finding compression % stats on TDP clients

2006-03-10 Thread Josh-Daniel Davis

Oops, I completely disregarded "TDP".

If you can get the admin to grant you SQL authority (ANALYST I think), you
can select from SUMMARY which will show start/end times plus bytes
received.  Then you could divide that by the bytes sent to get your
compression ratio.

There's nowhere to get network rate if the client doesn't report it.

-Josh

On 06.03.10 at 15:54 [EMAIL PROTECTED] wrote:


Date: Fri, 10 Mar 2006 15:54:41 -0500
From: "Schaub, Steve" <[EMAIL PROTECTED]>
Reply-To: "ADSM: Dist Stor Manager" 
To: ADSM-L@VM.MARIST.EDU
Subject: finding compression % stats on TDP clients

TSM serv 5.2.2.0 & 5.2.4.2
TDP on SQL & Exchange 5.2.1.0

I have compression enabled on all my TDP clients, and want to know how
much I am getting, for daily reports.  The TDP message gives the
pre-compressed size, but not the post-compressed size or percent compressed.
Is this in another message?  I don't have access to the accounting log
file (I only manage the clients and a limited part of the TSM Server).

While we're at it, I would like to know the network throughput as well.
Basically, I want everything for my TDP clients that I get for the BA
clients.


03/10/06 12:08:26 ANE4991I (Session: 213809, Node: MUIR_DS)  TDP
MSExchg
  ACN3516 Data Protection for Exchange: Backup of
server
  MUIR is complete.   Total storage groups backed
up: 1
  Total bytes transferred: 58408413428   Elapsed
processing
  time: 4096.74 Secs   Throughput rate: 13923.15
Kb/Sec
  (SESSION: 213809)



Steve Schaub
Systems Engineer, WNI
BlueCross BlueShield of Tennessee
423-752-6574 (desk)
423-785-7347 (cell)
***public***

Please see the following link for the BlueCross BlueShield of Tennessee E-mail
disclaimer:  http://www.bcbst.com/email_disclaimer.shtm



Re: finding compression % stats on TDP clients

2006-03-10 Thread Josh-Daniel Davis

If you have access to the server, you should be able to pull
Q ACT BEGINT= ENDT= BEGIND= ENDD= SEARCH=

The message numbers to search on start with ANE49xxI where xx is one of
these:
03/10/06   18:00:11  ANE4952I (Session: 40949, Node: DEADBEEF)  Total
   number of objects inspected:3 (SESSION: 40949)

03/10/06   18:00:11  ANE4953I (Session: 40949, Node: DEADBEEF)  Total
   number of objects archived: 3 (SESSION: 40949)

03/10/06   18:00:11  ANE4958I (Session: 40949, Node: DEADBEEF)  Total
   number of objects updated:  0 (SESSION: 40949)

03/10/06   18:00:11  ANE4960I (Session: 40949, Node: DEADBEEF)  Total
   number of objects rebound:  0 (SESSION: 40949)

03/10/06   18:00:11  ANE4957I (Session: 40949, Node: DEADBEEF)  Total
   number of objects deleted:  0 (SESSION: 40949)

03/10/06   18:00:11  ANE4970I (Session: 40949, Node: DEADBEEF)  Total
   number of objects expired:  0 (SESSION: 40949)

03/10/06   18:00:11  ANE4959I (Session: 40949, Node: DEADBEEF)  Total
   number of objects failed:   0 (SESSION: 40949)

03/10/06   18:00:11  ANE4961I (Session: 40949, Node: DEADBEEF)  Total
   number of bytes transferred: 45.13 KB (SESSION: 40949)

03/10/06   18:00:11  ANE4963I (Session: 40949, Node: DEADBEEF)  Data
   transfer time:   0.00 sec (SESSION: 40949)

03/10/06   18:00:11  ANE4966I (Session: 40949, Node: DEADBEEF)
   Network data transfer rate:146,550.95 KB/sec
   (SESSION: 40949)

03/10/06   18:00:11  ANE4967I (Session: 40949, Node: DEADBEEF)
   Aggregate data transfer rate: 22.35 KB/sec
   (SESSION: 40949)

03/10/06   18:00:11  ANE4968I (Session: 40949, Node: DEADBEEF)
   Objects compressed by:   98% (SESSION: 40949)

03/10/06   18:00:11  ANE4964I (Session: 40949, Node: DEADBEEF)
   Elapsed processing time:00:00:02 (SESSION: 40949)

03/10/06   18:00:11  ANR0403I Session 40949 ended for node DEADBEEF
 (SUN SOLARIS). (SESSION: 40949)
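If you can capture that activity-log text, the ANE49xx summary lines are easy to scrape with a small script. A minimal sketch in Python; the line layout and regex are assumptions based on the sample output above (wrapped continuation lines, `Label:   value` pairs), so adjust them to match your server's formatting:

```python
import re

# Summary message numbers (from the listing above) mapped to field names.
# Values are parsed as bare numbers; units such as KB are ignored.
FIELDS = {
    "ANE4961I": "bytes_transferred",
    "ANE4966I": "network_rate",
    "ANE4968I": "pct_compressed",
}

# Matches e.g. "ANE4968I (Session: 40949, Node: DEADBEEF) ... by:   98%"
LINE_RE = re.compile(
    r"(ANE49\d\dI) \(Session: \d+, Node: (\S+?)\)\s+(.*?):\s*([\d.,]+)"
)

def parse_actlog(text):
    """Return {node: {field: numeric value}} from pasted actlog text."""
    # Rejoin wrapped log lines (continuations start with whitespace).
    flat = re.sub(r"\n\s+", " ", text)
    stats = {}
    for msg, node, _label, value in LINE_RE.findall(flat):
        if msg in FIELDS:
            stats.setdefault(node, {})[FIELDS[msg]] = float(value.replace(",", ""))
    return stats
```

From there it is a short step to a daily report that divides bytes transferred by elapsed time per node.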




On 06.03.10 at 15:54 [EMAIL PROTECTED] wrote:


Date: Fri, 10 Mar 2006 15:54:41 -0500
From: "Schaub, Steve" <[EMAIL PROTECTED]>
Reply-To: "ADSM: Dist Stor Manager" 
To: ADSM-L@VM.MARIST.EDU
Subject: finding compression % stats on TDP clients

TSM serv 5.2.2.0 & 5.2.4.2
TDP on SQL & Exchange 5.2.1.0

I have compression enabled on all my TDP client, and want to know how
much I am getting, for daily reports.  The TDP message gives the
pre-compressed size, but not the post-compressed or percent compressed.
Is this in another message?  I don't have access to the accounting log
file (I only manage the clients and a limited part of the TSM Server).

While we're at it, I would like to know the network throughput as well.
Basically, I want everything for my TDP clients that I get for the BA
clients.


03/10/06 12:08:26 ANE4991I (Session: 213809, Node: MUIR_DS)  TDP
MSExchg
  ACN3516 Data Protection for Exchange: Backup of
server
  MUIR is complete.   Total storage groups backed
up: 1
  Total bytes transferred: 58408413428   Elapsed
processing
  time: 4096.74 Secs   Throughput rate: 13923.15
Kb/Sec
  (SESSION: 213809)



Steve Schaub
Systems Engineer, WNI
BlueCross BlueShield of Tennessee
423-752-6574 (desk)
423-785-7347 (cell)
***public***




Re: What table is the "q drive" WWN and Serial number stored in?

2006-03-10 Thread Josh-Daniel Davis

These are pulled by the server during startup and stored in temporary
tables that are inaccessible by SQL commands.

You can get the WWN from SHOW LIBR.

-Josh
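Since the WWN and serial number aren't exposed through the SQL interface, one workaround is to scrape the `q drive ... f=d` output itself. A hedged sketch; the field labels are taken from the sample output quoted in this thread and may differ on other server levels:

```python
import re

def drive_ids(text):
    """Extract WWN and Serial Number from 'q drive <lib> <drive> f=d' output.

    Lines may be prefixed with '*' as in the sample output, e.g.:
    *  WWN: 500507630001F012
    """
    ids = {}
    for label in ("WWN", "Serial Number"):
        m = re.search(rf"^\s*\*?\s*{label}:\s*(\S+)", text, re.MULTILINE)
        if m:
            ids[label] = m.group(1)
    return ids
```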



On 06.03.10 at 07:56 [EMAIL PROTECTED] wrote:


Date: Fri, 10 Mar 2006 07:56:52 -0800
From: T. Lists <[EMAIL PROTECTED]>
Reply-To: "ADSM: Dist Stor Manager" 
To: ADSM-L@VM.MARIST.EDU
Subject: What table is the "q drive"  WWN and Serial number stored in?

Q drive gives a WWN and serial number, however if you
just select from the DRIVES table you don't get that.
What table is that information stored in?

tsm: TSM02>q drive * drive01 f=d

   Library Name: 3584LIB
   Drive Name: DRIVE01
   Device Type: LTO
   On-Line: Yes
   Read Formats:
ULTRIUM2C,ULTRIUM2,ULTRIUMC,ULTRIUM
   Write Formats:
ULTRIUM2C,ULTRIUM2,ULTRIUMC,ULTRIUM
   Element: 270
   Drive State: EMPTY
   Allocated to:
*  WWN: 500507630001F012
*  Serial Number: 9110108472
   Last Update by (administrator): STACY
   Last Update Date/Time: 03/09/06   16:16:17
   Cleaning Frequency
(Gigabytes/ASNEEDED/NONE): NONE


tsm: TSM02>select * from drives where
drive_name='DRIVE01'

 LIBRARY_NAME: 3584LIB
   DRIVE_NAME: DRIVE01
  DEVICE_TYPE: LTO
   ONLINE: YES
 READ_FORMATS: ULTRIUM2C,ULTRIU
WRITE_FORMATS: ULTRIUM2C,ULTRIU
  ELEMENT: 270
 ACS_DRIVE_ID:
  DRIVE_STATE: EMPTY
 ALLOCATED_TO:
LAST_UPDATE_BY: STACY
  LAST_UPDATE: 2006-03-09 16:16:17.00
   CLEAN_FREQ:
 DRIVE_SERIAL:



__
Do You Yahoo!?
Tired of spam?  Yahoo! Mail has the best spam protection around
http://mail.yahoo.com



Re: Limits to TSM Reporting Tool?

2006-03-10 Thread Josh-Daniel Davis

Are they all to the same server?  I find that operational reporting tends
to be pretty resource intensive on the server.  I've run into lock issues
that required killing sessions to free up.

If you have several TSM servers, you might try disabling specific reports
to see if things are OK on all except certain servers (specifically, ones
with fewer resources available).

-Josh

On 06.03.10 at 10:22 [EMAIL PROTECTED] wrote:


Date: Fri, 10 Mar 2006 10:22:55 -0500
From: Dennis Melburn W IT743 <[EMAIL PROTECTED]>
Reply-To: "ADSM: Dist Stor Manager" 
To: ADSM-L@VM.MARIST.EDU
Subject: Limits to TSM Reporting Tool?

Is there a limit to the number of reports (aka containers) that the
reporting tool can handle?  I've been seeing problems with the reporting
tool not working at all as soon as pass the 60 mark.  Anyone else seeing
this?  Any way to increase this threshold or is this a limitation in the
software?


Mel Dennis



Re: Weekly/Monthly -Backupsets running long

2006-03-10 Thread Josh-Daniel Davis

I would recommend creating a second copygroup and a private storage pool
for that node.  You'd probably want to give it a DB snapshot of its own
also.

If you NEED backupsets, then you definitely need the source pool to be
collocated.  If you have to, it could be by group with all of your other
nodes in one group and this one in its own.  That would at least save
media mount times on recreates.

Your other option is EXPORT NODE using FROMDATE and FSID.  You can't
restore it directly from the client, but it can still be imported to a
different TSM server.  Also, trying to do incrementals this way would not
clear out files which had been deleted between exports, so in a recovery,
you'd be left with stray files.

-Josh



On 06.03.10 at 10:34 [EMAIL PROTECTED] wrote:


Date: Fri, 10 Mar 2006 10:34:39 -0500
From: Timothy Hughes <[EMAIL PROTECTED]>
Reply-To: "ADSM: Dist Stor Manager" 
To: ADSM-L@VM.MARIST.EDU
Subject: Re: Weekly/Monthly -Backupsets running long

Mark,

Thanks. These backup sets are for a Novell client; they are still
currently using ArcServe for their full weekly backup of a POI
volume on a Novell OS server until we get ours working
correctly.  I believe they do this because this volume is very
important and holds many GroupWise user files, and they use
the backup sets for archival/restores.

So I assume this means there is no way to shorten these? They are
causing tape problems with Oracle clients on the weekend.

Thanks again!

Mark Stapleton wrote:


"ADSM: Dist Stor Manager"  wrote on 03/10/2006
09:05:13 AM:

I have Backup Sets that run extremely long these Backup Sets
backup only 1 file space. It seems the Backup Sets are backing
up the same Data plus the new Data it's like doing a Full backup
every week. Is there a command or setting that I can implement
to ensure that the Backup Set only backs up ONLY the NEW
Data? Or is there another way lessen the time on these
Backup Sets.


By definition, a backupset creates a copy of the most recent version of
*every* active file in a given server (or a filespace within that server).

Sorry.

As a matter of curiosity, why do you create regular backupsets? Do you
ever use them?

--
Mark Stapleton ([EMAIL PROTECTED])
MR Backup and Recovery Management
262.790.3190

--
Electronic Privacy Notice. This e-mail, and any attachments, contains 
information that is, or may be, covered by electronic communications privacy 
laws, and is also confidential and proprietary in nature. If you are not the 
intended recipient, please be advised that you are legally prohibited from 
retaining, using, copying, distributing, or otherwise disclosing this 
information in any manner. Instead, please reply to the sender that you have 
received this communication in error, and then immediately delete it. Thank you 
in advance for your cooperation.
==




Re: Move Node Data from one Policy to another Policy

2006-03-10 Thread Josh-Daniel Davis

I want to clarify.
You said from one "policy" to another?

MOVE NODEDATA moves the data between storage pools only.
This will not affect your retention policy of those files.

To change the policy domain:
UPD NODE DOM=newdomainname

If you're just looking to move the data, but not change policy:
Export and import (not fun)
OR
Create a sequential storage pool of type FILE
OR
Create a new storage pool using device class of your tape library

-Josh

On 06.03.10 at 10:23 [EMAIL PROTECTED] wrote:


Date: Fri, 10 Mar 2006 10:23:48 +0100
From: Christiane Kühn <[EMAIL PROTECTED]>
Reply-To: "ADSM: Dist Stor Manager" 
To: ADSM-L@VM.MARIST.EDU
Subject: Move Node Data  from one Policy  to another Policy

Hello,
I use a Windows TSM server 3.2.2.
I have some policies and some storage pools.
My storage pools are defined as disk.

My aim is to move one client, with all its backup data, from one policy to
another policy.


move nodedata sgn_selene fromstgpool=diskpool1 tostgpool=sgn_pc
I get the message:  move Nodedata: storage pool  is not a sequential 
pool


Is there any other way to achieve this? I know I could allocate the
client to a new policy and delete the old backup files of the client.

Christiane






Re: TsmManager hardware requirements

2006-03-10 Thread Josh-Daniel Davis

TSM Manager is pretty lightweight.  Most of its actual load will be
incurred on the TSM server as queries are made.  I'd think a $250 CompUSA
special would do the trick nicely.

Admin Center's biggest issue is that it runs on the Integrated Service
Console, which is a WebSphere implementation.  We know this as "gratuitous
waste of resources", which still incurs load on the TSM server.

On 06.03.09 at 15:56 [EMAIL PROTECTED] wrote:


Date: Thu, 9 Mar 2006 15:56:36 -0500
From: Thomas Denier <[EMAIL PROTECTED]>
Reply-To: "ADSM: Dist Stor Manager" 
To: ADSM-L@VM.MARIST.EDU
Subject: TsmManager hardware requirements

We are planning for an upgrade to TSM 5.3, and currently have $5,400.00
budgeted for a server with enough memory and throughput to support the
Administration Center. We are looking into TsmManager as an alternative to
the Administration Center. The vendor's Web site lists the supported
operating systems, but I have so far not found any information on
requirements for memory and processor speed. Is this information available
somewhere?



Re: ANR2997W Causing Transaction Failure

2006-03-10 Thread Josh-Daniel Davis

The delay should have just been a block and not an abort.

The "data transfer interrupted" supposedly means there was an error or
abort while trying to write to the storage media.

I'd look in the operating system error logs, and for more context in the
actlog.


On 06.03.09 at 09:28 [EMAIL PROTECTED] wrote:


Date: Thu, 9 Mar 2006 09:28:52 -0600
From: Andrew Carlson <[EMAIL PROTECTED]>
Reply-To: "ADSM: Dist Stor Manager" 
To: ADSM-L@VM.MARIST.EDU
Subject: ANR2997W Causing Transaction Failure

Has anyone ever seen ANR2997W cause a transacton failure?  Here is a
snippet of my log:

03/08/06   14:39:10  ANR2997W The server log is 80 percent full. The
server
will delay transactions by 3 milliseconds.
(SESSION:
152212, PROCESS: 1586)
03/08/06   14:39:10  ANR0524W Transaction failed for session 143862
for node
UPBCCLND02 (DB2/6000) -  data transfer
interrupted.
(SESSION: 143862)
03/08/06   14:39:10  ANR0514I Session 143862 closed volume 110721.
(SESSION:
143862)


This session is a very long running session and is probably the one that
had the log pinned.  I am running TSM 5.3.2.3 on AIX 5.2.5.  My recovery
log is 12,284MB.  Thanks for any information.



Re: Q STG hangs during reclamation / dsmserv process hung

2006-03-07 Thread Josh-Daniel Davis

ENV: AIX 5.2 ML7+, TSM 5.3.2.3, 7026-M80, 3584 12xL2
-
PROBLEM: Does anyone know of a way to kill a migration from inside TSM
 without marking the destination pool read-only? I'm really
 trying to avoid external processes.
-
ACTION TAKEN: admin scripts previously used RECLAIM STG WAIT=YES
  and MIGRATE STG WAIT=YES.
RESULT: Q STG would hang, Some TSM Server crashes, and various other
lock issues.
-
ACTION TAKEN: I rewrote my admin scripts to use UPD STG commands again.

RESULT: The painfully visible lock issues are gone.
Migration just keeps going until it reaches the LOwmig threshold in
effect at the time the process started.
-
Migration is disabled at 5:55am.
At 6:05am, BA STG started.
I get these sorts of messages at least daily:
   2006-03-06 07:55:58.00 ANR0379W A server database deadlock
  situation has been encountered; the lock request for the af bitfile
  root lock, will be denied to resolve the deadlock.
   2006-03-06 07:55:58.00 ANR1181E aftxn.c(230): Data storage
  transaction 0:595528998 was aborted. (PROCESS: 206)
   2006-03-06 07:55:58.00 ANR2183W dfmigr.c(3018): Transaction
  0:595528998 was aborted. (PROCESS: 206)
   2006-03-06 07:55:58.00 ANR1033W Migration process 206 terminated
  for storage pool DISKCOL - transaction aborted. (PROCESS: 206)
-
I guess that technically, this IS a way to terminate migrations, but it's
a little spooky.

-Josh

Related thread Headers:

Date: Fri, 3 Mar 2006 22:59:02 -0600
From: Josh-Daniel Davis <[EMAIL PROTECTED]>
Reply-To: "ADSM: Dist Stor Manager" 
To: ADSM-L@VM.MARIST.EDU
Subject: Re: dsmserv process hung.


Date: Fri, 3 Mar 2006 14:51:52 -0800
From: Larry Peifer <[EMAIL PROTECTED]>
Reply-To: "ADSM: Dist Stor Manager" 
To: ADSM-L@VM.MARIST.EDU
Subject: Re: dsmserv process hung.

"Ochs, Duane" <[EMAIL PROTECTED]>
Sent by: "ADSM: Dist Stor Manager" 
01/30/2006 12:44 PM
Please respond to
"ADSM: Dist Stor Manager" 

_

Date: Wed, 1 Mar 2006 12:10:29 -0500
From: Orville Lantto <[EMAIL PROTECTED]>
Reply-To: "ADSM: Dist Stor Manager" 
To: ADSM-L@VM.MARIST.EDU
Subject: Re: Q STG hangs during reclamation

From: ADSM: Dist Stor Manager on behalf of Rainer Wolf
Sent: Wed 3/1/2006 3:01 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Q STG hangs during reclamation


From: ADSM: Dist Stor Manager on behalf of Prather, Wanda
Sent: Tue 2/28/2006 4:50 PM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] 3584 help


Re: TSM 5.3 web gui

2006-03-07 Thread Josh-Daniel Davis

It came from the sites that have 1200 node clusters,
and the sites with 16 Regatta-H and Squadrons-H systems with their
multiple ESS arrays and all of the 1000 different products IBM sells.

Each group of customers has a few loud proponents for 1-4 admins
being able to manage an entire enterprise by themselves.

IBM can't very well say "You're crazy", so instead, the Integrated Service
Console came about.  Ultimately, it's supposed to manage CSM, HMCs, AIX.
SVCs, ESS, and all of the other three letter acronyms of IBM, plus more.

-Josh
On 06.03.06 at 14:25 [EMAIL PROTECTED] wrote:


Date: Mon, 6 Mar 2006 14:25:41 -0500
From: "Prather, Wanda" <[EMAIL PROTECTED]>
Subject: Re: TSM 5.3 web gui

And WHERE did this notion of "one consolidated front end" come from?
Who does it help?  In any site with more than 1 staff person, the
division of labor is that the Storage person uses all the storage
products, not just the Tivoli products;  the Security person uses all
the security products, not just the Tivoli security products, etc.  It
makes sense to drive all the Tivoli STORAGE products from one
(non-websphere) interface, but not "everything".


Example of why SHOW commands are unsupported

2006-03-04 Thread Josh-Daniel Davis

Today, I was reminded why to be careful with SHOW commands during
production workloads and thought I'd share (and archive across the
Internet at large) by posting here:

03/04/06 10:44:46 ANR4391I Expiration processing node NODE, filespace
   /oracle, fsId 141, domain STANDARD, and management class DEFAULT -
   for BACKUP type files. (SESSION: 64389, PROCESS: 907)
03/04/06 10:44:50 ANR2017I Administrator ADMIN issued command: show
   dbreorg (SESSION: 71539)
03/04/06 10:44:50 ANRD pkthread.c(593): ThreadId<66> Run-time
   assertion failed:
"tmCmpTsn( tmGetTsn( txnP->tid ), tmGetTsn( tid ) ) == EQUAL",
Thread 66, File dbtxn.c, Line 331. (SESSION: 71539)
03/04/06 10:44:50 ANRD ThreadId<66> issued message  from:
<-0x00010001c168 outDiagf
<-0x0001000103f0 pkLogicAbort
<-0x00010029cef0 dbParticipate
<-0x0001000b85c8 tbOpen
<-0x00010003ce44 admAuthSystem
<-0x0001007051c8 AdmEstimateDbReorg
<-0x0001007a69f0 AdmShow
<-0x0001001d92d4 AdmCommandLocal
<-0x0001001da680 admCommand
<-0x00010052c294 SmAdminCommandThread
<-0x0001e9dc StartThread
<-0x0936c50c _pthread_body  (SESSION: 71539)
03/04/2006 10:44:50 ANR7838S Server operation terminated.
03/04/2006 10:44:50 ANR7837S Internal error DBTXN6773 detected.
03/04/2006 10:44:50 ANR7833S Server thread 1 terminated in response to
program abort.
03/04/2006 10:44:50 ANR7833S Server thread 2 terminated in response to
program abort.
03/04/2006 10:44:50 ANR7833S Server thread 3 terminated in response to
program abort.
...


Note that immediately preceding this, here were the processes:
tsm: EARNHARDT>q proc

Process  Process Description   Status
 Number
---    -
907  Expiration    Examined 13472440 objects, deleting 316425 backup
   objects, 1659 archive objects, 0 DB backup
   volumes, 0 recovery plan files; 0 errors
   encountered.

917  Migration Disk Storage Pool DISKCOL, Moved Files: 855,
   Moved Bytes: 230,914,351,104, Unreadable Files:
   0, Unreadable Bytes: 0. Current Physical File
   (bytes): 1,369,759,744 Current output volume:
   M00758.

I think this might be related to the defect posted by Orville Lantto a few
days ago:

   IC48429: MIGRATE STGPOOL PREVENTS UPDATE STGPOOL FROM RUNNING AND HANGS
   SUBSEQUENT QUERY VOLUME AND SELECT FROM VOLUMES COMMANDS.

I've found that the hangs are more than just from UPD STG, but also from
other lock intensive processes.

According to Orville, avoiding the new commands for MIGRATE STG and
RECLAIM STG makes the problems go away.

But, since the crash occurred during a SHOW command, which is supposed to
be a developer help/backdoor, there is/would be no support for this crash.

-Josh


Re: What is the FLUSH command?

2006-03-04 Thread Josh-Daniel Davis

It's still there.  Here's what I wrote up about it in 2001:

FLUSH
This causes a database buffer writers to flush, which would be used to
test for buffer-writer starvation.  Use this command when SHOW BUFVARS has
a dpDirty (dirty pages) value in the thousands.  Then check log
utilization to see if it freed things up.


On 06.03.02 at 20:06 [EMAIL PROTECTED] wrote:


Date: Thu, 2 Mar 2006 20:06:33 -0600
From: Bob Booth - UIUC <[EMAIL PROTECTED]>
Reply-To: "ADSM: Dist Stor Manager" 
To: ADSM-L@VM.MARIST.EDU
Subject: Re: What is the FLUSH command?

It forces a write of used log buffer pool pages to the database.  I don't
know if it even still does what it says it is doing in newer versions of
TSM.  I'm trying to remember, but I think it was used to free up recovery log
space.  With bigger disks being used for recovery logs, it probably isn't
needed, but you don't specify your TSM version or environment.  Somebody was
certainly worried about running out of log space, though.

It has always been undocumented, however, it might show up in a readme
somewhere.

bob


[EMAIL PROTECTED] 3/2/2006 4:42:09 PM >>>

Hey -
Have started new contract with company and I'm
trying
to figure out what is going on. Several of the
servers have a scheduled "FLUSH" command that runs
daily. Can't find any documentation on FLUSH - what
is it supposed to be doing? The messages along with
the schedule say something about "buffer pool dirty
pages flushed". Any documentation on this anywhere?

thanks,
T.







Re: Multiprocess Offsite Reclamation Pointless???

2006-03-04 Thread Josh-Daniel Davis

I've run into the same issue a lot.  It's just a TSM limitation.
TSM is not smart enough to reorder any queue based on tape availability.

IE, if proc 1 is using tape 1,
and proc 2 needs data from tapes 1-5,
TSM won't do 2-5 while waiting for access to 1.
It'll simply go into media wait.

This happens all the time, which is generally why admin schedules and
the like tend to be serialized.

Multiple threads of the same type, well, not much you can do there.
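The stall can be illustrated with a toy model. This is plain Python, not anything TSM actually runs; it just encodes the strict in-order behavior described above, where a process never skips ahead to a later tape that happens to be free:

```python
def simulate(procs, ticks):
    """procs: {name: [(tape, duration), ...]} processed strictly in order.

    One process may mount a tape at a time. Returns (tick, name, state)
    events showing where a process sits in media wait.
    """
    state = {n: {"queue": list(q), "left": 0, "tape": None}
             for n, q in procs.items()}
    events = []
    for t in range(ticks):
        busy = {s["tape"] for s in state.values() if s["left"] > 0}
        for name, s in state.items():
            if s["left"] > 0:                      # still reading a tape
                s["left"] -= 1
                events.append((t, name, "reading " + s["tape"]))
                continue
            if not s["queue"]:                     # all done
                continue
            tape, dur = s["queue"][0]
            if tape in busy:
                # Strict ordering: cannot skip ahead to an idle tape.
                events.append((t, name, "media wait " + tape))
            else:
                s["queue"].pop(0)
                s["tape"], s["left"] = tape, dur - 1
                busy.add(tape)
                events.append((t, name, "reading " + tape))
    return events
```

With proc 1 holding tape T1 and proc 2 needing T1 then T2, proc 2 spends every tick in media wait on T1 even though T2 sits idle, which is exactly the behavior described.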

On 06.02.27 at 13:38 [EMAIL PROTECTED] wrote:


Date: Mon, 27 Feb 2006 13:38:57 -0600
From: Mark Stapleton <[EMAIL PROTECTED]>
Reply-To: "ADSM: Dist Stor Manager" 
To: ADSM-L@VM.MARIST.EDU
Subject: Re: Multiprocess Offsite Reclamation Pointless???

"ADSM: Dist Stor Manager"  wrote on 02/27/2006
11:55:42 AM:

Has anyone out there tried the new 5.3 multi-threaded reclamation
process for offsite copypools?

In both cases, within a few minutes of processing, all the reclamation
processes for this pool are competing for the same input volume, so one
process continues and the others stall with media wait.

The documentation doesn't say to not use the parameter for offsite copy
pools, so I'm wondering if this is a bug or working as designed.  If
working as designed, then it is pointless for offsite pools.


Well, not exactly. As is mentioned by others, using this for offsite pool
reclamation with collocated primary tape pools will minimize the tape
volume contention (but may increase the number of tape mounts).

But hey! if you run any three tape processes, and they all need data off
the same volume, there's going to be contention no matter how you slice
it.

--
Mark Stapleton ([EMAIL PROTECTED])
US Bank
MR Backup and Recovery Management
Office 262.790.3190
--



Re: Backing up a MySQL database

2006-03-04 Thread Josh-Daniel Davis

mysqldump sent through adsmpipe is common and free.

On 06.02.28 at 09:38 [EMAIL PROTECTED] wrote:


Date: Tue, 28 Feb 2006 09:38:36 +1100
From: Paul Ripke <[EMAIL PROTECTED]>
Reply-To: "ADSM: Dist Stor Manager" 
To: ADSM-L@VM.MARIST.EDU
Subject: Re: Backing up a MySQL database

On Tuesday, Feb 28, 2006, at 07:56 Australia/Sydney, Dennis Melburn W
IT743 wrote:


Can this be done with the standard TSM 5.3.X client, or do I need a
special app for this?


Note that it also depends what table type is being used in MySQL.
I've found that MyISAM tables back up live OK with TSM Copy
Serialization, although you may find the indexes need recreating.

InnoDB tables are another matter entirely - although I believe
there are commercial products to back these up properly.

Cheers,
--
Paul Ripke
Unix/OpenVMS/TSM/DBA
I love deadlines. I like the whooshing sound they make as they fly by.
-- Douglas Adams



Re: move nodedata

2006-03-04 Thread Josh-Daniel Davis

I apologize.

You'd think I'd read the whole thread before replying.

One option would be:
Create a file class (for speed.  Tape would work)
GENERATE BACKUPSET to this fileclass
This would pull only the active files for that node,
and you could specify to restore from that backupset.


Another would be to find the specific tapes used by this node, then find
the last access times of those volumes, and move those specific volumes.

Script found online to show volumes used for a node:
/*  ---*/
/*  Example:  run vols-node myUnixBox  */
/*  !!! WARNING !!! RESOURCE INTENSIVE !!! */
/*  ---*/
select volumes.volume_name as "Volume",-
volumes.stgpool_name as "StgPool",-
volumes.est_capacity_mb as "Cap.(MB)",-
volumes.pct_utilized as "Utlzd(%)",-
volumes.status as "Status",-
volumeusage.filespace_name as "Filespace" -
from volumes,volumeusage -
where volumes.volume_name=volumeusage.volume_name and -
volumeusage.node_name=upper('$1')


From there, you could Q VOL for last access time to narrow things down.
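That last-access filtering can also be scripted. A hedged sketch, assuming you feed it tab-separated rows such as the output of `dsmadmc -dataonly=yes -tab "select volume_name, last_write_date from volumes"`; the exact timestamp format varies by server level, so adjust the format string to match yours:

```python
from datetime import datetime, timedelta

def recent_volumes(rows, days, now=None):
    """Keep volume names written within the last `days` days.

    rows: iterable of "VOLUME<TAB>timestamp" strings. The timestamp
    format here is an assumption; change strptime() to suit your output.
    """
    now = now or datetime.now()
    cutoff = now - timedelta(days=days)
    keep = []
    for line in rows:
        vol, ts = line.rstrip("\n").split("\t")
        when = datetime.strptime(ts, "%Y-%m-%d %H:%M:%S.%f")
        if when >= cutoff:
            keep.append(vol)
    return keep
```

The surviving volume list is then a much smaller set to feed into MOVE DATA.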


Another, option would be to use EXPORT NODE to get just active
versions of only certain filespaces, then import it again, making sure
that the copygroup destinations pointed to a disk pool.


-Josh

On 06.03.03 at 23:02 [EMAIL PROTECTED] wrote:


Date: Fri, 3 Mar 2006 23:02:02 -0600 (CST)
From: Josh-Daniel Davis <[EMAIL PROTECTED]>
To: "ADSM: Dist Stor Manager" 
Subject: Re: move nodedata

If you don't know the tape the most current data is on,
and don't feel like pulling the list of tapes (or can't per the length of
query)...

You could also use MOVE NODEDATA, but it would be more than just the last
version.

-Josh

On 06.03.03 at 16:53 [EMAIL PROTECTED] wrote:


Date: Fri, 3 Mar 2006 16:53:45 -0600
From: Andy Huebner <[EMAIL PROTECTED]>
Reply-To: "ADSM: Dist Stor Manager" 
To: ADSM-L@VM.MARIST.EDU
Subject: Re: move nodedata

If you know what tape it is on you could move that tape back to disk.
This of course has it's own problems.

Andy Huebner

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of
David Tyree
Sent: Friday, March 03, 2006 3:18 PM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] move nodedata

Long story but essentially our Exchange server just died.

While the hardware guys are putting together a new server, I would like
to prestage the last Exchange backup from tape to our disk pool.
Getting the new server up and running is going to take a while, so I
figure I can save some time by having the restored Exchange data sitting
on the disk pool waiting on the hardware guys.

It's a lot quicker restoring 90+ gig from the disk pool than from tape.

I looked over the 'move nodedata' command, but it looks like it will give
me ALL of my Exchange backups. I don't want 14 days' worth of Exchange
backups, just the one from last night. I don't see a way to pull just the
last Exchange backup.

Any idea on how to go about doing that?

Thanks


This e-mail (including any attachments) is confidential and may be legally
privileged. If you are not an intended recipient or an authorized
representative of an intended recipient, you are prohibited from using,
copying or distributing the information in this e-mail or its attachments.
If you have received this e-mail in error, please notify the sender
immediately by return e-mail and delete all copies of this message and any
attachments.
Thank you.





Re: move nodedata

2006-03-04 Thread Josh-Daniel Davis

If you don't know the tape the most current data is on,
and don't feel like pulling the list of tapes (or can't per the length
of query)...

You could also use MOVE NODEDATA, but it would be more than just the last
version.

-Josh

On 06.03.03 at 16:53 [EMAIL PROTECTED] wrote:


Date: Fri, 3 Mar 2006 16:53:45 -0600
From: Andy Huebner <[EMAIL PROTECTED]>
Reply-To: "ADSM: Dist Stor Manager" 
To: ADSM-L@VM.MARIST.EDU
Subject: Re: move nodedata

If you know what tape it is on you could move that tape back to disk.
This of course has it's own problems.

Andy Huebner

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of
David Tyree
Sent: Friday, March 03, 2006 3:18 PM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] move nodedata

Long story but essentially our Exchange server just died.

While the hardware guys are putting together a new server, I would like
to prestage the last Exchange backup from tape to our disk pool.
Getting the new server up and running is going to take a while, so I
figure I can save some time by having the restored Exchange data sitting
on the disk pool waiting on the hardware guys.

It's a lot quicker restoring 90+ gig from the disk pool than from tape.

I looked over the 'move nodedata' command, but it looks like it will give
me ALL of my Exchange backups. I don't want 14 days' worth of Exchange
backups, just the one from last night. I don't see a way to pull just the
last Exchange backup.

Any idea on how to go about doing that?

Thanks





Re: dsmserv process hung.

2006-03-03 Thread Josh-Daniel Davis

This happens when 2 threads start to back up the system object, and the
second one starts sending data before the first one is able to create the
group leader, which is the anchor for management and expiration of the
entire system object as a single entity even though it's made of multiple
objects.

As a workaround, you can set resourceutil to 2 on all of your windows
clients, do another backup of the system objects, and expire the old ones
(through policy changes or just by waiting).

The hang is related to the defect involving RESTORE STGVOL.  We had the
same problem; however, the RESTORE STGVOL process never actually made its
way into the process table.  I would initially be able to get in and HALT
dsmserv.  Officially, the defect indicated that if left to its own
devices, the lock condition would degrade to unreachability.

The fix is in 5.3.2.3.

HOWEVER, We upgraded to 5.3.2.3 and have had SERIOUS lock issues.

SHOW DEADLOCK doesn't show anything.  Actlog will periodically show a
swarm of errors about operations failing due to lock issues, similar to:

2006-02-26 13:00:18.00  ANR2033E UPDATE STGPOOL: Command failed -
lock conflict. (SESSION: 124639)
2006-02-26 13:00:18.00  ANR2033E QUERY STGPOOL: Command failed -
lock conflict. (SESSION: 124664)
2006-02-26 13:00:18.00  ANR2033E QUERY DRMEDIA: Command failed -
lock conflict. (SESSION: 124670)

and similar.

ALSO

MIGRATE STG will lock tables in such a way that Q STG will hang, but Q
PROC and Q SES work.  Client sessions will continue writing to whatever
volume they have; however, most new sessions will also hang.  Once the
offending process is killed, everything resumes.

ALSO

I've found that REPAIR STGVOL has been showing up very often (as a
subprocess of RECLAIM STG).

ALSO

Tonight, REPAIR STGVOL, 2 RECLAIM STG and one AUDIT LIC were all running
and had hung.  Unfortunately, I didn't pull dbtxn, txn, lock, etc info
prior to issuing HALT.

ALSO

dsmserv seems to chew up more CPU now than at 5.3.1.6 and 5.3.2.1;
however, I don't have quantitative measurements of the previous levels.

I'm not sure if this progression of locking issues is limited to us or is
a 5.3.2.3 problem; however, I'm very worried about the safety and
stability of TSM.


-Josh

On 06.03.03 at 14:51 [EMAIL PROTECTED] wrote:


Date: Fri, 3 Mar 2006 14:51:52 -0800
From: Larry Peifer <[EMAIL PROTECTED]>
Reply-To: "ADSM: Dist Stor Manager" 
To: ADSM-L@VM.MARIST.EDU
Subject: Re: dsmserv process hung.

We too have just started to have this problem in the last 4 days.  In our
case the symptoms and solutions seem to fit in with what's described in
IBM Document Ref #: PK00196.  However that was to have been fixed with
5.3.1 release which we are using.  Can anyone shed more light on what
might be triggering this situation?
AIX 5.2 ML5
TSM 5.3.1.0

Here's a series of errors that cropped up this week for the first time.
Any insights would be helpful.

02/27/06   21:59:00  ANRD imgroup.c(1180): ThreadId<90> Error 8
retrieving
 Backup Objects row for object 0.101495737
(SESSION: 2838)
02/27/06   21:59:00  ANRD ThreadId<90> issued message  from:

 <-0x00010001bf74 outDiagf
<-0x0001003fb114
 imIsGroupLeader <-0x000100396b9c
SmNodeSession
 <-0x00010047f854 HandleNodeSession
 <-0x000100485760 smExecuteSession
 <-0x00010051c3e4 SessionThread
<-0x0001e958
 StartThread <-0x09286460 _pthread_body
(SESSION:
 2838)
02/27/06   21:59:00  ANRD smnode.c(7353): ThreadId<90> Session
2838:
 Invalid Group Id 0,101495737 for ADD function
(SESSION:
 2838)
02/27/06   21:59:00  ANRD ThreadId<90> issued message  from:

 <-0x00010001bf74 outDiagf
<-0x000100396bc4
 SmNodeSession <-0x00010047f854
HandleNodeSession
 <-0x000100485760 smExecuteSession
 <-0x00010051c3e4 SessionThread
<-0x0001e958
 StartThread <-0x09286460 _pthread_body
(SESSION:
 2838)
02/28/06   23:24:55  ANRD lmlcaud.c(506): ThreadId<75> Error 17
checking
 filespace data for license audit. (PROCESS: 72)

02/28/06   23:24:55  ANRD ThreadId<75> issued message  from:

 <-0x00010001bf74 outDiagf
<-0x0001006d8e70
 LmLcAuditThread <-0x0001e958 StartThread

 <-0x09286460 _pthread_body  (PROCESS:
72)
03/01/06   11:20:55  ANRD lmlcaud.c(506): ThreadId<43> Error 17
checking
 filespace data for license audit. (PROCESS: 79)

03/01/06   11:20:55  ANRD ThreadId<43> issued messag

Re: ADSM.ORG login/confirmation problems

2006-02-15 Thread Josh-Daniel Davis

Mark,
Thanks.  Yes, I was hoping that maybe the ADSM.ORG folks would notice.

Eventually, one of the messages showed up post-dated, so I can go from
there.

-Josh

On 06.02.15 at 09:31 [EMAIL PROTECTED] wrote:


Date: Wed, 15 Feb 2006 09:31:44 -0600
From: Mark Stapleton <[EMAIL PROTECTED]>
Reply-To: "ADSM: Dist Stor Manager" 
To: ADSM-L@VM.MARIST.EDU
Subject: Re: ADSM.ORG login/confirmation problems

"ADSM: Dist Stor Manager"  wrote on 02/14/2006
06:45:23 PM:

There is no way to contact/feedback on this without being logged in.

Could someone send feedback indicating that [EMAIL PROTECTED] and
[EMAIL PROTECTED] are having this problem?

ADSM.ORG isn't sending me reset passwords or confirmations of new

account

creations.

I've set to multiple different domains.

On yahoo, nothing shows up to inbox or bulk.

To home server, nothing hits the exim4 logs.


Keep in mind that no one on this list (including IBM) has any control over
adsm.org. Who the domain owners are has been a bit of a mystery for years.
This is why many of us use the adsm-l archives run by Marist University
(http://www.mail-archive.com/adsm-l@vm.marist.edu/), who owns the mailing
list.

Good luck.

--
Mark Stapleton ([EMAIL PROTECTED])
MR Backup and Recovery Management
262.790.3190
--
Electronic Privacy Notice. This e-mail, and any attachments, contains 
information that is, or may be, covered by electronic communications privacy 
laws, and is also confidential and proprietary in nature. If you are not the 
intended recipient, please be advised that you are legally prohibited from 
retaining, using, copying, distributing, or otherwise disclosing this 
information in any manner. Instead, please reply to the sender that you have 
received this communication in error, and then immediately delete it. Thank you 
in advance for your cooperation.
==



Re: Expire Inventory

2006-02-14 Thread Josh-Daniel Davis

Depends entirely on your server.  There's no chart based on the overall
load incurred; however, the extra I/O is all actlog.  If your DB
performance has room to spare, then it shouldn't be a problem.

You could always turn it on for a day and compare your expiration
performance the next day with:

select activity, cast((end_time) as date) as "Date", -
  (examined/cast((end_time-start_time) seconds -
  as decimal(18,13))*3600)"Pages backed up/Hr" -
  from summary where activity like '%DB%' and -
  days(end_time) - days(start_time)=0
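
The select above tracks database backup throughput as a proxy for DB load.  If
you'd rather see expiration numbers directly, a similar select against the
SUMMARY table should work (a sketch; I'm assuming your server level records
expiration rows with ACTIVITY='EXPIRATION'):

   select activity, cast((end_time) as date) as "Date", -
     (examined/cast((end_time-start_time) seconds -
     as decimal(18,13))*3600)"Objects examined/Hr" -
     from summary where activity='EXPIRATION' and -
     days(end_time) - days(start_time)=0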

-Josh

On 06.02.13 at 09:05 [EMAIL PROTECTED] wrote:


Date: Mon, 13 Feb 2006 09:05:16 -0500
From: David E Ehresman <[EMAIL PROTECTED]>
Reply-To: "ADSM: Dist Stor Manager" 
To: ADSM-L@VM.MARIST.EDU
Subject: Re: Expire Inventory


Instead, you may want to run EXPire Inventory with Quiet=No


Does the QUIET=NO vs QUIET=YES on expirations have much of an effect on
the overall speed of expiration?

David



Re: windows backup issue ... (pst)

2006-02-14 Thread Josh-Daniel Davis

If you're also backing up with TDPMSEXC then shouldn't you be excluding
these files?

-josh

On 06.02.13 at 15:56 [EMAIL PROTECTED] wrote:


Date: Mon, 13 Feb 2006 15:56:16 +0100
From: goc <[EMAIL PROTECTED]>
Reply-To: "ADSM: Dist Stor Manager" 
To: ADSM-L@VM.MARIST.EDU
Subject: windows backup issue ... (pst)

hi,
I'm curious about the following. This is part of a "normal" filesystem backup;
the Exchange data is backed up with TDP, but we thought this way would also be
okay :-), so here it goes ...

10-02-2006 17:05:44 ANS1228E Sending of object
'\\zcibl\p$\VIPData\Users\gsest\MSOutlook\Folderi 100206.pst' failed
10-02-2006 17:05:44 ANS4007E Error processing
'\\zcibl\p$\VIPData\Users\gsest\MSOutlook\Folderi 100206.pst': access to the
object is denied

how do you skip those messages?  Obviously the file is not backed up ... the
Windows admin said the file is open, but it's read-only (is this correct?), so
changing the management class serialization ("back up if open, try again, do
it on the last try") has no effect, as far as I understand ...
maybe open file support ?

please, any clue or idea !
thanks in advance

goran

server 5.3.1.5 on AIX5.2
client win32  5.3.0



Re: Tape Question 3592 and 3590 drives

2006-02-14 Thread Josh-Daniel Davis

Have you seen this?
--

Problem
Procedure to add a new 3592 tape drive on 3494 library

Solution
This can be necessary when 3590E drives are correctly defined on AIX and
TSM as well, however new 3592 tape drives are to be added to the 3494
library. These are configured in the Operating System and on TSM.

Define the devclass for new device with the command:

DEFine DEVclass devclass_name LIBRary=library_name DEVType=3592
FORMAT=drive

Attempting to checkin a new tape using this kind of drive at this point
results in the error:

ANR8847E No 3590-type drives are currently available in library .
ANR8426E CHECKIN LIBVOLUME for volume xx in library  failed.

Looking at the drive status it in an UNKNOWN state, while by mtlib it
works correctly.

To correct this, it is necessary to use separate library partitions when
using 2 different media types within a given library. This is documented
in the README file of 5.1.7.2, specifically involving the 3592 drives when
the support for this new drive model was introduced, as follows:

"However, for 3494 libraries with two device types (any combination of
3490, 3590, and 3592) the following idea has to be followed: one device
type per library object. Thus, for 3592 support, a new library object will
have to be made for these drives."

Please refer to the README file available with the 5.1.7.2 release of the
TSM Server at:

ftp://ftp.software.ibm.com/storage/tivoli-storage-management/patches/server/AIX/5.1.7.2/TSMSRVAIX5172.README.SRV

This information is missing from some newer releases of the README file
and this documentation oversight is being addressed with the APAR IC39118.

In the case of the 3494 library, the two logical libraries will be
pointing to the same device (/dev/lmcp3) for the library changer.

Note that this can be different with other libraries. How different
library manufacturers support logical partitioning may vary.

Modified date: 2004-04-29
---

This was hit #1 of 2 when I searched Google for: ANR8847E 3592
The URL is: http://www-1.ibm.com/support/docview.wss?uid=swg21167566
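
As a sketch of the two-library-object setup the technote describes (library
names, category codes, and the server name here are invented examples, not
from the technote; the category codes just need to be unique per logical
library, and both library objects point at the same /dev/lmcp3 changer):

   DEFine LIBRary LIB3590 LIBType=349X PRIVATECATegory=300 SCRATCHCATegory=301
   DEFine LIBRary LIB3592 LIBType=349X PRIVATECATegory=400 SCRATCHCATegory=401
   DEFine PATH SERVER1 LIB3590 SRCType=SERVer DESTType=LIBRary DEVIce=/dev/lmcp3
   DEFine PATH SERVER1 LIB3592 SRCType=SERVer DESTType=LIBRary DEVIce=/dev/lmcp3
   DEFine DEVclass 3590CLASS LIBRary=LIB3590 DEVType=3590 FORMAT=DRIVE
   DEFine DEVclass 3592CLASS LIBRary=LIB3592 DEVType=3592 FORMAT=DRIVE

The drives for each media type then get defined under their own library
object, which avoids the ANR8847E "no drives of type" error.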


-Josh


On 06.02.14 at 12:36 [EMAIL PROTECTED] wrote:


Date: Tue, 14 Feb 2006 12:36:28 -0600
From: "Talafous, John" <[EMAIL PROTECTED]>
Reply-To: "ADSM: Dist Stor Manager" 
To: ADSM-L@VM.MARIST.EDU
Subject: Re: Tape Question 3592 and 3590 drives

I recommend a service call to IBM. Just last month I saw an issue with
TSM 5.2.7.0 performing the same activity with the same result. There is
a fixtest at 5.2.7.x addressing the 'No drive available in library'
scenario.

Best of luck!

John G. Talafous
[EMAIL PROTECTED]


-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of
Levi, Ralph
Sent: Tuesday, February 14, 2006 1:08 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Tape Question 3592 and 3590 drives

Hi All,

Sorry to drag this back up but I don't see a final answer.  I have just
added an additional frame to my library containing 3592 (Jaguar) drives.
I added an additional devclass to support the new drives.  I defined the
new drives and paths.  Both drives and paths are online.  When I try to
format a new (3592) tape I get the message:

ANR2017I Administrator ADMIN issued command: LABEL libv 3494lib A2
devt=3592
ANR0984I Process 55 for LABEL LIBVOLUME started in the BACKGROUND at
12:58:26.
ANR8799I LABEL LIBVOLUME: Operation for library 3494LIB started as
process 55.
ANR8847E No  3592-type drives are currently available in library
3494LIB.
ANR8802E LABEL LIBVOLUME process 55 for library 3494LIB failed.
ANR0985I Process 55 for LABEL LIBVOLUME running in the BACKGROUND
completed with completion state FAILURE at 12:58:26.

Am I missing something ?

Running Tivoli 5.2.7 - AIX 5.3   (have the absolute latest altdd and
atape drivers on).

Thanks,
Ralph

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of
Rainer Wolf
Sent: Wednesday, July 27, 2005 9:43 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: Tape Question

Hi Debbie,
you can mix the tapes for 3590 and 3592 drives inside the library as you
like, but  only 3592 drives can use the 300GB tapes and the 20/40GB
tapes are only usable by the 3590 magstar drives.
You can also mix 3592 drives and 3590 drives in the library but not in
one frame.
We have upgraded our L12 frame to an L22 frame: this includes a new
OS/2 PC and the frames needed for the new 3592 drives (maximum 4
drives in an L22), and placed the new 3592 drives in there.
You may move the old 3590 Drives into another D12 -drive- frame: we have
done that.
One thing is that you may not forget the total number of Drives you plan
and the number of serial ports for the drives ... if you exceed 8
Drives and currently have 8 Ports available you have to extend the
number of ports too.

Greetings
Rainer

Debbie Bassler wrote:


Thanks for the reference information, RichardWe had an IBM rep

come in

and he said we co

Re: Backupset question

2006-02-14 Thread Josh-Daniel Davis

Other options include:
   * use WAIT=YES; it should dump the relevant actlog messages to
     your admin session, including which tapes were mounted
   * specify VOLumes=m1,m2, etc.
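
For example, naming the volumes up front and watching the messages in your
own session (node, backupset, and volser names here are invented):

   GENerate BACKUPSET node1 node1set * DEVclass=3494dc -
     VOLumes=VOL001,VOL002 SCRatch=no RETention=60 WAIT=yes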

On 06.02.15 at 07:26 [EMAIL PROTECTED] wrote:


Date: Wed, 15 Feb 2006 07:26:42 +1100
From: Chris Pasztor <[EMAIL PROTECTED]>
Reply-To: "ADSM: Dist Stor Manager" 
To: ADSM-L@VM.MARIST.EDU
Subject: Re: Backupset question

I use a  SQL query to match the tape number from volhist and the backupset names
in backupsets like

SELECT
VOLHISTORY.VOLUME_NAME,BACKUPSETS.BACKUPSET_NAME,BACKUPSETS.DATE_TIME,BACKUPSETS.DESCRIPTION

FROM VOLHISTORY,BACKUPSETS WHERE VOLHISTORY.TYPE='BACKUPSET'  AND
VOLHISTORY.DATE_TIME=BACKUPSETS.DATE_TIME";

[EMAIL PROTECTED]





Muthukumar Kannaiyan <[EMAIL PROTECTED]> on 15/02/2006 03:20:12

Please respond to "ADSM: Dist Stor Manager" 



To:   ADSM-L@VM.MARIST.EDU
cc:(bcc: Chris Pasztor/HCS)
Subject:  [ADSM-L] Backupset question



I am trying to create a backupset from TSM for a few nodes. Where do I keep
track of the tape serial number for the corresponding node's backup?
Following command I am using


generat backupset node1 node1 * devc=3494dc scr=yes ret=60

Regards
Muthu
202-458-8340 - Work



Re: AW: [ADSM-L] Automating server scripts

2006-02-14 Thread Josh-Daniel Davis
You'll want to use the command "SERIAL" at the end of each of the other 
scripts.  In theory, this is supposed to be implied.


Ignoring the reasons why I'm migrating before backing up,
here's an example of what I use:

TSM:>q scr backmig_STG f=raw
* SCRIPT BACKMIG_STG to backup and migrate storage pools
select SCHEDULE_NAME from ADMIN_SCHEDULES where 
SCHEDULE_NAME='EXITIFEXIST'

IF (RC_OK) exit
PARALLEL
UPD STG DISKPOOL MIGPRO=5 HI=65 LO=5
UPD STG DISKPOOL_NOVLT MIGPRO=3 HI=75 LO=5
migrate stg diskpool_novlt lo=5 WAIT=YES
migrate stg diskpool lo=5 WAIT=YES
SERIAL
select SCHEDULE_NAME from ADMIN_SCHEDULES where 
SCHEDULE_NAME='EXITIFEXIST'

IF (RC_OK) exit
PARALLEL
backup stg diskpool copypool maxpr=1 WAIT=YES
backup stg tapepool copypool maxpr=4 wait=YES
SERIAL
select SCHEDULE_NAME from ADMIN_SCHEDULES where 
SCHEDULE_NAME='EXITIFEXIST'

IF (RC_OK) exit
PARALLEL
backup stg DBARCHPOOL copypool maxpr=4 wait=yes
SERIAL

-Josh-Daniel Davis

On 06.02.14 at 09:05 [EMAIL PROTECTED] wrote:


Date: Tue, 14 Feb 2006 09:05:42 -0500
From: Timothy Hughes <[EMAIL PROTECTED]>
Reply-To: "ADSM: Dist Stor Manager" 
To: ADSM-L@VM.MARIST.EDU
Subject: Re: AW: [ADSM-L] Automating server scripts

Hello all,

I set up a script to run every hour, with the following commands in the
script.  When I run the script I receive an invalid parameter error for
the WAIT parameter.


/* This script queries the backuppool stgpool then stops migration on the 
backuppool stgpool*/
run test_query_script wait=yes
run stop_disk_mig_script wait=yes
run test_query_script

The WAIT parameter is reported as invalid; can it be used in this sequence?


Thanks for any help!


ANR2020E RUN: Invalid parameter - WAIT.
ANR2020E RUN: Invalid parameter - WAIT.

Storage   DeviceEstimated Pct Pct   High   Low   Next
Pool Name Class Name CapacityUtilMigrMig   Mig   Storage
Pct   Pct   Pool
---   --   --   -   -      ---   ---
ARCHIVEPOOL   DISK   63 G 0.0 0.0 9070   H3592POOL
BACKUPPOOLDISK1,014 G67.667.6 7040   H3592POOL
H3592POOL 3592CLASS 266,231 G25.831.4 9070
MIGPOOL   DISK   84 G12.812.8 7550   H3592POOL
R3592POOL 3592RCLASS270,464 G25.4

ANR1462I RUN: Command script TEST_QUERY_SCRIPT completed successfully.
ANR1462I RUN: Command script GLOBAL_SCRIPT completed successfully.
PAC Brion Arnaud wrote:


Timothy,

You should build an admin schedule, which initiates something like "run Global_script", 
and then in this "global_script", have all of your commands, like:

run test_query_script wait=yes
run stop_disk_migration wait=yes
run test_query_script

Hope this helped !
Cheers.

Arnaud

**
Panalpina Management Ltd., Basle, Switzerland,
CIT Department Viadukstrasse 42, P.O. Box 4002 Basel/CH
Phone:  +41 (61) 226 11 11, FAX: +41 (61) 226 17 01
Direct: +41 (61) 226 19 78
e-mail: [EMAIL PROTECTED]
**

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of Timothy 
Hughes
Sent: Thursday, 09 February, 2006 16:34
To: ADSM-L@VM.MARIST.EDU
Subject: Re: AW: [ADSM-L] Automating server scripts

Hello all,

I ran this via the admin schedule but it just seems to execute the first part
of the script (run test_query_script). Do I need a second admin schedule to
kick off the next script and so on? If so, how would the serial part of the
command be incorporated into that? I tried wait=yes and the server log showed
Invalid parameter - WAIT

thanks

CHECK_STGPOOL
  Description: TEST SCRIPT COMMAND
  Command: run test_query_script serial run stop_disk_migration serial
           run test_query_script
  Priority: 5
  Start date: 2006-02-08   Start time: 08:13:00
  Duration: 15   Duration units: MINUTES
  Period: 1   Period units: HOURS
  Day of Week: ANY
  Expiration: -   Active?: YES
  Last Update Date/Time: 2006-02-09 08:11:40.00
  Last Update by (administrator):
  Managing profile: -
  Schedule Style: CLASSIC
  Month: -   Day of Month: -   Week of Month: -

Bill Kelly wrote:


I think that's correct; if you want the scripts to run in parallel, you'll need 
multiple admin schedules.  If you want the scripts to run serially, you could 
kick off the first script via an admin schedule, then have that script run the 
second script, and so on...

-Bill

Bill Kelly
Auburn University OIT
334-844-9917


[EMAIL PROTECTED] 02/08/06 9:04 AM >>>

Multiple scripts inside an admin schedule?
I believe each schedule can contain one cmd/script...
You may have to create multiple schedules, one for each server script.

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTE

ADSM.ORG login/confirmation problems

2006-02-14 Thread Josh-Daniel Davis

There is no way to contact/feedback on this without being logged in.

Could someone send feedback indicating that [EMAIL PROTECTED] and
[EMAIL PROTECTED] are having this problem?

ADSM.ORG isn't sending me reset passwords or confirmations of new account
creations.

I've set to multiple different domains.

On yahoo, nothing shows up to inbox or bulk.

To home server, nothing hits the exim4 logs.

Sorry to bother and thanks for your time.

-Josh-Daniel Davis


Re: GIGE connectivity via TSM

2004-03-24 Thread Josh-Daniel Davis
Joe,
TCP/IP is always routed based on the IP addresses used.  If you want
your traffic to go over the gigabit card:

If it's on a machine that will be a server for transaction X, then
specify to your client the host name or IP address of the gigabit card.

If it's on a machine that will be a client for transaction X, then
specify an IP or hostname of the server that will naturally route over
the desired network interface.

If you are adding the gigabit ethernet as a second IP on the same
network, this is generally bad form and you won't be able to force
traffic over the card.  I think this usually will round-robin on
outbound packets.

If you are adding the gigabit card to a network that the server is not
also connected to, then you will have to specify routing in order to
move this traffic to the card.  Note that routing configuration has to
be configured on each system involved if it's not via connection routes.

There is no such thing as a default NIC in Solaris.  You have a default
route, which is where any traffic will go that is not otherwise routed
elsewhere.

This applies to all TCP/IP operations and not just TSM.
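
As a sketch with invented addresses: if the TSM server's gigabit interface is
10.0.1.5 on a network the client also has a gigabit leg on, the client side
only needs dsm.sys to point at that address; a static route is only needed
when the gigabit network is reached through a gateway rather than directly
attached:

   * client dsm.sys stanza (example values)
   SErvername  tsm_gige
      COMMMethod         TCPip
      TCPPort            1500
      TCPServeraddress   10.0.1.5

   # Solaris static route, only if 10.0.1.0/24 is behind a gateway
   route add -net 10.0.1.0 -netmask 255.255.255.0 10.0.2.1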

-Josh

On 04.03.23 at 10:28 [EMAIL PROTECTED] wrote:

> Date: Tue, 23 Mar 2004 10:28:32 -0500
> From: "Wholey, Joseph (IDS DM&DS)" <[EMAIL PROTECTED]>
> Reply-To: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>
> To: [EMAIL PROTECTED]
> Subject: GIGE connectivity via TSM
>
> This should be an easy one for most...  I have a Solaris client
> running TSM v5.2.  It will be getting a GIGE card.  What is the
> best/recommended way to ensure data is traversing the GIGE card (in
> both directions... outbound/inbound) if it is not set up as the
> default NIC on the client.  thx.


Re: IBM3584 Tape Drives won't write to Media

2004-03-24 Thread Josh-Daniel Davis
Rajesh,

3A/00 means "media not present".

Is this a new library?  Has it ever worked?  If not, there may be a
drive cabling or definition problem such that the tape is being loaded
into one drive while TSM is trying to read from a different drive.

Is the tape you're trying to label in the slot that TSM thinks it is?
The library may maintain its own inventory, which can be refreshed from
tapeutil in the IBMTape package (aka Atape for Windows).
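
For example, to eyeball the library's element inventory from a host with the
driver installed (the changer device name /dev/smc0 is an assumption; on
Windows it will be whatever device the IBMTape changer shows up as):

   tapeutil -f /dev/smc0 inventory
   tapeutil -f /dev/smc0 elementinfo

Comparing the volser and slot the library reports against what TSM thinks is
mounted would show whether the two have drifted apart.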

Just ideas.

-Josh

On 04.03.23 at 08:55 [EMAIL PROTECTED] wrote:

> Date: Tue, 23 Mar 2004 08:55:17 -0500
> From: Rajesh Oak <[EMAIL PROTECTED]>
> To: [EMAIL PROTECTED]
> Subject: IBM3584 Tape Drives won't write to Media
>
> I have an IBM3584 Library that we have divided into 4 Logical
> Libraries and assigned them to 4 TSM Servers.
>
> 3 are AIX Servers and the 4th is a Windows 2000 Server. This is new
> TSM Server that is not in production yet.
>
> The Drives in the 4th TSM Server are attached to the Windows Server
> thru' the SAN Fabric.
>
> For the Windows TSM Server the Library is controlled by the TSM Device
> Driver and the Tape Drives are controlled by Windows Drivers. All the
> drives are defined and can be seen by TSM. But when I try to label and
> checkin Tapes it just sits on the process. After a while it gives an
> I/O error on the drive and the process fails.
>
> 03/23/2004 08:30:49 ANR8302E I/O error on drive DRIVE2
> (\\.\tape1)(OP=TESTREADY, Error Number=21, CC=0, KEY=02, ASC=3A
> ASCQ=00, SENSE=70.00.02.00.00.00.00.1C.00.00.00.00.3A.00.00.00
> .10.13.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.6E.42
> .00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00
> .00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00
> .00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00,
> Description=An undetermined error has occurred).  Refer to
> Appendix D in the 'Messages' manual for recommended action.
> (SESSION: 54, PROCESS: 1)
>
> 03/23/2004 08:30:49 ANR8304E Time out error on drive DRIVE2
> (\\.\tape1) in library IBM3584. (SESSION: 54, PROCESS: 1)
>
> 03/23/2004 08:30:49 ANR8302E I/O error on drive DRIVE2
> (\\.\tape1) (OP=OFFL,Error Number=21, CC=0, KEY=02, ASC=3A,
> ASCQ=00, SENSE=70.00.02.00.00.00.00.1C.00.00.00.00.3A.00.00.
> 00.10.13.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.6E.
> 42.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.
> 00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.
> 00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00,
> Description=An undetermined error has occurred).  Refer to
> Appendix D in the 'Messages' manual for recommended action.
> (SESSION: 54, PROCESS: 1)
>
> 03/23/2004 08:31:28  ANR8841I Remove volume from slot 769
> of library IBM3584 at your convenience. (SESSION: 54, PROCESS: 1)
>
> Any help on this is appreciated.
>
> Rajesh Oak
> Blue Cross Blue Shield of Michigan
> Tel # 313-225-8086


Re: mksysb tapes on fibre-attached 3590

2004-03-24 Thread Josh-Daniel Davis
Dwight,
You can't stack mksysb's on a tape, but you can stack Sysbacks.

The first tape file on a bootable tape is in 512 byte blocks and is the
actual boot program.  This is basically the kernel and a backup file
used to populate the ramfs as defined by the proto files.

On a mksysb, the second, I think, is a dummy file, and in a sysback,
it's a second backup file for sysback utilities.

On a mksysb, the third is the actual backbyname of all rootvg
filesystems.  That's all; last time I looked, there was no way to
skip forward to additional images.

On a sysback, the third is the table of contents.  Four and up are the
backbyname images for each of the filesystems included in the sysback.

-Josh

On 04.03.23 at 06:22 [EMAIL PROTECTED] wrote:

> Date: Tue, 23 Mar 2004 06:22:56 -0600
> From: Dwight Cook <[EMAIL PROTECTED]>
> Reply-To: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>
> To: [EMAIL PROTECTED]
> Subject: Re: mksysb tapes on fibre-attached 3590
>
>
> *snip*
>
> I would say that would be a waste of a 300 GB tape but if your system
> is down hard and you need to recover...
>
> OH, now I just thought... during a boot from a mksysb tape, does the
> system go through any sort of tape relocation or does it just start at
> the ~load point~ ?
>
> One could place multiple mksysb images on a single tape IF processing just
> initiated where the tape is positioned... between mtlib & tapeutil one
> could mount any tape and position it anywhere so if you had 4 or 5 boxes
> using an atl, they might be able to share a single tape (ugh...  more
> work to research that...)
>
> Dwight E. Cook
> Systems Management Integration Professional, Advanced
> Integrated Storage Management
> TSM Administration
> (918) 925-8045


Re: mksysb tapes on fibre-attached 3590

2004-03-22 Thread Josh-Daniel Davis
I don't know if you can boot from fibre-attached tape.  If not, this would
most likely be a firmware limitation.  If any system would allow it, the
p650 would.

The boot media, whether it be mksysb-CD, NIM or SCSI tape, will need to
have Atape and the fibre drivers in it.  Make sure you're at current
firmware for the 7038-6M2.

The atldd driver won't work since I don't think it'll start atldd nor
include mtlib in the boot image, etc.

You could manually mount the tapes in the drive via:

A) one drive set up as an autoloader with a specific category of tapes.

B) offline the drive to tsm and use mtlib commands from another system
to mount the tape.  That drive would have to be shared with the other
system in order to issue the eject IOCTL to the drive afterwards.  The
main concern here is to make sure you don't grab the wrong tape.

C) Offline the whole library and move tapes from the front panel.

D) I think TSM has some rudimentary tracking of non-tsm system backup
volumes, but I don't know what state that is in or what the commands are
off-hand.  Look into that to see if it can help.

-Josh

On 04.03.22 at 14:35 [EMAIL PROTECTED] wrote:

> Date: Mon, 22 Mar 2004 14:35:43 -0500
> From: Thomas Denier <[EMAIL PROTECTED]>
> Reply-To: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>
> To: [EMAIL PROTECTED]
> Subject: mksysb tapes on fibre-attached 3590
>
> We are considering migrating our TSM system to an AIX system with 3590
> tape drives in a 3494 tape library. The drives would be connected using
> FCP (Fibre Channel Protocol). The host would be a pSeries system. We are
> currently leaning toward a p650.
>
> Given this configuration, would it be possible to restore AIX using
> 3590 mksysb tapes? If so, would it done by booting from tape or by
> booting from CD and telling the software to read a mksysb tape? How
> would we deal with the library? In particular, would we have to use
> the stand-alone tape feature of the library?
>


Re: TDPO for Oracle

2004-03-22 Thread Josh-Daniel Davis
Dale,
Did you check the basics of, as oracle, or your tdpo user:

   # env | grep DSM

Make sure the DSMI variables point to the right locations, then verify
those files are readable by your user.
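
As a sketch (paths invented for a 64-bit Solaris layout): dsm.sys has to live
in the directory DSMI_DIR points to, and the option file named by
DSMI_ORC_CONFIG has to be readable by the Oracle user:

   DSMI_DIR=/opt/tivoli/tsm/client/api/bin64
   DSMI_ORC_CONFIG=/opt/tivoli/tsm/client/oracle/bin64/dsm.opt
   DSMI_LOG=/var/tdpo/log
   export DSMI_DIR DSMI_ORC_CONFIG DSMI_LOG

ANS0263E usually means one of those two files isn't where the variables say
it is, or isn't readable.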

If that doesn't turn anything up, you might want to let us know what
versions of Oracle, TDPO and the TSM client you have on this node.

---
Josh Davis


On 04.03.22 at 13:17 [EMAIL PROTECTED] wrote:

> Date: Mon, 22 Mar 2004 13:17:14 -0600
> From: "Jolliff, Dale" <[EMAIL PROTECTED]>
> Reply-To: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>
> To: [EMAIL PROTECTED]
> Subject: TDPO for Oracle
>
> I see this question has been asked several times in the list, but I fail
> to see any answers on ADSM.ORG.
>
> I'm getting the
> "ANS0263E Either the dsm.sys file was not found, or the Inclexcl file
> specified in the dsm.sys was not found"
> error when trying to set the password after installing the 64 bit TDPO
> on Solaris 8.
>
> (The 32 bit version installs fine)
>
> Anyone have the fix for this handy?
>
>
>
>
> Dale Jolliff
> Data Administration Team
> Backup and Recovery
>


Re: Advantages / Disadvantage running in 64-bit mode

2004-03-22 Thread Josh-Daniel Davis
AIX 5.2 can be run in 32 and 64-bit mode.

64-bit kernel mode should have better performance on bulk I/O, larger
memory model, etc.

This is definitely worth it if you are using Ultra160 or Ultra320
adapters, or 64-bit Fibre Channel (6228 and 6239) cards.

I would think that for an M80, any LPAR capable system, any 7017 system,
Winterhawk and Nighthawk SP nodes, and similar class machines would
greatly benefit from 64-bit.

If your system only has 32-bit PCI adapters, there wouldn't be much
benefit in 64-bit kernel mode.
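
For reference, switching AIX 5.x to the 64-bit kernel is just a relink and a
new boot image (standard procedure; relink unix_mp or unix_up instead to go
back to 32-bit):

   ln -sf /usr/lib/boot/unix_64 /unix
   ln -sf /usr/lib/boot/unix_64 /usr/lib/boot/unix
   bosboot -ad /dev/ipldevice
   shutdown -Fr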

---
Josh Davis

On 04.03.22 at 15:41 [EMAIL PROTECTED] wrote:

> Date: Mon, 22 Mar 2004 15:41:58 +0100
> From: "Loon, E.J. van - SPLXM" <[EMAIL PROTECTED]>
> Reply-To: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>
> To: [EMAIL PROTECTED]
> Subject: Re: Advantages / Disadvantage running in 64 bit mode
>
> Hi James!
> Please correct me if I'm wrong, but as far as I know, you can run AIX
> 5.2 in 32-bit mode. TSM 5.2.2 has both a 32-bit and a 64-bit mode RTE...
> Kindest regards,
> Eric van Loon
> KLM Royal Dutch Airlines
>
>
> -Original Message-
> From: James Lepre [mailto:[EMAIL PROTECTED]
> Sent: Monday, March 22, 2004 15:17
> To: [EMAIL PROTECTED]
> Subject: Re: Advantages / Disadvantage running in 64 bit mode
>
>
> You must run in 64bit mode, you do not have a choice with aix 5.2