Re: GIGE connectivity via TSM

2004-03-23 Thread Thomas A. La Porte
If the client GigE -> server IP path is not the default route in
both directions, and the two are not on the same subnet, you will
have to configure a static route on each host in order to
guarantee that the traffic will traverse the chosen route in both
directions.

IOW, static routes on both client and server are necessary.
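
For example (a hedged sketch; the addresses, gateway, and interface
names below are hypothetical), on a Solaris client whose GigE
interface sits on 192.168.10.0/24:

   # route the TSM server's address via the GigE subnet's gateway
   route add host 10.1.1.5 192.168.10.1

On Linux the equivalent would be 'route add -host 10.1.1.5 dev eth1';
on the z/OS side a static route would go in the TCP/IP profile
(BEGINROUTES/GATEWAY statements) rather than a route command.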

 -- Tom

Thomas A. La Porte, DreamWorks SKG
mailto:[EMAIL PROTECTED]

On Tue, 23 Mar 2004, Wholey, Joseph (IDS DMDS) wrote:

Bill,

I hate to beat this... but my situation is as follows: I have a TSM server
residing on z/OS with an FQDN and one IP address (as far as I know). My
client has multiple NICs. We want to use a particular GIGE card exclusively
for backup/restore and the others for applications, so we can't disable the
others. My server's IP is not on the same subnet as my client's GIGE card,
so the default route would not be over the GIGE.

QUESTIONS:  1.  Do I need to statically route the client GIGE IP address
to the server IP address in the client's routing table?

2.  Does this ensure that data coming from the server to the client
will come down that path as well? (I don't want to fall into your "I had
a client..." scenario.)


thx. joe


-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] Behalf Of
Bill Boyer
Sent: Tuesday, March 23, 2004 2:08 PM
To: [EMAIL PROTECTED]
Subject: Re: GIGE connectivity via TSM


If you use the GIGE address of the TSM server in your DSM.OPT file on the
client, then the BACKUP data will go over the GIGE card. Any data FROM the
TSM server TO the client will go out whichever network adapter on the TSM
server TCP/IP routing chooses. If you have multiple adapters in the TSM
server, the outbound traffic will first go out the adapter that is on the
same subnet as the client. If the GIGE and client are not on the same
subnet, then the traffic will go out the default route unless you have a
specific ROUTE in effect. If you want ALL data to go in and out the GIGE
card, either disable the other adapters or change the default route to be
the GIGE adapter.
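
A quick way to see which adapter outbound traffic will actually use is
to inspect the routing table on the TSM server (a hedged Unix example,
not TSM-specific):

   netstat -rn

then compare the client's address against the listed destination
subnets and the default route.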

I had a client that had, in their AIX TSM server, one 10/100 card and two
GIGE cards, ALL set to the same subnet. Backup data inbound to the TSM
server came in over a GIGE adapter, but since the 10/100 card was the
default route, all outbound traffic, and that included RESTOREs, went out
the 10/100. They couldn't figure out why restores took so long.

Bill Boyer
DSS, Inc.


-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] Behalf Of
Dwight Cook
Sent: Tuesday, March 23, 2004 1:18 PM
To: [EMAIL PROTECTED]
Subject: Re: GIGE connectivity via TSM


Use a traceroute command from your client box to the specific IP address
you want to use on your TSM server.
Your backups will take the same path.
That simple...
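
For example (server address hypothetical):

   traceroute 10.1.1.5

The path shown is the path your backup data will take; if it leaves
via the default gateway rather than the GIGE subnet, the static
routes discussed above are needed.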

Dwight E. Cook
Systems Management Integration Professional, Advanced
Integrated Storage Management
TSM Administration
(918) 925-8045




From: Wholey, Joseph (IDS DMDS)
Sent by: ADSM: Dist Stor Manager
To: [EMAIL PROTECTED]
Date: 03/23/2004 10:28 AM
Subject: Re: GIGE connectivity via TSM
Please respond to: ADSM: Dist Stor Manager






OK... now things are getting fuzzy.  This was my initial concern.  Can
anyone confirm exactly what one needs to do to ensure one traverses the
GIGE NIC for both inbound and outbound traffic?

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] Behalf Of
Dwight Cook
Sent: Tuesday, March 23, 2004 10:50 AM
To: [EMAIL PROTECTED]
Subject: Re: GIGE connectivity via TSM







tcpclientaddress only sets the initial address for the server to come in
on (last I remember); beyond that, standard system/network routing takes
over. I'm still at 4.2 and 5.1 (moving to 5.2 in the next couple of
months), but we currently put route statements on the client nodes to
ensure they exit out their proper interfaces to access the TSM server(s).


Dwight E. Cook
Systems Management Integration Professional, Advanced
Integrated Storage Management
TSM Administration
(918) 925-8045




From: Wholey, Joseph (IDS DMDS)
Sent by: ADSM: Dist Stor Manager
To: [EMAIL PROTECTED]
Date: 03/23/2004 09:28 AM
Subject: GIGE connectivity via TSM
Please respond to: ADSM: Dist Stor Manager

Re: 3494 Library and dual gripper?

2003-10-25 Thread Thomas A. La Porte
The dual-gripper is considered a high-availability option, and thus
each gripper must have access to all cartridges in the library.
Thus, the top two rows of each frame are blocked off because the
second gripper cannot reach them, and the bottom two rows are
similarly blocked because the top gripper cannot reach them.

 -- Tom

Thomas A. La Porte, DreamWorks SKG
mailto:[EMAIL PROTECTED]

On Sat, 25 Oct 2003, Hart, Charles wrote:

How would you lose slots, given that the second gripper will be
on the robot? I'm clearly missing something.


Re: New to TSM

2003-06-12 Thread Thomas A. La Porte
Agreed. We have three AIX servers in our studio--out of over 1000
servers and workstations--and those are our TSM servers.

Granted, we are a Unix shop, but as most non-AIX Unix admins will
tell you, AIX can be a very different beast. Nevertheless, the
amount of administrative work we have had to do on these boxes
over the past five years (yes, *five* years) has been negligible.

To bring in another thread: we are moving our TSM servers to
Linux, but not as a result of any disappointment with AIX, simply
to keep in line with corporate strategy.

 -- Tom

Thomas A. La Porte, Dreamworks SKG
mailto:[EMAIL PROTECTED]

On Thu, 12 Jun 2003, Dan Goodman wrote:

Mark Cini wrote:

  ...
 We haven't purchased any hardware yet for TSM, as management wants to be
 sure we pick the right combination.  I am leaning towards TSM server
 running on Win2000 because of our current in-house expertise.

 We are excited about the capabilities of TSM combined with an estimated
 $100k savings over the next year in hardware replacement costs alone.

Have you considered that perhaps the reason you have more in-house Win2k
expertise is that Win2k requires more support expertise?

Don't forget to factor in the Win2k support costs, and be sure to get
actuals from other users, not just published whitepapers.

A word to the wise...

AIX is a scalable, reliable workhorse, and is the native platform for
TSM. These are not things that should be overlooked, IMHO.

Dan Goodman
Systems Engineer Specialist
Thomas Jefferson University Hospital
215-503-6808

Daniel.Goodman AT mail.tju.edu




Re: Clear text passwords. Was: Automating dsmserv

2003-05-27 Thread Thomas A. La Porte
Since this topic of clear text passwords has arisen, I wonder if
anybody knows whether or not there is/are any outstanding
requirements or enhancement requests for Kerberos support within
TSM. This would be handy both in the situation discussed below,
and for general administrative and node access to the server.

If there isn't an outstanding request, I'll probably go ahead and
ask that one be made.

 -- Tom

Thomas A. La Porte, DreamWorks SKG
mailto:[EMAIL PROTECTED]

On Tue, 27 May 2003, Richard Sims wrote:

Is it just me or does everyone think that placing
sensitive userids and passwords in clear text is just
a bad thing?

It's bad.

Not complaining about the procedure here, you gotta do
what you gotta do, but has anyone complained to IBM
about requiring clear text passwords for this and
other scripts?

There is no requirement for passwords to be in scripts - that was
just someone's conventional implementation of a convenience script.
Discretionary halts of the TSM server - as is the case with any daemon-
style application - are best done via its conventional administrative
means: in TSM, that is disabling sessions and thereafter doing a Halt.
(Try to avoid thinking that everything running in a Unix environment
should be controllable via some rc script.)

During a TSM install, the install process plants a server start-up method
appropriate to the environment, such as /etc/inittab in AIX.
One can emulate whatever that is in a superuser invocation to
start the TSM server.  In traditional Unix, halting the TSM server
can be achieved automatically during Unix shutdown via /etc/rc.shutdown,
wherein that root-accessible-only script would contain a dsmadmc command
with password.  It is also conventional in Unix implementations of the
TSM server that the server shuts itself down cleanly when it receives a
SIGTERM signal (the default signal issued by the Unix 'kill' command).
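
As a concrete sketch of both approaches (admin ID and password are
hypothetical; in practice the script should be root-readable only):

   # preferred: disable sessions, then halt via the admin interface
   dsmadmc -id=admin -password=secret disable sessions
   dsmadmc -id=admin -password=secret halt

   # or send the default kill signal to the dsmserv process
   kill -TERM <dsmserv-pid>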

In thinking about sensitive information related to servers in general,
consider that, pretty much by definition, a server should be physically
secure and should not be a system used by ordinary users.  Files containing
sensitive information should have directory and file permissions which
restrict access to those who need it.  And where passwords must be used,
various means can be employed to avoid having to code them into files
(sudo, proxy, etc.).

  Richard Sims, BU




3494 library on earthquake pads?

2003-05-27 Thread Thomas A. La Porte
I realize that this isn't *strictly* TSM-related, but I know that
there are many 3494 library users and some of you may have
investigated this before me.

We are about to move our 3494 library within our data center and
are considering moving it onto seismic isolation pads (see
http://www.worksafetech.com/pages/isobase.html for an
illustration of what I'm describing). I would like to know if
anybody on the list has done this before or has investigated
doing anything similar.

We have been moving all of our data center equipment onto
ISO-Base platforms, but the 3494 library seems as though it might
have different requirements, what with its need to stay aligned,
etc.

Feel free to contact me directly or respond to the list if you
have had any experience in this area.

 -- Tom

Thomas A. La Porte, DreamWorks SKG
mailto:[EMAIL PROTECTED]


Re: Clear text passwords. Was: Automating dsmserv

2003-05-27 Thread Thomas A. La Porte
On Tue, 27 May 2003, Stapleton, Mark wrote:

One of the nice things about how Tivoli has handled TSM is that the
authentication system is *exactly* the same, no matter what the server
and client OS platforms may be. The same can be said for the interfaces
and the way administration is performed. Inserting something like
Kerberos into the mix would mean you'd have to make it work for all
platforms that the TSM server supports--including MVS, OS/400, and
(shudder) Windows.

I'm not suggesting that Kerberos should be required for use in
TSM, just that it would be nice if TSM supported it. Having said
that, though, we have a mixed Unix/PC/Macintosh environment, and
we support Kerberos on all of these platforms. With Win2K it's
essentially built into the OS, so I should think that would be
the platform with the least worries. As for support on other
platforms, Kerberos runs on all of the server platforms that the
most recent versions of Tivoli Storage Manager support:
Windows NT/2000, AIX, HP-UX, Solaris, MVS/OS390, and Linux. And
the good bit is that all of those Kerberos implementations share
a common API, so it should not involve much coding to make it
work on all of the platforms.

There are ways of scripting TSM tasks that can sidestep the clear text
stuff, much the same as the ways you script FTP sessions without putting
passwords where users can gefingerpoken.

True. I was merely suggesting that using Kerberos could
solve the problem in a conventional and secure manner.

 -- Tom

Thomas A. La Porte, DreamWorks SKG
mailto:[EMAIL PROTECTED]


Re: FastT700 with FlashCopy

2003-03-28 Thread Thomas A. La Porte
On Thu, 27 Mar 2003, Jin Bae Chi wrote:

Suad, Thanks for your help. Let me see if I understand correctly.

1. When you said putting db in backup mode, I can think of
ARCHIVELOG mode for Oracle, which will take a few seconds and
execute the flashcopy;
Q: I heard flashcopy takes a lot less time to create a
snapshot. Any analogy about how fast? How much time would you
gain? Let's say mine takes an hour to hot-backup Oracle through
TDP.

The database should already be running in ARCHIVELOG mode--you
can not make a useful hot/online backup without it. It is also
not something that can be changed while the database is open,
only when it is mounted, so switching between ARCHIVELOG and
NOARCHIVELOG mode would require taking the database down and up.

What Suad meant by putting the db into backup mode was to put
each of the tablespaces into backup mode by issuing the command
'alter tablespace TABLESPACENAME begin backup;'

Presuming that flashcopy is something similar to a Netapp
snapshot, you would put all of your tablespaces in backup mode,
then execute the flashcopy. Again, presuming it performs similarly
to a Netapp snapshot, you should expect that process to be very
short--on the order of seconds to a minute or so depending on the
size of the database. Once the flashcopy is completed, you can
take the tablespaces out of backup mode ('alter tablespace TSPACE
end backup;'). At this point you can simply transfer the
flashcopy files to TSM via a regular backup interface.
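
A minimal shell sketch of that sequence (the tablespace name and the
snapshot step are placeholders; a real script would loop over all
tablespaces):

   sqlplus -s "/ as sysdba" <<EOF
   alter tablespace USERS begin backup;
   EOF
   # ... trigger the flashcopy/snapshot of the underlying volumes here ...
   sqlplus -s "/ as sysdba" <<EOF
   alter tablespace USERS end backup;
   EOF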

2. Change to NOARCHIVELOG mode for Oracle, mount the pseudo volume, and
back up that filesystem.
Q: Should I create a separate set of disks for the fs (pseudo vol)? Can I
do LV backup?
Q: Can I use the same fs, after backing it up, for testing or report
purposes before deleting it?

Sorry for too many questions. Thanks again.

Hmmm, not quite sure what you're asking here, but I would
recommend a trip through the Oracle docs regarding standby
databases if what you want is a separate reporting instance.


Re: Fragmented Database Maybe?

2003-02-28 Thread Thomas A. La Porte
On Fri, 28 Feb 2003, Farren Minns wrote:

Hi Richard and thanks for the response.

[...]

However, the reason for my post was more that I was concerned about the
"bytes moved" figures (below). The 5000MB volume is either plainly
being shown incorrectly, or perhaps something more serious is afoot.

Is this something anyone has seen before?

                        5000MB                -          947,912,704
                        2500MB                -        2,621,440,000
                        1000MB                -        1,048,576,000

What you are seeing is the difference between assigned capacity 
and percentage utilized (see the output of q db).

It would appear that your database has an assigned capacity of 
8500MB (5000 + 2500 + 1000), and a percent utilization of 51.8%, 
with the second and third of your data volumes being full, and 
all new database pages being allocated from the 5000MB volume.
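
The arithmetic bears this out:

   947,912,704 + 2,621,440,000 + 1,048,576,000 = 4,617,928,704 bytes = 4,404MB
   4,404MB / 8,500MB = 51.8% utilized

so the "bytes moved" figures reflect what is actually in use on each
volume (the 2500MB and 1000MB volumes are full), not an error on the
5000MB volume.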

 -- Tom

Thomas A. La Porte, DreamWorks SKG
mailto:[EMAIL PROTECTED]  


Kerberos support [was Re: password encryption]

2003-02-20 Thread Thomas A. La Porte
While we're on the subject of passwords and password encryption,
is there any chance that TSM might support Kerberos in a future
release?

 -- Tom

Thomas A. La Porte, DreamWorks SKG
mailto:[EMAIL PROTECTED]

On Wed, 19 Feb 2003, Seay, Paul wrote:

In encryption speak.  The node name is usually called the public key.  The
private key is what is used to encrypt the message.  This is a nice
implementation because during password change (which is probably in the
message) the new encyption key (password) is not exposed.

Paul D. Seay, Jr.
Technical Specialist
Northrop Grumman Information Technology
757-688-8180


-Original Message-
From: Andrew Raibeck [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, February 19, 2003 8:02 PM
To: [EMAIL PROTECTED]
Subject: Re: password encryption


To clarify my earlier response on this:

The (encrypted) password is not actually sent between client and server,
except when the password is being changed. During authentication, the client
sends the server a message that is encrypted using the password as the key.
The server knows what the decrypted message should be, so if the wrong
password was used to encrypt the message, then the authentication will fail.

Regards,

Andy

Andy Raibeck
IBM Software Group
Tivoli Storage Manager Client Development
Internal Notes e-mail: Andrew Raibeck/Tucson/IBM@IBMUS
Internet e-mail: [EMAIL PROTECTED] (change eye to i to reply)

The only dumb question is the one that goes unasked.
The command line is your friend.
Good enough is the enemy of excellence.




Andrew Raibeck/Tucson/IBM@IBMUS
Sent by: ADSM: Dist Stor Manager [EMAIL PROTECTED] 02/19/2003 14:56
Please respond to ADSM: Dist Stor Manager


To: [EMAIL PROTECTED]
cc:
Subject:Re: password encryption



The password is indeed encrypted.

Regards,

Andy

Andy Raibeck
IBM Software Group
Tivoli Storage Manager Client Development
Internal Notes e-mail: Andrew Raibeck/Tucson/IBM@IBMUS
Internet e-mail: [EMAIL PROTECTED] (change eye to i to reply)

The only dumb question is the one that goes unasked.
The command line is your friend.
Good enough is the enemy of excellence.




Prather, Wanda [EMAIL PROTECTED]
Sent by: ADSM: Dist Stor Manager [EMAIL PROTECTED] 02/19/2003 14:40
Please respond to ADSM: Dist Stor Manager


To: [EMAIL PROTECTED]
cc:
Subject:Re: password encryption



I've always been told that the password is NOT sent in plain text, it's
encrypted. (but I've never had a sniffer to check it myself).

-Original Message-
From: Eliza Lau [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, February 19, 2003 10:36 AM
To: [EMAIL PROTECTED]
Subject: password encryption


Does anyone know how the stored password on the client machine is passed to
the server for authentication?

The user has 'password generate' in his dsm.opt.  The password is stored in
the Registry of his Windows 2000 client.  When the TSM client starts is the
password sent to the server in plain text or encrypted?

Thanks,
Eliza Lau
Virginia Tech Computing Center
1700 Pratt Drive
Blacksburg, VA 24060





AUTOMOUNT options file option?

2002-09-25 Thread Thomas A. La Porte

Has anybody successfully used the AUTOMOUNT option in a client
options file, along with the 'DOMAIN all-auto-nfs' statement?

Regardless of what I choose in my AUTOMOUNT directive, I receive
the following error message:

09/25/02   16:55:21 Filesystem /studio/sin/mstr/general is no
automounted filesystem, ignoring option automount for this
filesystem.

As can be seen from the mnttab file, this is indeed an
automounted filesystem:

# grep /studio/sin/mstr /etc/mnttab
auto.studio-sin-mstr  /studio/sin/mstr  autofs  indirect,ignore,dev=428012b  1032470327


I get the same behavior on a TSM 4.2 client on Irix, TSM 5.1 on
Solaris, and TSM 4.2 and 5.2 on Linux.

I have also tried it for my home directory 'AUTOMOUNT
/usr/home/tlaporte' with similar results.

For what, exactly, is the AUTOMOUNT directive looking?

The frustrating point is that

'dsmc incr -domain=/usr/home/tlaporte' seems to work fine, and
a query of that node reports a filespace type of NFS3.

Thanks for anybody's thoughts or experiences on the matter.

 -- Tom

Thomas A. La Porte, DreamWorks SKG
mailto:[EMAIL PROTECTED]



Re: AUTOMOUNT options file option?

2002-09-25 Thread Thomas A. La Porte

On Wed, 25 Sep 2002, Chetan H. Ravnikar wrote:

Tell me what you want to achieve and I will try to help you.
I had the same problem getting automount working on Solaris 2.8 and TSM
4.2.20. In fact I never got it to work, but I have a workaround.

In my environment, I have a design to back up auto-mounted Netapp filers'
snapshot directories over NFS.

Cheers..

We are also backing up Netapp filers over NFS, and we have used
several solutions, none of which are completely satisfactory to
us. We are trying to see if we can't work under a supported
configuration, since TSM claims to support automounted
filesystems.



Re: Eternal Data retention brainstorming.....

2002-08-15 Thread Thomas A. La Porte

As a refinement to Wanda's final suggestion, couldn't you alter
your policies for 'del volhist type=dbb' (or simply retain the
current copy of your database backup exclusive of the volume
history), and then modify your storage pool's reusedelay
parameter appropriately?

The drawback that I see is that there is no "forever" setting
for reusedelay; 9999 days is the maximum. Granted, that's over 27
years, but we know how long government investigations can last! :-)

-- Tom

Thomas A. La Porte, DreamWorks SKG
mailto:[EMAIL PROTECTED]

On Thu, 15 Aug 2002, Prather, Wanda wrote:

Besides that, the best solution I can think of: change all the management
classes to never expire/unlimited versions, copy the DB to your test
server, lock all the client nodes, and put your tapes on a shelf.
Save the last DB backup, just in case.
Start your production server over with a clean DB, back up everything new,
and move on.
If anybody needs their old stuff, get a copy via export (from test) and
import (back to production).

That would keep you from (immediately) doubling your tape requirements,
but will cost you some hardware to make tape available for your test
system.



Wanda Prather



Re: Help with select statement

2002-07-18 Thread Thomas A. La Porte

I'll take a stab:

select cast(entity as varchar(12)) as "Node Name", \
cast(activity as varchar(10)) as Type, \
sum(cast(affected as decimal(7,0))) as files, \
sum(cast(bytes/1024/1024 as decimal(12,4))) as Phy_MB \
from summary where end_time>=timestamp(current_date -1 days, '09:00:00') \
and end_time<=timestamp(current_date, '08:59:59') \
and (activity='BACKUP' or activity='ARCHIVE') \
group by entity, activity \
order by "Node Name"
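
Saved to a file, the query can be run as an administrative macro
(ID, password, and file name hypothetical):

   dsmadmc -id=admin -password=secret macro daily_summary.mac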

On Thu, 18 Jul 2002, L'Huillier, Denis wrote:

Hello -

I wrote the following select statement (with a lot of plagiarism).

/* --- Query Summary Table  */
select cast(entity as varchar(12)) as "Node Name", \
cast(activity as varchar(10)) as Type, \
cast(affected as decimal(7,0)) as files, \
cast(bytes/1024/1024 as decimal(12,4)) as Phy_MB \
from summary where end_time>=timestamp(current_date -1 days, '09:00:00') \
and end_time<=timestamp(current_date, '08:59:59') \
and (activity='BACKUP' or activity='ARCHIVE') \
order by "Node Name"

The problem is, if a node performed 10 backups and 5 archives over the
24-hour period, there are 15 lines for that node in the output: 10 for
backup and 5 for archive.
Is there a way I can sum the affected and bytes columns for a node with
activity=BACKUP, and again for those with activity=ARCHIVE?
Basically, what I want is at most 2 lines per node: one line the sum of
affected files and bytes for all BACKUP activities, and the other line
for that node the sum of affected files and bytes for all ARCHIVE
activities.

I think I'm over my head.


Regards,

Denis L. L'Huillier
212-647-2168





Linux IA-64 client?

2002-06-25 Thread Thomas A. La Porte

Anybody know of any plans for a TSM client for Linux on the IA-64
platform?

Thanks.

Thomas A. La Porte, DreamWorks SKG
mailto:[EMAIL PROTECTED]



Re: Ordering license upgrades

2002-05-17 Thread Thomas A. La Porte

And while we're on the subject of licenses and upgrades, I was
informed by my reseller that Tivoli has withdrawn the TSM
Enterprise Edition from marketing. All of the constituent parts
are still available, but you can not purchase the bundle. Word is
that it was too confusing to end users. I hope it didn't also
have to do with the fact that our software quote went up by
$70,000 when the change was made.

 -- Tom

Thomas A. La Porte, DreamWorks SKG
mailto:[EMAIL PROTECTED]

On Sat, 18 May 2002, Zlatko Krastev wrote:

'cause there is no published rule on how to convert points to processors.
I have been trying to find the answer since the announcement of the new
licensing scheme, but still without success.
Just ask them for NN processors or MM points. In both cases you have the
right to use them on different platforms.

Zlatko Krastev
IT Consultant




Please respond to ADSM: Dist Stor Manager [EMAIL PROTECTED]
Sent by:ADSM: Dist Stor Manager [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
cc:

Subject:Ordering license upgrades

We have a TSM 4.2.1.9 server running under OS/390. We are about to order
more client licenses. The last time we did this IBM and/or Tivoli required
us to tell them how the new licenses would be divided among different
platforms: how many for Windows NT, how many for AIX, and so on. The
charge per client license is the same regardless of operating system. The
'register license' command does not provide any means for distinguishing
between Windows NT licenses, AIX licenses, and so on. Why in the world
do we need to provide platform information in order to get more client
licenses?





Re: Recovery Log almost 100%

2002-05-02 Thread Thomas A. La Porte

I wonder, also, if there is still any discussion about supporting
the use of an alternate RDBMS underneath TSM. It is quite clear
that there are now many more sites with database sizes in the
25-50GB+ range. Five years ago I felt very lonely with a database
of this size, but given the discussions on the listserv over the
past year I feel more comfortable that we are no longer one of
the only sites supporting TSM instances that large. It has always
seemed to me that the database functions of TSM have been the
most problematic (deadlock issues, log full issues, SQL query
performance problems, complicated and unclear recommendations for
physical database layout, etc.). All of these problems have been
solved by Oracle, DB2, and Sybase. Granted, there is the issue
that plugging in an external database adds greatly to the
complexity of TSM, and reduces its black-box-ness, but I think
the resources are available to administer such a beast at
the large sites that require very large databases.

More food for thought *early* on a Thursday morning.

 -- Tom

Thomas A. La Porte
DreamWorks SKG
[EMAIL PROTECTED]

On Thu, 2 May 2002, Paul Zarnowski wrote:

TSM Development is fully aware of the log issue and based on some
conversations at SHARE, I am comfortable that they are taking steps to
address it (with or without a requirement).  I don't think this issue will
be completely solved quickly, as it is a rather complex set of
problems.  In the short term, look for tools to show up that will help TSM
administrators to identify which session has the log tail pinned, and also
address one of the issues that Paul refers to below, which causes the log
head to advance quickly (and shows up as a high dirty page count).

When the log fills, two things happen:  The log tail must be pinned by a
long-running in-flight transaction, and the log head must advance around to
catch up to the tail.  To keep the log from filling, you can either release
the tail or slow down the head.  It is not easy to identify the session or
thread that has the log tail pinned.  I don't know if the tools I refer to
above have shown up in 4.2.2 or 5.1 (we're still running 4.2.1).  There are
a couple of things that can advance the head quickly.  Inventory expiration
and filespace deletion.  If you find yourself in a situation where you see
the log filling quickly and don't know what has the tail pinned, check for
these two processes and kill them if you see them.  This will significantly
slow down the growth rate of the log, and give the oldest in-flight
transaction more of a chance to complete.  We have written a monitor to do
this automatically, and it has really helped us.  If neither of these
processes are running, then you can start guessing about which session
might have the tail pinned.  In this situation, we look for an old session
that has been running for a long time.  This might be a session backing up
over a slow speed line.  If the log nears 100%, we try to avoid it filling
completely by cancelling all sessions (if we have time) or simply HALTing
the server and restarting it.  This generally clears the log when the
server comes back up, and avoids having to do an offline extend of the log
(which has already been discussed).  If you are running
logmode=rollforward, be aware that when you later reduce the log size to
delete the temporary extension, you will (I think) trigger a full database
backup.

If you are at v4.2, you can have a larger log, up to 13GB.  This can also
provide some relief.
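
The offline extend mentioned above goes roughly like this (path and
size in MB are hypothetical; check the procedure for your server
level):

   dsmfmt -m -log /tsm/log/logvol2 1000
   dsmserv extend log /tsm/log/logvol2 1000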

..Paul

At 12:13 AM 5/2/2002 -0400, Seay, Paul wrote:
Actually, this was significantly discussed at Share and the basic
requirement is TSM, take action whatever necessary to keep the server up.
Start by cancelling expiration.  Then nail the client that has the log
pinned.  There were also a number of issues discussed.  Apparently, there
are a lot of dirty blocks being recorded in the log that do not have to be.
I am working to get these requirements voted on.

Paul D. Seay, Jr.
Technical Specialist
Naptheon, INC
757-688-8180


-Original Message-
From: Thomas A. La Porte [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, May 01, 2002 4:36 PM
To: [EMAIL PROTECTED]
Subject: Re: Recovery Log almost 100%


Given that this is one of the more common FAQ-style questions on this
listserv, I wonder if it's not time for someone to submit a TSM requirement
that the server behave better in a recovery log full situation. This happens
in other databases w/o causing a SIGSEGV. Oracle, for example, simply
prevents any database changes, and only allows new administrative
connections to the database until the log full situation is cleared (by
archiving the online redo logs). It seems that TSM could behave similarly.

Certainly the server is not in a great state when the log segments are full,
but it would seem easier to recover, and somewhat less confusing to
administrators, if it could be done online, rather than in the manner in
which it is handled now. We've all probably

Re: Recovery Log almost 100%

2002-05-01 Thread Thomas A. La Porte

Given that this is one of the more common FAQ-style questions on
this listserv, I wonder if it's not time for someone to submit a
TSM requirement that the server behave better in a recovery log
full situation. This happens in other databases w/o causing a
SIGSEGV. Oracle, for example, simply prevents any database
changes, and only allows new administrative connections to the
database until the log full situation is cleared (by archiving
the online redo logs). It seems that TSM could behave similarly.

Certainly the server is not in a great state when the log
segments are full, but it would seem easier to recover, and
somewhat less confusing to administrators, if it could be done
online, rather than in the manner in which it is handled now.
We've all probably experienced a situation where we are close to
the limit on the log size, so we only extend the log a little
bit, and then there is a rush to see if our database backup is
going to finish and clear the log full condition before we use up
the additional log space--lest we find ourselves in the same
perilous condition, only *closer* to the seemingly arbitrary
maximum log size.

 -- Tom

Thomas A. La Porte
DreamWorks SKG
[EMAIL PROTECTED]

On Wed, 1 May 2002, Sung Y Lee wrote:

When the log reaches 100%, just pray that the TSM server process will not
crash.

I say the key is prevention.  Whatever you can do to prevent that from
happening is the best answer.

There are many things you can do to prevent the log from growing to 100%.
One that works for me: I have LogMode set to Roll Forward with a dbb
trigger at 38% and 3 incrementals between fulls (q dbb).
The log is also set to the maximum allowed without going over the limit,
plus room for extension should it ever reach 100% and TSM crash. I have it
set at 4.5 GB (to be safe).  The max allowed recovery log for TSM 4.1 is
5.3 GB?? I can't recall the exact value.


If the TSM server is in roll-forward log mode, then more than likely it
will have a dbb trigger set at a certain point.   For example,
adsm> q dbb

Full         Incremental   Log Full     Incrementals
Device       Device        Percentage   Between
Class        Class                      Fulls
----------   -----------   ----------   ------------
IBM3590      IBM3590       38           3

When the recovery log reaches 38%, an incremental database backup will kick
off, up to 3 times, before a full database backup is performed.   Most of
the time the TSM server will prevent other sessions from establishing when
the recovery log reaches 100%, but will allow the triggered database backup
to complete, which will bring the recovery log down to 0.  Sometimes TSM
will simply crash.  If it crashes then you will need to do an emergency
recovery log extend and bring TSM back up.
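
For reference, the trigger shown in the example above is created with
something like (device class name taken from the example):

   define dbbackuptrigger devclass=IBM3590 logfullpct=38 numincremental=3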

Sung Y. Lee
E-mail [EMAIL PROTECTED]



From: brian welsh <brianwelsh3@HOTMAIL.COM>
Sent by: ADSM: Dist Stor Manager
To: [EMAIL PROTECTED]
Date: 05/01/2002 01:23 PM
Subject: Recovery Log almost 100%
Please respond to: ADSM: Dist Stor Manager





Hello,

AIX 4.3.3 and server 4.1.1.0.

Last night two archive schedules had a problem. On the clients there were
files in a kind of loop and TSM tried to archive them. Result: recovery log
almost 100%. This was the first time our log has been that high. The
problem on the client is solved, but now I have the following question.

I was wondering how other people prevent the log from growing to 100%, and
how to handle things after the log has reached 100%.

Any tip is welcome.

Brian.







Re: About TSM API

2002-04-29 Thread Thomas A. La Porte

Fred,

If by talking to the server without a client you mean running
administrative commands, there is no TSM administrative API. The
client API is for backup/restore and archive/retrieve operations,
and that is the only API available.

 -- Tom

Thomas A. La Porte
DreamWorks SKG
[EMAIL PROTECTED]

On Mon, 29 Apr 2002, Fred Zhang wrote:

Hi,

I am looking for a TSM API. So far I can only find the client API. Does TSM
have an API which we can use to talk to the server without a client
installed? Any information would be greatly appreciated.

=
Fred Zhang
NetiQ Corporation
3553 N. First St.
San Jose, CA 95134
phone: (408)856-3102
fax: (408)856-3102
e-mail: [EMAIL PROTECTED]
=





Re: ANR0534W - size estimate exceeded

2002-04-26 Thread Thomas A. La Porte

We have seen this behavior in an API client that we have written,
and we have verified that (a) COMPRESSION is set to NO and (b) we
are sending exactly the same number of bytes that we say we are
sending. We have determined that there is essentially a race
condition when there are multiple sessions writing to the disk
pool. If, at a given point in time, four clients contact the
server and each announces the intention to write 100MB, the server
checks at the beginning of each of the four sendObj() calls. If
there is 150MB of space available in the diskpool at the time
that each sendObj() call happens (API programmers forgive me if
I've gotten the calls slightly incorrect; I've not written the
API client, only assisted as the TSM admin), the server gives
each of the four clients the OK to send the data. Of course, they
will all likely fail, as they will fill up the 150MB of available
space before any *one* of the clients finishes.

This has been one of the frustrating aspects of the TSM BA client
not being written with the TSM API. Clearly the BA client does
not encounter this problem, yet there is no straightforward
method to solve the problem with the API. I believe that the TDP
products are all written with the API, so I am not surprised to
hear that they suffer from this problem, as well.

AFAIK, the best we have been able to get from support is the
admission that the error message is inaccurate, or at the very
least misleading.

 -- Tom

Thomas A. La Porte
DreamWorks SKG
[EMAIL PROTECTED]

On Fri, 26 Apr 2002, Tomas Hrouda wrote:

Gerhardt,

Sending of an incorrect filesize typically happens (in my experience) when
compression is turned on and the compressed file grows larger than the
original file. Try using no compression, or COMPRESSALWAYS NO, to avoid
bad file-size prediction by the client. It was discussed here some time
ago.

Hope this helps.
Tom

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of
Gerhard Wolkerstorfer
Sent: Friday, April 26, 2002 12:39 PM
To: [EMAIL PROTECTED]
Subject: Antwort: Re: Antwort: ANR0534W - size estimate exceeded


Zlatko,
you are right, BUT... when the TDP sends an incorrect filesize to the TSM
server, the maxsize parameter won't work.
(The TDP says it is sending 100 bytes, so the server will let the file go
to the diskpool, but the file will in fact be 20 GB, which will fill up
your diskpool and bring up the message indicated (storage exceeded).)

And for tracing purposes I wanted to know if there is any way to check
the filesize which the client is sending to the server.

Gerhard Wolkerstorfer





[EMAIL PROTECTED] (Zlatko Krastev) on 26.04.2002 11:34:22

Please respond to [EMAIL PROTECTED]

To:   [EMAIL PROTECTED]
cc: (bcc: Gerhard Wolkerstorfer/DEBIS/EDVG/AT)

Subject: Re: Antwort: ANR0534W - size estimate exceeded



Isabel, Gerhard,

you can set the MAXSIze parameter of the disk pool. I usually set it to
about 30-60% of the diskpool size (or, better, of the pool's free size,
i.e. size - highmig). Files larger than this will bypass the diskpool and
go down the hierarchy (next stgpool), which might be a tape pool, i.e.
the file will go directly to tape.
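
For example, for a 10GB disk pool (pool name hypothetical):

   update stgpool DISKPOOL maxsize=5G

Files larger than 5GB will then bypass DISKPOOL and land in its next
storage pool.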

Zlatko Krastev
IT Consultant




Please respond to ADSM: Dist Stor Manager [EMAIL PROTECTED]
Sent by:ADSM: Dist Stor Manager [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
cc:

Subject:Antwort: ANR0534W - size estimate exceeded

Isabel,
we still have this problem with TDP for Informix.
It seems that the TDP (sometimes?) isn't sending the correct filesize,
and the file (DB backup) exceeds your DISKPOOL and cannot swap to the
tapepool. If the TDP sent the correct filesize, TSM would presumably go
directly to tape and the problem wouldn't arise.

Question: How can I check the filesize and/or filename the client is
sending to the server?

Regards,
Gerhard Wolkerstorfer





[EMAIL PROTECTED] (Isabel Gerhardt) on 26.04.2002 09:17:44

Please respond to [EMAIL PROTECTED]

To:   [EMAIL PROTECTED]
cc: (bcc: Gerhard Wolkerstorfer/DEBIS/EDVG/AT)

Subject: ANR0534W - size estimate exceeded



Hi Listmembers,

we recently started to receive the following errors:

04/23/02 20:30:29 ANR0534W Transaction failed for session 1 for node
                  NODE1 (WinNT) - size estimate exceeded and server is
                  unable to obtain additional space in storage pool
                  DISKPOOL.
04/24/02 20:38:19 ANR0534W Transaction failed for session 173 for node
                  NODE2 (TDP Infmx AIX42) - size estimate exceeded and
                  server is unable to obtain additional space in storage
                  pool DISKPOOL.

From previous messages on the list I checked that the diskpool has
caching disabled and the clients have no compression allowed.

I was away from work for a while, and meanwhile a server update has been
done.
If anyone can point me to the source of this error, please help!

Thanks in advance,
Isabel

Re: TDP for NDMP

2002-04-23 Thread Thomas A. La Porte

We are definitely anxious to see file-level granularity. This
(NDMP support) is something Legato has had for well over a year
now, so TSM is a bit behind in its support. Given the success of
Network Appliance, I would think there is now a large enough
user base interested in this support.

 -- Tom

Thomas A. La Porte
DreamWorks SKG
[EMAIL PROTECTED]

On Tue, 23 Apr 2002, Don France (TSMnews) wrote:

You've just about got it all.  There is some concern about this whole
thing; currently, you are limited to image backups using NDMP; if you want
file-level granularity, you must use an NFS mount to access the file system
from a supported b/a client... not as pretty as you'd like, but that's the
way it is.  Then, too, you are limited to NAS filers; Auspex and other
filers are not supported.

The storage pool limitation comes from the format used to store the data;
they invented a new concept/format for NDMP storage pools, so these must be
kept segregated from standard storage pools;  I THINK you can still use the
backup stg command -- but you cannot intermix the different types of
primary storage pools in a common copy pool.

IBM/Tivoli is wondering how much market there is to justify expanding this
support -- (a) file-level granularity and/or (b) other filers.  IMHO, I'd
want to see file-level granularity.  Comments can be posted here...

Don France
Technical Architect - Tivoli Certified Consultant

Professional Association of Contract Employees (P.A.C.E.)
San Jose, CA
(408) 257-3037
[EMAIL PROTECTED]



- Original Message -
From: Gerald Wichmann [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Wednesday, April 17, 2002 9:47 AM
Subject: TDP for NDMP


 Has anyone used and had any experiences with NDMP backups? I'm curious
 what kind of experiences you've had using the product. Pros/cons, etc..

 I've been reading the admin guide section on it and it seems like a
 whole other beast compared to what I'm used to with TSM. E.g. your NDMP
 file server has to be attached to the tape drive. Seems like this would
 be undesirable if you had a lot of file servers. Would you have to buy a
 comparable # of tape drives? Or how many tape drives could you share
 between your file servers and TSM server?

 There's a comment that you cannot back up a storage pool used for NDMP
 backups. Does this mean you can't create a copypool and a 2nd copy of the
 backed-up data?

 Do the NDMP backups only backup the entire file server and restore the
 entire file server? I guess it's not clear to me on what level of
 granularity there is in backing up NDMP file servers.

 Seems like NDMP backups require their own TSM domain/policyset/etc
 structure as well.

 Also I understand it only works with Network Appliance NAS devices?

 Appreciate anyone with practical knowledge to comment on their
 experience and perhaps clarify some of my questions above.. thank-you





Re: How Best to Suspend a Schedule

2002-04-19 Thread Thomas A. La Porte

That would involve recycling the server, wouldn't it? I'm
guessing from Paul's e-mail that he's looking for something he
can do online.

What I've done in the past is update the start date of the
schedule to the following day, that way it won't run in the
coming evening, but will pick up again the next day without me
having to intervene.
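
In command form (domain and schedule names hypothetical):

   update schedule STANDARD NIGHTLY startdate=04/20/2002

which leaves the node associations intact but skips the coming
evening's run.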

On Fri, 19 Apr 2002, Edgardo Moso wrote:

For #1 you can set disablesched yes in the dsmserv.opt file.





From: Kilmer, Paul [EMAIL PROTECTED] on 04/19/2002 09:02 AM

Please respond to ADSM: Dist Stor Manager [EMAIL PROTECTED]

To:   [EMAIL PROTECTED]
cc:
Subject:   How Best to Suspend a Schedule

As a relative newbie to TSM, I'd be interested to hear what is the easiest
way to temporarily:
1- Suspend a backup schedule from running.
2- Suspend an individual client's participation in a backup schedule.

I've accomplished #2 by removing the client schedule association, though I
don't regard this method to be particularly elegant.

Thanks in advance.

Paul E. Kilmer
Lead Technical Architect
New World Pasta
[EMAIL PROTECTED]
(717) 526-2431





Re: Help Urgent!

2002-04-11 Thread Thomas A. La Porte

This often means that the tape volume on which the object resides
is in status UNAVAILABLE.

Check your activity log on the server at the time of the restore.
You should see additional information.

 -- Tom

Thomas A. La Porte
DreamWorks SKG
[EMAIL PROTECTED]

On Thu, 11 Apr 2002, Amini, Mehdi wrote:

While restoring Win2K systemobject, I got messages Data unavailable to
Server

What does this mean?

Thanks

Mehdi Amini
LAN/WAN Engineer
ValueOptions
3110 Fairview Park Drive
Falls Church, VA 22042
Phone: 703-208-8754
Fax: 703-205-6879









Re: Restore TSM server to different OS?

2002-03-19 Thread Thomas A. La Porte

Search the archives.

Although it may be technically possible to do this move (though I 
doubt that you could go from Windows to a Unix platform), it is 
definitely not supported.

Wanda Prather had an excellent summary of the discussion when 
this came up again last fall:

http://msgs.adsm.org/cgi-bin/get/adsm0111/951.html

 -- Tom

Thomas A. La Porte
DreamWorks SKG
[EMAIL PROTECTED]

On Tue, 19 Mar 2002, Daniel Sparrman wrote:

Hi

Restoring the TSM server (database) to a different OS will work fine as
long as you stay on an open-systems platform such as Windows NT/2000,
HP-UX, AIX or Solaris. Doing this on an AS/400 or S/390 won't work.

This applies to doing a restore of the TSM database.
 
Best Regards
 
Daniel Sparrman
---
Daniel Sparrman
Exist i Stockholm AB
Bergkällavägen 31D
192 79 SOLLENTUNA
Switchboard: 08 - 754 98 00
Mobile: 070 - 399 27 51

-ADSM: Dist Stor Manager [EMAIL PROTECTED] wrote: -

To: [EMAIL PROTECTED]
From: Hans Hedlund [EMAIL PROTECTED]
Sent by: ADSM: Dist Stor Manager [EMAIL PROTECTED]
Date: 03/19/2002 06:15PM
Subject: Restore TSM server to different OS?

Is it possible to restore a TSM 4.1 server running on Windows NT to a
Solaris server?

I'm practicing some DR stuff, and my scenario is that one of our WinNT
TSM 4.1 servers has crashed, and we need to restore the latest BACKUP DB
data, but onto a Solaris server at our disaster site.

Is that possible? I receive a mysterious ANR1368W Input volume
16440215.DBB contains sequence number 16777216; volume sequence number 1
is required. when doing my DSMSERV RESTORE DB.

Regards, Hans Hedlund











Re: Oracle on Linux backups

2002-03-13 Thread Thomas A. La Porte

There was an old Redbook on using ADSM to back up databases. It
includes some very good examples of using shell scripts to
accomplish hot backups of Oracle (and many other databases)
without the need for a TDP- or RMAN-style backup.

We use a method like this to back up all of our Oracle instances
(Linux or otherwise). None of our instances is large enough to
warrant the additional complication of RMAN or TDP.

In addition to the Redbook above, if you are at all involved in
the backup and restoration of Oracle databases, I would *highly*
recommend the Oracle Press book Oracle Backup and Recovery by
Velpuri, et al (it exists in several editions for the various
versions of Oracle).

 -- Tom

Thomas A. La Porte
DreamWorks SKG
[EMAIL PROTECTED]

On Wed, 13 Mar 2002, Zoltan Forray/AC/VCU wrote:

I had earlier asked about using the TSM TDP for Oracle on a Linux box but
never got a response.  From the silence and other messages related to this
topic, I can only surmise that this won't/doesn't work and I shouldn't
waste my time/money purchasing the TDP.

So, since I can't change the platform, I need to know how anyone else in
this situation/configuration backs up their Oracle systems on Linux. Or
are most folks waiting for TSM 5.x and the supposedly forthcoming TDP that
will work on Linux, keeping Oracle on supported/backup-able OSes in the
meantime?

Note, I am not a DB person. I don't know the first thing about Oracle. I
have seen some discussions about an RMAN program/utility, but I only know
the name.

The objective is to do live backups of the Oracle DB (i.e. without
shutting down the application) and be able to restore it on a backup/test
machine.

We do a nightly backup, but all attempts to restore from these backups to
another box have produced useless files. Oracle is very unhappy!!

Any and all help would be greatly appreciated.

Zoltan Forray
Virginia Commonwealth University - University Computing Center
e-mail: [EMAIL PROTECTED]  -  voice: 804-828-4807





Re: Backup to multiple tape simultaneously

2002-03-13 Thread Thomas A. La Porte

dsmadmc -id=admin -pa=admin upd stg POOLNAME collocate=filespace

On Wed, 13 Mar 2002, ARhoads wrote:

except for all of the contention for tapes...
- Original Message -
From: Bill Smoldt [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Wednesday, March 13, 2002 10:08 AM
Subject: Re: Backup to multiple tape simultaneously


 Although it is very easy to start a dsmc restore filesystem for each
 filesystem and do parallel restores today.

 Bill Smoldt
 STORServer, Inc.



API / Client compatibility

2002-02-07 Thread Thomas A. La Porte

We are in the process of writing a Linux archive client using
the TSM API (4.2), storing objects in a TSM 4.1 server running
on AIX. We would like to avoid writing a retrieve client, as
we're satisfied with the functionality of the TSM BA client. I
know there have been interoperability issues in the past, but the
API manual clearly addresses *some* of these issues.

What we are wondering is how file permissions need to be stored
by the API client so that the BA client can retrieve them
properly.

We attempted to pass a 'struct stat' in the objAttrPtr->objInfo
field (see below), but when we retrieve the files using the BA
client, all files are retrieved with perms set to 000 and a
timestamp that is very wrong (e.g. Dec 5, 1930 on some files :-),
rather than the perms and timestamp stored in the 'struct stat'.

This was just a guess (which was apparently wrong), but it's
completely unclear as to what *would* be correct. Page 74 of the
API Manual states the following in reference to using
restore/retrieve commands for API objects:

  Note: Use these commands for exception situations only.

  These commands return data as bit files that are created
  using default file attributes. You can restore or retrieve API
  objects that other users own, or that are from a different
  node. The set access command determines which objects qualify.

If it were truly using default file attributes, I would have
expected them to be umask-based. But the fact that perms are set
to 0 and the timestamps are consistently wrong indicates that
the BA client *is* attempting to use the 'struct stat' info in
objInfo.

Has anybody reconciled this? Has anybody successfully written an
API client for which objects can be retrieved or restored using
the BA client?

Thanks,

 -- Tom

Thomas A. La Porte
DreamWorks SKG
[EMAIL PROTECTED]



if ((rc = lstat(filelist[i].filename, &sb)) != 0)
{
    fprintf(stderr, "stat %s: ", filelist[i].filename);
    perror("");
    continue;
}

[...]

ObjAttrData.objInfoLength = sizeof(sb);
ObjAttrData.objInfo = (char *)&sb;



Re: Network Appliance - Filer backup - (DAR) Direct Access Restore

2002-02-06 Thread Thomas A. La Porte

My understanding is that NDMP support will be added to the AIX
server in TSM version 5. I don't know whether that version changes
the direct-attached requirement.

Why Tivoli chose to start with the Windows platform for an
enterprise-level protocol like NDMP is still beyond me. We are
eagerly anticipating support for NDMP on AIX.

 -- Tom

Thomas A. La Porte
DreamWorks SKG
[EMAIL PROTECTED]

On Wed, 6 Feb 2002, Prather, Wanda wrote:

...but, from what I have read on the Tivoli web site, you can only get
NDMP support if your TSM SERVER is on Windows, and if your tape drives are
direct-attached to the NAS device, which means you have to be on a SAN.

Am I reading that correctly, or am I missing something here?

-Original Message-
From: Joshua S. Bassi [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, February 05, 2002 7:49 PM
To: [EMAIL PROTECTED]
Subject: Re: Network Appliance - Filer backup - (DAR) Direct Access
Restore


TSM supports NDMP at this time.


--
Joshua S. Bassi
Sr. Solutions Architect
IBM Certified - AIX/HACMP, SAN, Shark
Tivoli Certified Consultant- ADSM/TSM
Cell (415) 215-0326
[EMAIL PROTECTED]

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]] On Behalf Of
Gates, Gerald P.
Sent: Tuesday, February 05, 2002 4:44 PM
To: [EMAIL PROTECTED]
Subject: Network Appliance - Filer backup - (DAR) Direct Access Restore

Can Tivoli back up a Network Appliance Filer using (DAR) Direct Access
Restore? Veritas said they will have this in version 5.0 later this year.
Gerry





Re: FILESPACE_NAME vs. HL_NAME

2002-01-03 Thread Thomas A. La Porte

Are these by any chance NFS-mounted directories? We do a lot of
archives with loopback-mounted filesystems, and over the course
of multiple iterations of the dsmc client we have found that
ADSM/TSM has treated these filesystems differently with respect
to what is FILESPACE_NAME and what is HL_NAME.

The proper way to find these files with a 'dsmc q arch' or
'dsmc retr' is to explicitly name the filespace in your filespec.

So, in your example, you have a single file which has been
archived under three different filespace names. Assuming you were
looking to retrieve the version of the file archived on
2001-12-22, you would issue the following command:

dsmc retr '{/WORK/DATA}/MARTINE/BS0266'

If you do not specify the filespace in your filespec, ADSM/TSM
will use the filespace that has the *longest* match, e.g., in
your case by default it will always find the version archived on
2001-11-20.
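
Conversely, to retrieve the 2001-11-20 version you would name the
longest filespace explicitly:

dsmc retr '{/WORK/DATA/MARTINE}/BS0266'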

 -- Tom

Thomas A. La Porte
DreamWorks SKG
[EMAIL PROTECTED]

On 01/02/2002 Mike Crawford wrote:

Good Afternoon,

A client is having trouble locating archived files using dsmc q ar.  The
problem seems to be that the filespace_name and hl_name differ between
the files, even though they were archived from the same place.

Server: AIX, ADSM v3.1
Client: SunOS, ADSM v3.1

An example:
select filespace_name,hl_name,ll_name,archive_date from archives where
node_name='HUBER'
and ll_name='BS0266'


FILESPACE_NAME: /WORK
   HL_NAME: /DATA/MARTINE/
   LL_NAME: BS0266
  ARCHIVE_DATE: 2001-12-22 10:46:30.00

FILESPACE_NAME: /WORK/DATA
   HL_NAME: /MARTINE/
   LL_NAME: BS0266
  ARCHIVE_DATE: 2001-12-22 10:41:24.00

FILESPACE_NAME: /WORK/DATA/MARTINE
   HL_NAME: /
   LL_NAME: BS0266
  ARCHIVE_DATE: 2001-11-20 05:38:10.00


Depending on how the client specifies the request, they will get a
different
version of the file.

The question is, how does ADSM determine which part of the path is
considered
filespace_name, and which is hl_name?

Thanks,
Mike





Re: Clarifications: LAN Free ( SAN ) Backup

2001-11-15 Thread Thomas A. La Porte

I take it to mean that any given FILE volume on the SAN device can
be written to by only one client at a time, but that many files (up
to the MOUNTLIMIT parameter) can be written to at once, meaning
the maximum number of clients writing simultaneously to the SAN
device is equal to the MOUNTLIMIT parameter in your FILE device
class.
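
In devclass terms (name, size, and directory hypothetical):

   define devclass SANFILE devtype=file mountlimit=8 maxcapacity=2G directory=/san/stgpool

would allow up to eight FILE volumes, and hence eight clients, to
write concurrently.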

 -- Tom

Thomas A. La Porte
DreamWorks SKG
[EMAIL PROTECTED]

On Thu, 15 Nov 2001, Wouter V wrote:

So: "A FILE devclass device may be in use by only one party" means only
one client at a time can back up to the SAN disk device? The SAN disk is
shared and available to every client, but cannot be used concurrently by
every client? Right?

Wouter.





Re: Merging two server into one

2001-10-31 Thread Thomas A. La Porte

You will require additional storage resources, either tape or
disk. The export facility will write all of your TSM data to new
storage, and the import will then transfer that data to still
newer storage (which could be the original storage on the
exporting server). In any case, you will need to have double the
amount of space available for the duration of the export/import
process, because the data will occupy at least two locations at
once.
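
The command pair is roughly as follows (node, device class, and
volume names hypothetical):

   export node NODEA filedata=all devclass=TAPECLASS scratch=yes

and then, on the target server:

   import node NODEA filedata=all devclass=TAPECLASS volumenames=VOL001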

 -- Tom

Thomas A. La Porte
DreamWorks SKG
[EMAIL PROTECTED]

On Wed, 31 Oct 2001, Jolley, Bill wrote:

If the Private and Scratch categories are different for each server, how
then would the import/export facility handle those? We took over ADSM
with two servers located on one AIX SP node and would like to consolidate
them. I have sought assistance from TSM support and received very little.
Could this be a difficult task?



-Original Message-
From: Joshua S. Bassi [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, October 31, 2001 11:59 AM
To: [EMAIL PROTECTED]
Subject: Re: Merging two server into one


You can use the TSM import/export facility to do this.


--
Joshua S. Bassi
Independent IT Consultant
IBM Certified - AIX/HACMP, SAN, Shark
Tivoli Certified Consultant- ADSM/TSM
Cell (408)(831) 332-4006
[EMAIL PROTECTED]

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]] On Behalf Of
Andreas Rensch
Sent: Wednesday, October 31, 2001 7:54 AM
To: [EMAIL PROTECTED]
Subject: Merging two server into one

Hi TSM-/ADSM-ers,

is there a possibility to merge two TSM servers into one?

Our situation is as follows: we have two TSM servers - both on OS/390 -
which share the same range of tapes. Now we want to move to the Sun
Solaris platform. We planned to take our first TSM server as the base for
the new TSM server on Solaris. But we have about 150 tapes on our second
TSM server and want to avoid copying all these tapes. Is there another
way to get this data into the first TSM server?

One option is to define the clients of the second TSM server on the
first TSM server, make a full backup first and then incremental backups.
The problem with this option is that we would lose our backup history.
Any other idea?

mfg / regards

andreas rensch / StorageManagement / rz-qs
phone : +49(0)6171.66.3692 / fax : +49(0)6171.66.7500.3692 / mobile :
+49(0)172.6649.016  / pse : 75 / mailto:[EMAIL PROTECTED]
Alte Leipziger Lebensversicherung aG - Alte Leipziger Platz 1 - D 61440
Oberursel - http://www.alte-leipziger.de

When all else fails, read the instructions.





Re: oracle backup(linux)

2001-10-25 Thread Thomas A. La Porte

Well, like I said, I spoke with my IBM rep and was told that it
*could* be available with TSM v5 in the first half of 2002. Since
there has been no announcement, we are left to our own devices in
making determinations about IBM/Tivoli's product directions. It
seems safe to assume that a TDPO for Linux might become
available, given IBM's heavy push in the Linux arena, but without
any announcements I, and in turn my rep, have to seek out this
information and report back whatever unofficial information we
have garnered.

 -- Tom

Thomas A. La Porte
DreamWorks SKG
[EMAIL PROTECTED]

On Thu, 25 Oct 2001, Thiha Than wrote:

hi,

I am not sure where you got the information about TDPO for Linux
availability in first half of 2002.  We are planning for it but don't
count on having it in the first half of 2002.

regards,
Thiha

I just recently spoke with my IBM rep on this, and the word I got
back that, although there are no specific times set or announced,
a TDP for Oracle could be available on Linux with TSM v5 in the
first half of 2002.





Re: oracle backup(linux)

2001-10-24 Thread Thomas A. La Porte

I just recently spoke with my IBM rep on this, and the word I got
back that, although there are no specific times set or announced,
a TDP for Oracle could be available on Linux with TSM v5 in the
first half of 2002.

 -- Tom

Thomas A. La Porte
DreamWorks SKG
[EMAIL PROTECTED]

On Wed, 24 Oct 2001, Joshua S. Bassi wrote:

There aren't any TDP agents for Linux at this time.

The best way to back up the Oracle data on Linux would be to back up or
archive the Oracle dump.


--
Joshua S. Bassi
Independent IT Consultant
IBM Certified - AIX/HACMP, SAN, Shark
Tivoli Certified Consultant- ADSM/TSM
Cell (408)(831) 332-4006
[EMAIL PROTECTED]

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]] On Behalf Of
Kim In Yop
Sent: Tuesday, October 23, 2001 9:09 PM
To: [EMAIL PROTECTED]
Subject: oracle backup(linux)

Is there a TDP for Oracle on Linux?
If so, what is the procedure for backing up Oracle?

server: solaris 2.6
client: redhat 6.2

Thanks in advance..

CD DATA CORP. IDC TEAM. KIM IN YOP
14th Floor,Sam Jung Bldg.
237-11,Nonhyun-Dong,Gangnam-Gu,
Seoul,Korea 135-831
TEL: 82-2-546-3108
MOBILE: 016 523-2032
FAX: 82-2-514-9007





Re: Expiration challenge

2001-09-21 Thread Thomas A. La Porte

I can think of two options.

1) Save an additional backup of your (ADSM) database, and take
   measures to ensure that the tapes upon which your (Informix)
   database reside do not get reused.

   This will allow you to restore the ADSM database at some stage
   in the future, and the tapes that you care about with the
   Informix database will still be available from which you can
   restore the Informix OLTP data.

2) Restore a copy of the Informix OLTP database to some location,
   and then run an *archive* of that data.

Based on your description of the environment, unless you can get
a vendor to loan you equipment for the process, I would think
that of the two options I've proposed, #1 is the better choice.
I'm sure others on the list might approach this question from a
different angle and offer further suggestions.
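
For option #1, the protective steps would look something like this
(the volume names are placeholders, and I would verify that reclamation
and MOVE DATA also leave read-only volumes alone before relying on it):

   backup db devclass=TAPE3590 type=full
   update volume INF001 access=readonly
   update volume INF002 access=readonly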

 -- Tom

Thomas A. La Porte
DreamWorks SKG
[EMAIL PROTECTED]

On Fri, 21 Sep 2001, Prose, Fred wrote:

We run Tivoli 4.2 on AIX (4.3.3) with a primary emphasis on backing up
Informix databases using the Informix TDP.  We defined a database
management class for the database backups, with retain extra versions set
to 30 days.  All database servers (13) go to this same class.  This has
always proved quite adequate, as the requirement has been to provide
recovery capabilities, not archive historical copies of the data.

We now have a Law Enforcement organization interested in some unusual
activities that occurred in one of sixty databases that resides on one of
the servers.  The request they placed was to retain the oldest copy that
we had of this database until they were through with their investigation.

Restoring the database to a different server is not as simple an option as
it first might seem.  With an Informix database backup, you are backing up
the entire OLTP environment, not just a single database.  This entire
environment would need to be restored to a different server and, as luck
would have it, the server in question happens to be the biggest, and no other
server could handle the restore (space-wise).  Saving the current
environment, restoring from the backup, extracting the database in question,
and then restoring the current environment would, I've estimated, take twelve
hours.  While it might be the only option, it's not a popular one.

To prevent the expiration I set retention to 365 days on the management
class.  That will hopefully provide the opportunity to sort things out.  The
problem now is that without expiring versions, I'm not recycling tapes (as
expected) and buying additional tapes during a budget crisis is a difficult
and long process.

Here are the questions:

Since I know what tapes (2) the current backup is located on, is there a way
to freeze those tapes so that the files contained on the tapes do not
expire?

If I set up a new management class for this one server, can I force existing
files from this server into this class?





Re: Backup of EMC

2001-09-06 Thread Thomas A. La Porte

Jeff,

NDMP support is meant to be available at the end of this month.
That they haven't set pricing information for this yet makes me
wonder if it will actually be available any time soon. I'm
skeptical, but hopeful, as we've several TB of NetApp storage in
our environment.

 -- Tom

Thomas A. La Porte
DreamWorks SKG
[EMAIL PROTECTED]

On Thu, 6 Sep 2001, Jeff Bach wrote:

IBM people,

How does IBM/Tivoli plan to provide for this need, in order
to prevent customers from being required to implement hardware vendor
solutions such as EDM to back up this data?   The competition is directly
fiber-attaching to the data and backing it up.


Jeff Bach
Home Office Open Systems Engineering
Wal-Mart Stores, Inc.

WAL-MART CONFIDENTIAL


-Original Message-
From:   Michael Bartl [SMTP:[EMAIL PROTECTED]]
Sent:   Thursday, September 06, 2001 9:01 AM
To: [EMAIL PROTECTED]
Subject:Re: Backup of EMC

Mahesh,
on NAS boxes like CLARiiON or NETAPP you won't find a backup/archive
client that runs directly on the machine.
To get your backup done, just use another machine in your LAN that has
enough network bandwidth available to both the NAS box and the TSM
server.
On WinNT you can use the UNC name in dsm.opt:
DOMAIN  \\NASBOX\SHARE

With Unix you have to define a mountpoint for the tree you want to
back up. Then you put the mountpoint into your option file.
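
A rough sketch of the Unix case (the hostname, export, and mountpoint
are examples only):

   mount nasbox:/vol/vol0 /mnt/nasbox

and then in dsm.sys:

   VIRTUALMOUNTPOINT /mnt/nasbox
   DOMAIN /mnt/nasbox

so the NFS tree gets backed up as its own filespace.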

Good luck,
Michael Bartl

Office of Technology, IT Germany/Austria
Cable & Wireless Deutschland GmbH.
Landsberger Str. 155Tel.: +49 89 92699-806
80687 Muenchen  Fax.: +49 89 92699-302
mailto:[EMAIL PROTECTED] http://www.de.cw.com

Mahesh Tailor wrote:

 Hello, everyone.

 A group in our department just received an EMC CLARiiON system.
On this system is a filesystem that I need to back up.  How can this be
done?  I have never dealt with this beast.

 3466 Network Storage Manager running TSM v3.7.4 and AIX 4.3.2.

 Thanks for any help and advice in advance.

 Mahesh Tailor
 WAN/NetView/TSM Administrator
 Carilion Health System
 Voice: 540-224-3929
 Fax: 540-224-3954


**
This email and any files transmitted with it are confidential
and intended solely for the individual or entity to
whom they are addressed.  If you have received this email
in error destroy it immediately.
**





Re: Transfer Rate

2001-08-02 Thread Thomas A. La Porte

Also, if your client and server are on the same host, the
network shouldn't enter into the picture; you should be using
SHAREDMEM as your protocol.
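
A minimal dsm.sys sketch of that, assuming client and server on the
same AIX box (the stanza name is hypothetical):

   SErvername local
      COMMMethod   SHAREdmem
      SHMPort      1510

The server side needs COMMMETHOD SHAREDMEM enabled in dsmserv.opt as
well.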

 -- Tom

On Thu, 2 Aug 2001, David Longo wrote:

Your original post of actlog stats didn't include how many objects were
inspected/backed up.  If you have a LOT of small files, that would
account for some of it.

What type of network do you have?

I just looked at some of my stats, and on the clients I checked the
ratio of network to aggregate data transfer rate is about 2:1 to 3:1.

How many sessions does the client use?  You can set
RESOURCEUTILIZATION in dsm.sys on the client to increase the number
of sessions - see the docs for info.  Experiment to see what works best
for you.
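
For example (the value is illustrative; the range is 1-10):

   RESOURCEUTILIZATION 5

Higher values let the client open more producer/consumer sessions to
the server during backup.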

I would think you can get better than 8 hours for 70GB.  I imagine
4-6 hours is as good as you can do.

We have a TSM 3.7.4.0 server on AIX 4.3.3, on an F50 with 4x 332MHz CPUs
and 2 GB RAM, with an IBM 3575 library, on a 155 Mbps ATM network.  (Some
clients have 100 Mbps Ethernet.)


David B. Longo
System Administrator
Health First, Inc.
3300 Fiske Blvd.
Rockledge, FL 32955-4305
PH  321.434.5536
Pager  321.634.8230
Fax:321.434.5525
[EMAIL PROTECTED]


 [EMAIL PROTECTED] 08/02/01 01:47PM 
The only client we have is where the server is located.  We are running TSM
to back up one RS/6000 F50 server, and backing up 70 GB in 8 hours doesn't
seem right.  Is there anything that I can do to speed this up?  We run a
Magstar 3570 connected to the F50 server through SCSI.

TSM Version 4.1.2.0
AIX Version 4.3.3.0

-Original Message-
From: Lindsay Morris [mailto:[EMAIL PROTECTED]]
Sent: Thursday, August 02, 2001 10:19 AM
To: [EMAIL PROTECTED]
Subject: Re: Transfer Rate


Aggregate uses wall-clock time;
Network uses the intervals from when the client says "I'm sending data
now" to when the server actually receives it.

It's perfectly normal for aggregate to be a lot slower than network.

If you really want to speed up your backups, you need to
look at the accounting log records (.../server/bin/dsmaccnt.log - see the
admin guide for the layout)
to see which of your clients suffer most from idle wait, media wait, or
comm wait.

 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of
 Bill Wheeler
 Sent: Thursday, August 02, 2001 9:55 AM
 To: [EMAIL PROTECTED]
 Subject: Transfer Rate


 Hello All,

 I hope a few of the *SM gurus can help me out on this one.   I don't
 know if you have had a problem like this before.   The problem is that
 our Aggregate Transfer Rate is a lot lower than the Network Transfer
 Rate.   Is there something that I can look at to see if I can get the
 data moved quicker?    Here is a copy of the Activity log with the rates:


 ANE4961I (Session: 4532, Node: F50_CLIENT)  Total number of bytes transferred: 69.33 GB      07/31/01 06:51:30
 ANE4963I (Session: 4532, Node: F50_CLIENT)  Data transfer time: 11,858.22 sec                07/31/01 06:51:30
 ANE4966I (Session: 4532, Node: F50_CLIENT)  Network data transfer rate: 6,130.86 KB/sec      07/31/01 06:51:30
 ANE4967I (Session: 4532, Node: F50_CLIENT)  Aggregate data transfer rate: 2,462.14 KB/sec    07/31/01 06:51:30
 ANE4968I (Session: 4532, Node: F50_CLIENT)  Objects compressed by: 0%                        07/31/01 06:51:30
 ANE4964I (Session: 4532, Node: F50_CLIENT)  Elapsed processing time: 08:12:07


 Any advice would be helpful.

 Thanks in advance,

 Bill Wheeler
 AIX Administrator
 La-Z-Boy Incorporated
 [EMAIL PROTECTED] mailto:[EMAIL PROTECTED]









Re: Backing up lots of small files

2001-07-31 Thread Thomas A. La Porte

Is it possible to break up the filesystem(s) with
VIRTUALMOUNTPOINT options? This will let you break down the
problem into smaller chunks, and possibly run your 'dsmc'
processes out of cron, rather than via the scheduler.
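
Something along these lines, with hypothetical paths: in dsm.sys,

   VIRTUALMOUNTPOINT /bigfs/proj1
   VIRTUALMOUNTPOINT /bigfs/proj2

and then one process per chunk out of cron:

   0 22 * * * /usr/bin/dsmc incremental /bigfs/proj1
   0 22 * * * /usr/bin/dsmc incremental /bigfs/proj2

Each virtual mount point becomes its own filespace, so the incrementals
can run in parallel.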

On Tue, 31 Jul 2001, Gerald Wichmann wrote:

Has anyone come up with any good strategies for backing up a file server with
millions of small (1KB-10KB) files? Currently we have a system with 50GB of
them that takes about 40 hours to back up. Many of the files change.

I'm wondering if anyone has anything in place that works well for this sort
of situation.

Something like this, or perhaps something I haven't thought of:

develop a script that will tar/compress the files into a single file and back
that single file up daily. Perhaps even search the filesystem by date and
only back up files that have changed since the last backup (this seems like
it wouldn't be any faster than simply backing them up, though).





Netapps for ADSM Server?

2001-07-13 Thread Thomas A. La Porte

Just out of curiosity, has anybody experimented with using a
Network Appliance, or other similar network-attached storage
device, for their ADSM database or logs?

We are using Network Appliance Filers very successfully on our
Oracle servers, so I was curious about the possibility of using
them on our ADSM servers. Just wondering if anybody else has gone
down this path before me, or has even thought about it.

 -- Tom

Thomas A. La Porte
DreamWorks SKG
[EMAIL PROTECTED]



Re: Data on my tapes

2001-06-20 Thread Thomas A. La Porte

Jeff,

Compressing more than once generally doesn't gain anything in
terms of space reduction; in fact, there are certain instances in
which additional compression passes result in *larger* files.

Since you are compressing at the client side, and you are
averaging about 35GB on a 30GB native tape, I'd say you're
getting pretty good storage utilisation.
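
If you want a quick number across the board, the VOLUMES table can give
the average full-volume capacity per device class; a sketch:

select devclass_name, avg(est_capacity_mb) -
  from volumes -
 where status='FULL' -
 group by devclass_name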

 -- Tom

Thomas A. La Porte
DreamWorks SKG
[EMAIL PROTECTED]

On Wed, 20 Jun 2001, Jeff Bach wrote:

What is my problem?

I am just going to 3590 K cartridges.  I use client compression on
everything, and I compress on the tape drives also.

I tried to define the format as 3590B, but then it doesn't allow me to
access the drives.  I currently have it set to DRIVE.

Has anyone seen this?  This seems to be a small amount of data.

H00924   DB_TPOOL   TAPE3590K   35,694.3   100.0   Full
H00925   DB_TPOOL   TAPE3590K   35,490.6   100.0   Full
H00926   DB_TPOOL   TAPE3590K   35,765.3   100.0   Full
H00927   TAPE3590   TAPE3590K   36,007.6   100.0   Full
H00928   TAPE3590   TAPE3590K   40,960.0    29.4   Filling
H00929   DB_TPOOL   TAPE3590K   35,635.3   100.0   Full
H00930   DB_TPOOL   TAPE3590K   35,554.6   100.0   Full
H00932   DB_TPOOL   TAPE3590K   40,960.0    49.2   Filling
H00933   DB_TPOOL   TAPE3590K   35,882.4   100.0   Full
H00934   DB_TPOOL   TAPE3590K   35,904.1   100.0   Full
H00935   DB_TPOOL   TAPE3590K   35,629.7   100.0   Full
H00936   DB_TPOOL   TAPE3590K   35,887.2   100.0   Full
H00937   DB_TPOOL   TAPE3590K   35,786.8   100.0   Full
H00938   DB_TPOOL   TAPE3590K   35,710.0   100.0   Full


Jeff Bach

 -Original Message-
 From: Richard Sims [SMTP:[EMAIL PROTECTED]]
 Sent: Wednesday, June 20, 2001 10:28 AM
 To:   [EMAIL PROTECTED]
 Subject:  Re: Data on my tapes

 Ok, how much data are you getting on a 3590 K tape?

 David - A rough visual average that I see on my full tapes is 58 GB,
 in backup and copy storage pools.  (If using HSM, expect
 much less, because it doesn't Aggregate.)

   Richard Sims, BU


**
This email and any files transmitted with it are confidential
and intended solely for the individual or entity to
whom they are addressed.  If you have received this email
in error destroy it immediately.
**





Upgrading to 4.1

2001-06-14 Thread Thomas A. La Porte

Platform: AIX 4.3.2
ADSM: 3.1.2.50

We have been testing 4.1 on a system that already has a 3.1
instance installed. We are now satisfied with 4.1 and would
like to migrate our 3.1.2.50 instance to 4.1. The manual gives
directions on how to do the migration at the time of installing
the 4.1 software -- a step that we've obviously already taken.

I thought that what I would have to do is:

 1) do a full database backup of the 3.1.2.50 instance

 2) copy my relevant files (dsmserv.dsk, etc.) to the new
    DSMSERV_DIR (/usr/tivoli/tsm/server/bin)

 3) start the TSM 4.1 server with 'dsmserv upgradedb'

 4) re-register all of my licenses
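
In command form, the core of it would be something like this (the device
class name is hypothetical, and this is only a sketch of the sequence
above):

   dsmadmc -id=admin -password=xxx "backup db devclass=TAPE3590 type=full"
   # halt the 3.1 server, then copy dsmserv.dsk etc. as in step 2
   export DSMSERV_DIR=/usr/tivoli/tsm/server/bin
   cd $DSMSERV_DIR && dsmserv upgradedb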


Has anybody else gone through the process in this manner? Am I
missing something?

Any advice/tips/suggestions are welcome.

 -- Tom

Thomas A. La Porte
DreamWorks SKG
[EMAIL PROTECTED]



Re: Network Appliance

2001-05-09 Thread Thomas A. La Porte

John,

Is there a reference to a product announcement of any sort?

Searching Tivoli's web site for NDMP, NAS, or TSM 4.2 does not
turn up anything other than the future-directions white paper
that has been there for about a year, which frustratingly
discusses NDMP support in the present tense.

  -- Tom

Thomas A. La Porte
DreamWorks SKG
[EMAIL PROTECTED]

On Wed, 9 May 2001, John Bremer wrote:

Geoff,

Tivoli announced that TSM version 4.2 should provide backup and restore of
NAS filers -  3Q 2001.  Tivoli Data Protection for NAS is a specialized
client that interfaces with the Network Data Management Protocol
(NDMP).  Full volume image backup/restore will be supported.  File level
support is announced for TSM version 5.1 - 1Q 2002.

John


At 08:16 AM 5/2/01 -0700, Gill, Geoffrey L. wrote:
Is anyone familiar with Network Appliance products, and can TSM back them up
directly or in any other way?
Thanks,

Geoff Gill
TSM Administrator
NT Systems Support Engineer
SAIC
E-Mail:   [EMAIL PROTECTED]
Phone:  (858) 826-4062
Pager:   (888) 997-9614





Re: Reducing/compressing the database

2001-04-23 Thread Thomas A. La Porte

Is anybody else a little bit distressed that Tivoli's TSM
administrator needs to seek this advice from the list, rather
than from the developers of the product?

This issue -- the benefits and costs of attempting to reorganize
a TSM database -- has generated much discussion on and off, and
it is one on which we have never had much of a substantive
response from anyone within Tivoli, either in this forum or in
the documentation of these commands in the manuals.

I would have thought that the people at Tivoli would be the best
source for information of this type. I'm a little bit concerned
that our collective guessing and anecdotal evidence is all that
any of us as administrators have to go on.

 -- Tom

Thomas A. La Porte
DreamWorks SKG
[EMAIL PROTECTED]

On Sun, 22 Apr 2001, Shawn Drew wrote:

TSM v3.7.3

I have read a lot on this list about reducing the database, because my
situation is pretty bad.  We have a 103 gig database that was 97% used!  I
finally was permitted to fix the outrageous retention settings, and got it
down to 50% utilization, but the Maximum Reduction value is still 0.  Now I
want to get the database's assigned capacity down to the IBM-recommended max
of 70 gigs.  (This is from the ISSS last year in San Diego; I was the guy
that had an 80-gig db in one of the seminars.)  From reading this list, it
seems I have a few options, but could not determine the best route.  Down
time IS a factor for this.  The performance on this machine, for restores,
is dramatically slower than on other machines, and since all else seems
almost equal, I am assuming it's the db size.
First of all, my reason for doing this is to get better performance on my
restores.

So, will defragging the database really improve my restore times?  It seems
pointless otherwise.

It seems my options are:

- dumpdb/loaddb - I read some horror stories about this, and really hesitate
to use it.  Also, it seems the loaddb takes very long, from other people's
experience (2 days for a 40GB db, I think I read?).

- unloaddb/loaddb - The only difference I can find between this and the
previous one is that it defrags the database.  And it seems that the loaddb
portion is the same, and is subject to the same unreliability and time
problems.  (This is the one described in the manuals to solve my situation.)

- Richard recommended the backup db/restore db options over dumpdb/loaddb
because they are more reliable and faster.  Does this also offer the defrag
benefit?  And how long would it take?

- Migrate all the client nodes one by one to other machines with
import/export, kill and rebuild TSM and the db, and move everything back.
This seems like the one with the least downtime, which may make it the best
and least risky option.  But it will take a LONG time, and strain my other
boxes.

thanks
shawn

___
Shawn Drew
Tivoli IT - ADSM/TSM Systems Administrator
[EMAIL PROTECTED]




Re: TDP for Oracle

2001-04-19 Thread Thomas A. La Porte

What about Linux? Are there plans for TDP for Oracle on Linux?
This would be extremely useful, especially since Linux is now a
tier-one platform for Oracle.

On Thu, 19 Apr 2001, Thiha Than wrote:

hi Geoff,

We don't have it planned in the near future.

regards,
Thiha

Date:Wed, 18 Apr 2001 11:22:45 -0700
From:"Gill, Geoffrey L." [EMAIL PROTECTED]
Subject: TDP for Oracle

Does anyone know if TDP for Oracle on Tru64 is in the works, and if so,
whether there is a projected release date?

Geoff Gill
TSM Administrator
NT Systems Support Engineer
SAIC
E-Mail:   [EMAIL PROTECTED]
Phone:  (858) 826-4062
Pager:   (888) 997-9614





Re: How to find out what tape's a client data are on ?

2001-03-01 Thread Thomas A. La Porte

Just out of curiosity, since I've only casually been following
this thread, was there a specific reason for the "group by"
clause?

I think what is desired is simply an "order by" clause, e.g.

select volume_name, node_name -
  from volumeusage -
 where node_name = 'MEDRS1' -
   and stgpool_name = 'COPYPOOL' -
 order by volume_name, node_name

The group by function is necessary when there is a group function
in the "select" statement, e.g. when you are summing a value:

select node_name, sum(backup_copy_mb) -
  from auditocc -
 where node_name = 'MEDRS1' -
 group by node_name


 -- Tom

Thomas A. La Porte
DreamWorks SKG
[EMAIL PROTECTED]

On Thu, 1 Mar 2001, Shekhar Dhotre wrote:


Thanks David ...

tsm: TSM> select volume_name,node_name from volumeusage where node_name='MEDRS1'
 and stgpool_name='COPYPOOL' group by volume_name,node_name

VOLUME_NAME        NODE_NAME
-----------        ---------
MED200 MEDRS1
MED202 MEDRS1
MED251 MEDRS1
MED260 MEDRS1
MED261 MEDRS1
MED501 MEDRS1
MED507 MEDRS1
MED511 MEDRS1
MED513 MEDRS1

tsm: TSM> select volume_name,node_name from volumeusage where node_name='MEDRS1'
 and stgpool_name='TAPEPOOL' group by volume_name,node_name

VOLUME_NAME        NODE_NAME
-----------        ---------
MED201 MEDRS1
MED255 MEDRS1
MED506 MEDRS1





David Longo [EMAIL PROTECTED]@VM.MARIST.EDU on 03/01/2001 04:28:55
PM

Please respond to "ADSM: Dist Stor Manager" [EMAIL PROTECTED]

Sent by:  "ADSM: Dist Stor Manager" [EMAIL PROTECTED]


To:   [EMAIL PROTECTED]
cc:

Subject:  Re: How to find out what tape's a client data are on ?


Your select statement is incorrect.  It should be:

select volume_name,node_name from volumeusage where node_name='MEDRS1' and
stgpool_name='COPYTAPE' group by volume_name,node_name


David B. Longo
System Administrator
Health First, Inc.
3300 Fiske Blvd.
Rockledge, FL 32955-4305
PH  321.434.5536
Pager  321.634.8230
Fax:321.434.5525
[EMAIL PROTECTED]


 [EMAIL PROTECTED] 03/01/01 02:48PM 

 The following query shows the same tapes for node 'MEDRS1' in tapepool and
copypool?

tsm: TSM> select volume_name,node_name from volumeusage where node_name='MEDRS1'
 group by volume_name,node_name where stgpool_name='TAPEPOOL'

VOLUME_NAME        NODE_NAME
-----------        ---------
MED200 MEDRS1
MED201 MEDRS1
MED251 MEDRS1
MED254 MEDRS1
MED255 MEDRS1
MED260 MEDRS1
MED261 MEDRS1
MED501 MEDRS1
MED506 MEDRS1
MED507 MEDRS1
MED511 MEDRS1
MED513 MEDRS1

tsm: TSM> select volume_name,node_name from volumeusage where node_name='MEDRS1'
 group by volume_name,node_name where stgpool_name='COPYPOOL'

VOLUME_NAME        NODE_NAME
-----------        ---------
MED200 MEDRS1
MED201 MEDRS1
MED251 MEDRS1
MED254 MEDRS1
MED255 MEDRS1
MED260 MEDRS1
MED261 MEDRS1
MED501 MEDRS1
MED506 MEDRS1
MED507 MEDRS1
MED511 MEDRS1
MED513 MEDRS1

tsm: TSM> q drm

Volume Name    State     Last Update Date/Time    Automated LibName
-----------    ------    ---------------------    -----------------
64 Vault   02/22/01   12:08:00
MED200 Vault   02/16/01   15:38:21
MED251 Vault   02/16/01   15:38:22
MED252 Vault   02/23/01   13:38:02
MED254 Vault   02/26/01   10:43:13
MED260 Vault   02/19/01   10:36:43
MED261 Vault   02/19/01   10:36:43
MED271 Vault   02/26/01   10:43:13
MED311 Vault   02/19/01   10:36:43
MED375 Vault   02/27/01   10:48:18
MED414 Vault   02/20/01   11:42:36
MED501 Vault   03/01/01   12:47:40
MED503 Vault   02/21/01   10:47:50
MED511 Vault   02/27/01   10:48:18
MED513 Vault   03/01/01   12:47:40
MED514 Vault   03/01/01   12:47:40
MED515 Vault   03/01/01   12:49:00
MED512 Vault   03/01/01   12:49:00
more...   (ENTER to continue, 'C' to cancel)

MED413 Vault   02/27/01   10:49:43
MED510 Vault   02/26/01   10:44:30
MED374 

TSM Pricing [was Re: Performance Large Files vs. Small Files]

2001-02-21 Thread Thomas A. La Porte

We, too, were a bit rudely awakened by this new pricing structure
when we purchased our upgrade to 4.1.

From my perspective, the most agonizing aspect of the structure
is the need to pay for a tape library access license (I forgot
what the "feature" is actually called) for every physical server
that accesses a tape library, rather than for each tape library.

We did the math, and it almost made more sense for us to purchase
a new S80 class machine that could effectively handle multiple
TSM instances, rather than purchase additional tape library
access licenses to continue to run on our two separate F50
machines. That begins to make very little sense when the price
point of a "feature" outweighs the cost of the hardware to
implement that feature. Particularly a feature that is so
fundamental to the running of the application: I can't imagine
trying to run a large-scale ADSM implementation without a 3494
(or similar) tape library.

 -- Tom

Thomas A. La Porte
DreamWorks SKG
[EMAIL PROTECTED]

On Wed, 21 Feb 2001, bbullock wrote:

That is also an option that we have not considered; it actually
sounds like a good one. I'll bring it up to management.
The only problem I see in this is the $$$ needed to purchase a TSM
server license for every manufacturing host we try to back up.
I don't know about your shops, but with the newest "point-based"
pricing structure that Tivoli implemented back in November (I believe it was
about then), they are now wanting to charge you more $$$ to run the TSM
server on a multi-CPU Unix host than on a single-CPU NT box. In our shop,
where we run TSM on beefy S80s, that means a price change that is
exponentially larger than what we have paid in the past for the same
functionality.


Ben Bullock
UNIX Systems Manager


 -Original Message-
 From: Suad Musovich [mailto:[EMAIL PROTECTED]]
 Sent: Wednesday, February 21, 2001 4:37 AM
 To: [EMAIL PROTECTED]
 Subject: Re: Performance Large Files vs. Small Files


 On Tue, Feb 20, 2001 at 03:21:34PM -0700, bbullock wrote:
 ...
  How many files? Well, I have one Solaris-based host that generates
  500,000 new files a day in a deeply nested directory structure (about
  10 levels deep with only about 5 files per directory). Before I am
  asked: "no, they are not able to change the directory or file
  structure on the host. It runs proprietary applications that can't be
  altered". They are currently keeping these files on the host for about
  30 days and then deleting them.

  I have no problem moving the files to TSM on a nightly basis; we have
  a nice big network pipe and the files are small. The problem is with
  the TSM database growth, and the number of files per filesystem
  (stored in TSM). Unfortunately, the directories are not shown when you
  do a 'q occ' on a node, so there is actually a "hidden" number of
  database entries that are taking up space in my TSM database that are
  not readily apparent when looking at the output of "q node".

 Why not put a TSM server on the Solaris box and back it up to one of the
 other servers as a virtual volume?
 It would redistribute the database to the Solaris host, and the data is
 kept as a large object on the tape-attached TSM server.

 I also remember reading about grouping files together as a single
 object. I can't remember if it did selective groups of files or just
 whole filesystems.

 Cheers, Suad
 --






Re: Performance Large Files vs. Small Files

2001-02-14 Thread Thomas A. La Porte

Imagine it strictly from a database perspective.

Scenario 1: 15 files, 2GB each
Scenario 2: 15728640 files, 2KB each

In scenario one, your loop is essentially like this:

  numfiles = 15;
  for (i = 0; i < numfiles; i++) {
    insert file characteristics into database;
    request data be sent from client;
    store data in storage pool;
  }

In scenario two, the primary difference is that numfiles =
15728640:

  numfiles = 15728640;
  for (i = 0; i < numfiles; i++) {
    insert file characteristics into database;
    request data be sent from client;
    store data in storage pool;
  }


This means that, in the first scenario, there are 15 interactions
with the database, 15 system calls on the client for file
open/read operations, etc. In the second scenario, there are 15
*million* interactions with the database, 15 *million* file I/O
operations, etc.

Realistically, this is a bit of a simplification, as the
TXNGROUPMAX and the TXNBYTELIMIT parameters help to group the
files into transaction batches that can be larger than a single
file, which reduces the number of round trips to the database,
but the overall effect is still there.
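
For reference, the knobs live on both sides; the values below are
illustrative, not tuned recommendations. In the server's dsmserv.opt:

   TXNGROUPMAX 256

and in the client's dsm.sys (the limit is in KB):

   TXNBYTELIMIT 25600

Larger values let more small files ride inside a single database
transaction.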

Although you may be transferring the same amount of aggregate
data, you have to factor in the overhead of each single
transfer. Although the overhead may be small, if you multiply
that small number by two orders of magnitude you do generally end
up with a big number.

Imagine the time it would take to collect $30 million
from fifteen $2 million donors, then think of collecting the same
amount of money from fifteen million $2 donors.

I would recommend that you break up your NT server into smaller
filespaces, either physically on the NT server, or logically with
virtualfilespaces on the ADSM server. That way you can have
multiple processes working simultaneously on the backup. The
aggregate time it will take to back up the server will be the
same, but the wall-clock time will be approximately divided by
the number of processes you can run simultaneously.

 -- Tom

Thomas A. La Porte
DreamWorks SKG
[EMAIL PROTECTED]

On Wed, 14 Feb 2001, Diana J.Cline wrote:

Using an NT Client and an AIX Server

Does anyone have a TECHNICAL reason why I can back up 30GB of 2GB files that
are stored in one directory so much faster than 30GB of 2KB files that are
stored in a bunch of directories?

I know that this is the case; I just would like to find out why.  If the
amount of data is the same and the Network Data Transfer Rate is the same
between the two backups, why does it take the TSM server so much longer to
process the files being sent when there is a larger number of files in
multiple directories?

I sure would like to have the answer to this.  We are trying to complete an
incremental backup of an NT Server with about 3 million small objects
(according to TSM) in many, many folders, and it can't even get done in 12
hours.  The actual amount of data transferred is only about 7GB per night.
We have other backups that can complete 50GB in 5 hours, but they are in one
directory and the # of files is smaller.

Thanks





 Network data transfer rate
 --
 The average rate at which the network transfers data between
 the TSM client and the TSM server, calculated by dividing the
 total number of bytes transferred by the time to transfer the
 data over the network. The time it takes for TSM to process
 objects is not included in the network transfer rate. Therefore,
 the network transfer rate is higher than the aggregate transfer
 rate.
.
 Aggregate data transfer rate
 
 The average rate at which TSM and the network transfer data
 between the TSM client and the TSM server, calculated by
 dividing the total number of bytes transferred by the time
 that elapses from the beginning to the end of the process.
 Both TSM processing and network time are included in the
 aggregate transfer rate. Therefore, the aggregate transfer
 rate is lower than the network transfer rate.





NDMP support?

2001-01-31 Thread Thomas A. La Porte

Is there any sense that TSM might support NDMP anytime soon? The
single reference to NDMP is at
http://www.tivoli.com/products/documents/whitepapers/storage_vision.html,
which is over a year old, yet contains frustrating present-tense
descriptions of production functionality that still doesn't
exist.

Just wondering if Tivoli is going to either update their vision,
or include any of the myriad functions discussed in this vision
paper in an actual product.

 -- Tom

Thomas A. La Porte
DreamWorks SKG
[EMAIL PROTECTED]



Re: Generating Commands from a Script

2001-01-24 Thread Thomas A. La Porte

select 'move data ' || volume_name -
  from volumes -
 where (upper(stgpool_name)='LNTAPECO' -
    or upper(stgpool_name)='LNTAPENO' -
    or upper(stgpool_name)='UNTAPECO' -
    or upper(stgpool_name)='UNTAPENO' -
    or upper(stgpool_name)='NTTAPECO' -
    or upper(stgpool_name)='NTTAPENO') -
   and pct_utilized < 50

You'll probably need to set sqldisplaymode to wide or, if you run
it in batch, to commadelimited or tabdelimited. Redirect the
output to a file and you've got a macro file that you can run.
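
Wired together from the shell, it would look something like this (the
admin ID, password, and file name are placeholders, and the "..." stands
for the full where clause above; depending on your level, you may have
to trim header/footer lines from the output file):

   dsmadmc -id=admin -password=secret -tabdelimited \
      "select 'move data ' || volume_name from volumes where ..." > moves.mac
   dsmadmc -id=admin -password=secret macro moves.mac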

 -- Tom

Thomas A. La Porte
DreamWorks SKG
[EMAIL PROTECTED]

On Wed, 24 Jan 2001, Joseph Marchesani wrote:

Group

I have created a script named VOLUME_PCT with the following results:

ANR1461I RUN: Executing command script VOLUME_PCT.
ANR1466I RUN: Command script VOLUME_PCT, Line 5 : select volume_name,
stgpool_name, pct_utilized from volumes
where upper(stgpool_name)='LNTAPECO' and pct_utilized < 50 or
upper(stgpool_name)='LNTAPENO' and pct_utilized < 50
 or upper(stgpool_name)='UNTAPECO' and pct_utilized < 50 or
upper(stgpool_name)='UNTAPENO' and pct_utilized < 50
 or upper(stgpool_name)='NTTAPECO' and pct_utilized < 50 or
upper(stgpool_name)='NTTAPENO' and pct_utilized < 50.

VOLUME_NAME    STGPOOL_NAME    PCT_UTILIZED
-----------    ------------    ------------
A40004         NTTAPECO                 8.1
A40012         UNTAPENO                 2.0
A40015         NTTAPENO                48.0
A40021         NTTAPENO                32.9
A40081         NTTAPECO                47.4
A40156         NTTAPECO                15.4

ANR1494I RUN: Command return code is 0.
ANR1487I RUN: Command return code is RC_OK (OK).
ANR1462I RUN: Command script VOLUME_PCT completed successfully

My question is: how can I generate MOVE DATA commands using the VOLUME_NAMEs
in the results of the above script?

Thanks

Joe Marchesani .





Re: periodic ADSM/TSM shutdown

2000-10-12 Thread Thomas A. La Porte

We have never seen any issues with long-running ADSM servers
(note we're further down-level than you are!):

tsm: DLADSM> q stat
ADSM Server for AIX-RS/6000 - Version 3, Release 1, Level 1.5


Server Name: DLADSM
  Server Installation Date/Time: 08/06/1996 17:18:36
   Server Restart Date/Time: 05/21/2000 20:58:39
[...]


Folks who recommend periodic reboots and restarts are offering
solutions that simply mask or alleviate any problems that may
exist.


On Thu, 12 Oct 2000, Pat Wilson wrote:

I've never seen any evidence that a periodic restart is needed.

Pat Wilson
Dartmouth College
[EMAIL PROTECTED]


  I am running
  AIX 4.3.3 on an S80
  ADSM 3.1.2.40 (yes, I know; I am trying to get us to upgrade, but have
  you ever seen anything move fast in the corporate world?)

  A couple of weeks ago I ran into a problem where the server couldn't
  successfully back up the database and my log file filled up... I solved
  it by cycling the ADSM server.  Then two days later BMC told us that we
  need to cycle the ADSM server on a regular basis.  What are everyone's
  thoughts on that?  I looked around and haven't seen any recommendations.

  TIA

  Becky Davidson
  Data Manager/AIX Administrator
  EDS/Earthgrains
  voice: 314-259-7589
  fax: 314-877-8589
  email: [EMAIL PROTECTED]





Re: H80 - TSM - 3494 --- Bottleneck?

2000-08-10 Thread Thomas A. La Porte

Ronnie,

Not sure I'm reading your message correctly, but my understanding
was that the 3494 can connect to a host either via Ethernet or
Serial cable, not via SCSI. The 3590 drives are attached to the
host via either SCSI, or in your case fiber.

Nevertheless, if your question is whether the ethernet connection
from the host to the 3494 will be a bottleneck, the answer is no. The
only traffic that passes across that connection is robotic
instructions (e.g. mount tape A in drive 1, dismount tape B from
drive 2, etc.). The actual data transfer between host and tape
drive goes over the fiber connection between the host and drive.
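
If you ever want to sanity-check that control path, the library
driver's utility can query the 3494 over it; assuming the usual Library
Manager Control Point device name, something like:

   mtlib -l /dev/lmcp0 -qL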

 -- Tom

Thomas A. La Porte
DreamWorks SKG
[EMAIL PROTECTED]

On Thu, 10 Aug 2000, Ronnie Crowder wrote:

We are in the process of getting a new TSM server and library.  I was
thinking about how the hardware connects up to the 3494 and drives.  The
drives are fiber-connected to the RS/6000 and the 3494 is SCSI-connected.
My question is this: do all the clients go through the single ethernet on
the 6000 to get to the 3494 drives?  If so, is this a potential bottleneck,
and has anyone had any issues with this setup?  About how many clients are
connected?