Re: Defining TSM Disk pools on ESS wisdom wanted

2003-04-04 Thread Seay, Paul
Ever tried maxpgahead?  That is why you want JFS.  Causes things to fly.
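For reference, maxpgahead is set with vmtune on AIX; a sketch, assuming
vmtune is installed from bos.adt.samples (the value 256 is only an example,
not a tested recommendation):

   /usr/samples/kernel/vmtune -R 256   # maxpgahead for sequential JFS reads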

-Original Message-
From: Emil S. Hansen [mailto:[EMAIL PROTECTED]
Sent: Thursday, March 27, 2003 4:54 AM
To: [EMAIL PROTECTED]
Subject: Re: Defining TSM Disk pools on ESS wisdom wanted


On Thu, Mar 27, 2003 at 10:25:15AM +0100, Daniel Sparrman wrote:
> So, using 24GB volumes is probably the best way to go.  Also, JFS
> formatted volumes is the way to go.

Why would you prefer JFS volumes? I use raw and find it a bit better since
the VM won't cache anything.
--
Best Regards
Emil S. Hansen - [EMAIL PROTECTED] - ESH14-DK
UNIX Administrator, Berlingske IT - www.bit.dk
PGP: 109375FA/ABEB 1EFA A764 529E 82B5  0943 AD3B 1FC2 1093 75FA

"Gravity can not be held responsible for people falling in love."
- Albert Einstein


Re: TSM Server not using maximum number of drives.

2003-04-04 Thread Seay, Paul
Collocation can also cause fewer tape drives to be used.

-Original Message-
From: Hart, Charles [mailto:[EMAIL PROTECTED]
Sent: Monday, March 31, 2003 11:30 AM
To: [EMAIL PROTECTED]
Subject: Re: TSM Server not using maximum number of drives.


The Resource Utilization parameter in your client dsm.opt sets the number
of streams that the backup will use, and hence the number of drives.  So if
you only have two streams going directly to tape, you will only use 2 tape
drives.

I think Resource Utilization sets the multi-streaming based on file
systems, meaning one file system, one stream?  3 file systems, 3 streams?
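For reference, a sketch of the client option in question; the value 5 is
only an example, and the exact value-to-session mapping depends on the
client level:

   RESOURCEUTILIZATION 5

The range is 1 to 10; higher values let the client open more parallel
sessions, and with backups going direct to tape each data session needs
its own drive.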

Regards,

Charles

-Original Message-
From: Balasubramanian Krishnamurthy [mailto:[EMAIL PROTECTED]
Sent: Friday, March 28, 2003 7:32 PM
To: [EMAIL PROTECTED]
Subject: TSM Server not using maximum number of drives.


Hi,

I have a TSM server (version 5.1.6.2) with a library having 6 drives.  I
have defined the maximum mounts allowed for one of my clients to be 4.
The mount limit in the DEVCLASS is set to DRIVES.  However, when this
client backs up (direct to tape) it uses only 2 drives instead of 4, even
though there are available drives.  There are enough scratch tapes too.

I am not sure if I am missing any other parameter that is restricting the
backups to use only 2 drives instead of 4.

Thanks
Bala Krishnamurthy


Re: upgrade TDP for R3

2003-04-04 Thread Seay, Paul
Works very well.  There is a memory assertion error that may not be fixed in
this level, but that problem has always been there.

-Original Message-
From: Dale Gieseke [mailto:[EMAIL PROTECTED]
Sent: Thursday, April 03, 2003 1:27 PM
To: [EMAIL PROTECTED]
Subject: upgrade TDP for R3


Greetings,
Am preparing to upgrade TDP for R3 (oracle) from V3.2.0.6 to V3.2.0.13. This
level became available on March 5, 2003.

Is anyone using this level of TDP for R3? Does it have any known problems?

Thanks in advance!

Dale Gieseke
The Toro Company


Re: Tivoli Storage Resource Reporting

2003-03-04 Thread Seay, Paul
I have seen this product demonstrated; pretty impressive.

-Original Message-
From: Chris Young [mailto:[EMAIL PROTECTED]
Sent: Tuesday, March 04, 2003 7:20 AM
To: [EMAIL PROTECTED]
Subject: Re: Tivoli Storage Resource Reporting


While I couldn't comment in any great detail on this, I have seen the Tivoli
Storage Resource Manager product (formerly the TrelliSoft product) in
action. It appears to provide detailed reports on data that is stored on
both SAN and LAN based disk volumes (assuming of course that you have a
server that either has the volume mounted locally or there is a server
locally (and direct) attached to the volume on the other end of the
network). It is also fairly easy to install and get started with in terms of
gathering information. Understanding all of the reports that are available
is a different story.

It is probably worth checking out though. I have heard rumors that this will
become even more integrated with the TSM product in the future.

- Chris Young

-Original Message-
From: Jin Bae Chi [mailto:[EMAIL PROTECTED]
Sent: Friday, February 28, 2003 4:33 PM
To: [EMAIL PROTECTED]
Subject: Tivoli Storage Resource Reporting


Hi, all

I'm running a TSM 4.2 for LAN servers and another TSM 4.2 for SAN servers. I
heard Tivoli has a good product about disk storage reporting from LAN and
SAN. Anyone using any good reporting and managing software from Tivoli? My
main interest is all disk utilization from LAN and SAN attached servers. Any
comment will be appreciated. Thanks.

Jin Bae Chi (Gus)
System Admin/Tivoli
Data Center, CSCC
614-287-5270
614-287-5488 Fax
[EMAIL PROTECTED]


Re: TSM API Crashing DB2

2003-03-04 Thread Seay, Paul
I believe at the 4.2.1 level you have to get a special version of the client
to run on AIX V5.1.

ftp://service.boulder.ibm.com/storage/tivoli-storage-management/patches/client/v4r2/AIX/v423/AIX51/

-Original Message-
From: Jeff Rogers [mailto:[EMAIL PROTECTED]
Sent: Tuesday, March 04, 2003 8:58 PM
To: [EMAIL PROTECTED]
Subject: TSM API Crashing DB2


This is the environment.

AIX 5.1
TSM Client and API 4.2.11
DB2 7.2 FixPack 4

The BA client can do incrementals and archives just fine.

We set up all the DB2 variables and set up the dsm.sys and dsm.opt for the
API - we have other DB2 servers on AIX 4.3.3, same TSM and DB2 versions,
working just fine.
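For reference, a sketch of the API environment variables in the instance
owner's profile; the paths and instance name (db2inst1) are assumptions
based on default AIX install locations:

   export DSMI_DIR=/usr/tivoli/tsm/client/api/bin
   export DSMI_CONFIG=/usr/tivoli/tsm/client/api/bin/dsm.opt
   export DSMI_LOG=/home/db2inst1/tsmlog

The instance normally has to be recycled (db2stop/db2start) for changes to
take effect.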

Send a backup of a small database from DB2 and get the following.

db2 => backup db databasename online use tsm
SQL1224N  A database agent could not be started to service a request, or was
terminated as a result of a database system shutdown or a force command.
SQLSTATE=55032

This is a brand new system, so there could be some stupidity in the DB2
setup, missing OS patches, etc.  In any event, TSM actually CRASHING DB2 is
a really scary problem.

Before I dig, I wanted to see if anybody else has seen this one.

-JeffR


Re: Errors on 3590 K tapes

2003-03-04 Thread Seay, Paul
This problem was fixed in the F_0295 Level (I think that is right).  It is
the latest level.

-Original Message-
From: Zoltan Forray/AC/VCU [mailto:[EMAIL PROTECTED]
Sent: Monday, March 03, 2003 3:45 PM
To: [EMAIL PROTECTED]
Subject: Re: Errors on 3590 K tapes


Sounds a lot like what I was seeing when we got our "new to us" 3590E1A
drives.

The problem was the microcode level. It was way behind and caused all kinds
of grief. 2/3 of our new tapes could not be labeled: the drive would load
the tape and spin forever until it timed out.

Once the microcode/firmware was upgraded, the problems went away. We went
back and relabeled the tapes that previously could not be labeled.

Thomas Denier <[EMAIL PROTECTED]>
Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]> 03/03/2003 03:30
PM Please respond to "ADSM: Dist Stor Manager"


To: [EMAIL PROTECTED]
cc:
Subject:Errors on 3590 K tapes


We are running a 4.2.3.2 TSM server under OS/390. We have four 3590 tape
drives that were and are working well with 3590 J tapes (the ones with 20 GB
capacity without compression). We are in the process of migrating our onsite
storage pools to 3590 K tapes (the ones with 40 GB capacity without
compression). So far we have had seven of the new tapes forced read-only
because of I/O errors. We have one full 3590 K tape, and seventeen in
'FILLING' status that have not so far had any I/O errors. The errors have
been spread across at least three of the tape drives. In most cases, there
is an OS/390 message like the following:

IOS071I 0C62,1D,ADSM, MISSING CHANNEL AND DEVICE END

and a TSM message like the following:

ANR5351E Error reading BlockID on device 0C62.

Do 3590 K tapes typically suffer the kind of infant mortality rate we are
seeing?


Re: Errors on 3590 K tapes

2003-03-04 Thread Seay, Paul
This is a problem for all the newer tape technologies.  Typically, this is
caused by a bad batch of tapes or a case of tapes that has been slammed on
the floor or the librarian stacked them and knocked them on the floor.

Early in the delivery of K tapes there was a packing problem.

Also, make sure you have the latest microcode on your drives.  If you are
running downlevel, you can get errors under certain conditions.  I cannot
remember what the MIH has to be set to, or whether you can take the
defaults.  The deal is that the K tape is much longer and can take a long
time to rewind.
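From memory only, a sketch of the kind of IECIOSxx parmlib entry involved;
the time value and device range here are assumptions, so check the
documentation for the real numbers:

   MIH TIME=20:00,DEV=(0C60-0C6F)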

There is a document from IMATION on this subject.  I suggest you get that
document.

-Original Message-
From: Thomas Denier [mailto:[EMAIL PROTECTED]
Sent: Monday, March 03, 2003 3:30 PM
To: [EMAIL PROTECTED]
Subject: Errors on 3590 K tapes


We are running a 4.2.3.2 TSM server under OS/390. We have four 3590 tape
drives that were and are working well with 3590 J tapes (the ones with 20 GB
capacity without compression). We are in the process of migrating our onsite
storage pools to 3590 K tapes (the ones with 40 GB capacity without
compression). So far we have had seven of the new tapes forced read-only
because of I/O errors. We have one full 3590 K tape, and seventeen in
'FILLING' status that have not so far had any I/O errors. The errors have
been spread across at least three of the tape drives. In most cases, there
is an OS/390 message like the following:

IOS071I 0C62,1D,ADSM, MISSING CHANNEL AND DEVICE END

and a TSM message like the following:

ANR5351E Error reading BlockID on device 0C62.

Do 3590 K tapes typically suffer the kind of infant mortality rate we are
seeing?


Re: Need help with TDP Oracle 5.1.5.

2003-03-02 Thread Seay, Paul
I do not think the first two require the library.  Have you checked the
rights on libobk.a?  It may be that world rights are being used because
libobk.a is owned by someone other than the user requesting it.  I bet
world rights are not executable.
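A quick way to check, assuming the default TDPO install path:

   ls -l /usr/tivoli/tsm/client/oracle/bin/libobk.a
   chmod o+rx /usr/tivoli/tsm/client/oracle/bin/libobk.a

The chmod is only needed if "other" is missing read/execute.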

-Original Message-
From: Guy Korn [mailto:[EMAIL PROTECTED]
Sent: Sunday, March 02, 2003 6:13 AM
To: [EMAIL PROTECTED]
Subject: Need help with TDP Oracle 5.1.5.


Hi list,

We have a problem configuring TDPO 5.1.5 with RAC 9.2: libobk.a (64/32bit)
is failing to load.

Our Env.
AIX 5.1 32bit kernel with 64bit application support.
HACMP 4.4.1 ES CRM.
TSM srv 5.1.5 (intel base)
TDPO 5.1.5 64bit & 2.2.1 32bit
API 5.1.5 64bit
BA 5.1.5 32bit

The BA client works o.k.
"tdpoconf showenv" & "tdpoconf passw" work o.k.
"sbttest" is failing - /usr/tivoli/tsm/client/oracle/bin/libobk.a could not
be loaded.  The RMAN script is failing:
---
RMAN> run
2> {
3> allocate channel c1 type sbt
4> parms='SBT_LIBRARY=/home/oracle/product/9.2/lib/libobk.a,
5> ENV=(TDPO_OPTFILE=/usr/tivoli/tsm/client/oracle/bin64/tdpo.opt)';
6> backup tablespace "test";
7> }
8>
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03009: failure of allocate command on c1 channel at 03/02/2003 13:13:19
ORA-19554: error allocating device, device type: SBT_TAPE, device name:
ORA-27211: Failed to load Media Management Library
Additional information: 8
Recovery Manager complete.

When trying Oracle's disk API, everything is o.k.:
--
RMAN> run {
2> allocate channel c1 type sbt format '%U'
3> parms = 'SBT_LIBRARY=oracle.disksbt,
3> ENV=(BACKUP_DIR=/home/oracle/product/RCAT/rman)';
4> backup tablespace "test";
5> }
6>
allocated channel: c1
channel c1: sid=18 devtype=SBT_TAPE
channel c1: WARNING: Oracle Test Disk API
Starting backup at 02-MAR-03
channel c1: starting full datafile backupset
channel c1: specifying datafile(s) in backupset
input datafile fno=00022 name=/dev/rmb_test
channel c1: starting piece 1 at 02-MAR-03
channel c1: finished piece 1 at 02-MAR-03
piece handle=02egvvnb_1_1 comment=API Version 2.0,MMS Version 8.1.3.0
channel c1: backup set complete, elapsed time: 00:00:02
Finished backup at 02-MAR-03
released channel: c1
Recovery Manager complete.


Re: TDP for SAP, Solaris 8 Environment, Restore Failures and Conf igur ation Questions

2003-03-02 Thread Seay, Paul
We found the problem, but have not had time to reproduce it for Germany.
Back off to the 32 bit API and TDP.  Everything will work just fine then.

-Original Message-
From: Bruce Lowrie [mailto:[EMAIL PROTECTED]
Sent: Friday, February 28, 2003 1:42 PM
To: [EMAIL PROTECTED]
Subject: TDP for SAP, Solaris 8 Environment, Restore Failures and Configur
ation Questions


We are experiencing an interesting problem on our TDP for R3 restores. We
have an old E4000 being used as a development system. We are at Solaris 8
patch 17. TIVsmCapi - 4.2.1 TIVsmCba - 4.2.1 and TDP/R3ora64 - 3.2.0.11. We
back up fine. On restore, things start out good. We use netstat to monitor
the number of packets moving into the E4000 from the TSM server. Runs along
moving 8-10,000 packets every 5 seconds. Then it just drops off to 400
packets every 5 seconds. It stays there. We have found that if we disable 5
of the six processors using the psradm command, it will kick the process
back up. We then put the processors back on line and it continues at
8-10,000 packets again. Maybe 15 minutes later we have to repeat the
process because it drops back to 400. Paul Seay had a similar problem
about a year ago; he stated: "What we have been able to determine is that
the tape
drives run so fast on the restore they apparently overrun the TCP/IP buffers
used to transfer data from the tapes to the multiple threads necessary to
support the restore. The restore fails every time unless the number of
drives and sessions are set to 1 during the restore."  Paul, if you're
listening: how do you know that the TCP/IP buffers are being saturated? I
would like to check that as well. Thanks.


Bruce E. Lowrie
Sr. Systems Analyst
Information Technology Services
Storage, Output, Legacy
*E-Mail: [EMAIL PROTECTED]
*Voice: (989) 496-6404
7 Fax: (989) 496-6437
*Post: 2200 W. Salzburg Rd.
*Post: Mail: CO2111
*Post: Midland, MI 48686-0994
This e-mail transmission and any files that accompany it may contain
sensitive information belonging to the sender. The information is intended
only for the use of the individual or entity named. If you are not the
intended recipient, you are hereby notified that any disclosure, copying,
distribution, or the taking of any action in reliance on the contents of
this information is strictly prohibited. Dow Corning's practice statement
for digitally signed messages may be found at
http://www.dowcorning.com/dcps. If you have received this e-mail
transmission in error, please immediately notify the Security Administrator
at 



This email has been scanned for all viruses by the MessageLabs SkyScan
service. For more information on a proactive anti-virus service working
around the clock, around the globe, visit http://www.messagelabs.com



Re: Fibre Channel Loop Disk and Tape device

2003-03-02 Thread Seay, Paul
This is not a TSM issue; it is an FC issue.  Are you running your tape on
the same HBAs as the disk containing the Oracle database?  This is a no-no
if you are running random I/O to the Oracle database at the same time.  If
that is the case, you will need dedicated HBAs for the tape on the Oracle
database client.

-Original Message-
From: Jin Bae Chi [mailto:[EMAIL PROTECTED]
Sent: Thursday, February 27, 2003 4:39 PM
To: [EMAIL PROTECTED]
Subject: Fibre Channel Loop Disk and Tape device


Hi, experts,

We run 2 TSM 4.2 on AIX 4.3.3; clients are AIX, Netware, Win2K;

We plan to buy 3584 LTO2 FC tape drives and FAStT700 FC 2Gb disk drives. I
heard that they can be directly connected to 2109 SAN switches, but when I
try to do LAN-free backup, which will be sequential I/O from each AIX node,
any random I/O for disk will time out and give an error message, and then
Oracle will crash.

Some recommended having an additional HBA on each server for backup only,
through proper zoning on the switch, but this will cost too much for me.
Talking about a pure fibre connection between host and disk and tape, the
LAN-free concept wouldn't have much meaning (to me). TSM should have some
options for backup strategies for SAN servers and devices. I can't find one
so far.

As most SAN disk storage offers 'flash copy' options, should I redesign my
TSM environment to use this option, or wait for server-free backup from TSM
for the AIX platform? Any suggestion will be appreciated.

Jin Bae Chi (Gus)
System Admin/Tivoli
Data Center, CSCC
614-287-5270
614-287-5488 Fax
[EMAIL PROTECTED]


Re: too many filesystems to backup

2003-02-20 Thread Seay, Paul
Have you considered using the dsm.opt to contain exclude.fs statements for
the ones you do not want to back up, and letting it just back up everything
else?
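As a sketch, with made-up filesystem names:

   domain all-local
   exclude.fs /data01
   exclude.fs /data02

One exclude.fs line per filesystem you want skipped; the incremental picks
up everything else.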

Paul D. Seay, Jr.
Technical Specialist
Northrop Grumman Information Technology
757-688-8180


-Original Message-
From: Jitendra Thakur [mailto:[EMAIL PROTECTED]]
Sent: Thursday, February 20, 2003 5:42 AM
To: [EMAIL PROTECTED]
Subject: too many filesystems to backup


Dear *SMites

I have an AIX system which has 65 filesystems in total, and I want to back
up 27 of them. Is there any other way apart from specifying each one of
them (and there we again have the limitation of a max of 20 objects in a
single schedule/command)? Anything more intelligent (???) is welcome...

Thanks in advance.

Jitendra Thakur
+91-9811090511


DISCLAIMER:
The information in the message is confidential and may be legally
privileged. It is intended solely for the addressee. Access to this message
by anyone else is unauthorised. If you are not the intended recipient, any
disclosure, copying, or distribution of the message, or any action or
omission taken by you in reliance on it, is prohibited and may be unlawful.
Please immediately contact the sender if you have received this message in
error. Thank you.



Re: [new] DB Cache Hit Rate Question

2003-02-20 Thread Seay, Paul
If this is a dedicated AIX machine, set vmtune maxperm to 20 and minperm to
10; otherwise paging will start happening.
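A sketch of the command, assuming vmtune is installed from bos.adt.samples
(-P is maxperm%, -p is minperm%):

   /usr/samples/kernel/vmtune -P 20 -p 10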

Paul D. Seay, Jr.
Technical Specialist
Northrop Grumman Information Technology
757-688-8180


-Original Message-
From: David le Blanc [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, February 19, 2003 10:07 PM
To: [EMAIL PROTECTED]
Subject: [new] DB Cache Hit Rate Question


Hi Guys!

In following this thread, I thought I'd check out customer experience with a
stock TSM server they have.

q db f=d  showed the DB hit rate at 85%

q opt showed the bufpoolsize at 2350

This was with auto tuning.

After disabling auto-bufpool tuning, I increased the bufpoolsize to 131072
(on a 2G system also running one other application) and noticed the hit
rate climb to 99.2% over the next two days.

vmstat shows 'avm' of around 260,000 pages, or just over
a GIG.  I therefore _assume_ the remainder constitutes buffer-cache.

vmstat showed absolutely no paging for the period I monitored (about 6 hours
during a working day), so would it be safe to further increase the
bufpoolsize?


Also, the customer noticed a TSM db backup which took 35 minutes two days
ago, now completes in 12 minutes. Could that indicate a problem?

TIA
Dave



Re: password encryption

2003-02-20 Thread Seay, Paul
In encryption speak, the node name is usually called the public key, and
the private key is what is used to encrypt the message.  This is a nice
implementation because during a password change (which is probably in the
message) the new encryption key (the password) is not exposed.

Paul D. Seay, Jr.
Technical Specialist
Northrop Grumman Information Technology
757-688-8180


-Original Message-
From: Andrew Raibeck [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, February 19, 2003 8:02 PM
To: [EMAIL PROTECTED]
Subject: Re: password encryption


To clarify my earlier response on this:

The (encrypted) password is not actually sent between client and server,
except when the password is being changed. During authentication, the client
sends the server a message that is encrypted using the password as the key.
The server knows what the decrypted message should be, so if the wrong
password was used to encrypt the message, then the authentication will fail.

Regards,

Andy

Andy Raibeck
IBM Software Group
Tivoli Storage Manager Client Development
Internal Notes e-mail: Andrew Raibeck/Tucson/IBM@IBMUS
Internet e-mail: [EMAIL PROTECTED] (change eye to i to reply)

The only dumb question is the one that goes unasked.
The command line is your friend.
"Good enough" is the enemy of excellence.

Andrew Raibeck/Tucson/IBM@IBMUS
Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]> 02/19/2003 14:56
Please respond to "ADSM: Dist Stor Manager"


To: [EMAIL PROTECTED]
cc:
Subject:Re: password encryption



The password is indeed encrypted.

Regards,

Andy

Andy Raibeck
IBM Software Group
Tivoli Storage Manager Client Development
Internal Notes e-mail: Andrew Raibeck/Tucson/IBM@IBMUS
Internet e-mail: [EMAIL PROTECTED] (change eye to i to reply)

The only dumb question is the one that goes unasked.
The command line is your friend.
"Good enough" is the enemy of excellence.

"Prather, Wanda" <[EMAIL PROTECTED]>
Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]> 02/19/2003 14:40
Please respond to "ADSM: Dist Stor Manager"


To: [EMAIL PROTECTED]
cc:
Subject:Re: password encryption



I've always been told that the password is NOT sent in plain text, it's
encrypted. (but I've never had a sniffer to check it myself).

-Original Message-
From: Eliza Lau [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, February 19, 2003 10:36 AM
To: [EMAIL PROTECTED]
Subject: password encryption


Does anyone know how the stored password on the client machine is passed to
the server for authentication?

The user has 'password generate' in his dsm.opt.  The password is stored in
the Registry of his Windows 2000 client.  When the TSM client starts, is
the password sent to the server in plain text or encrypted?

Thanks,
Eliza Lau
Virginia Tech Computing Center
1700 Pratt Drive
Blacksburg, VA 24060



Re: TSM Upgrade frequency: SAN upgrade 4.2.1.7 to 4.2.1.9

2003-02-20 Thread Seay, Paul
I cannot remember now, but I believe this is one upgrade that absolutely
required both to go at the same time.  I am thinking it is 4.2.1.8 that
required all to be updated.  Prior to 4.2.2.0 they did not check the patch
level, but since the mismatch caused so much grief for support it is now
enforced.  Believe me, I know.  Level 2 sent me 4.2.2.13 with a patch on
top to run on my server while my storage agents were at 4.2.2.12; every one
of them puked and had to be recycled after reverting back to 4.2.2.12.
There is a real nice message that comes out now to tell you they are not at
the same level.

Paul D. Seay, Jr.
Technical Specialist
Northrop Grumman Information Technology
757-688-8180


-Original Message-
From: Eliza Lau [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, February 19, 2003 10:51 AM
To: [EMAIL PROTECTED]
Subject: Re: TSM Upgrade frequency


Todd,

From talking to a TSM Level 2 consultant, only the first 3 numbers of a
release have to match between the server and the storage agent.  If all your
storage agents are at 4.2.1.7, upgrading the server to 4.2.1.9 should be ok.

Eliza Lau
Virginia Tech Computing Center


>
> For you folks that patch your TSM server frequently (like the
> individual from Oxford University who upgraded from some version to
> 5.1.6.0 on Feb3, then to 5.1.6.1 on Feb7, and now planning to upgrade
> to 5.1.6.2 within a couple weeks), how many of you have Storage Agent
> nodes? I am still on 4.2.1.7 simply because of the Storage Agent
> issue.  We have a problem with TSM server crashes that is supposed to
> be fixed in 4.2.1.9+, but I find it hard to justify upgrading to
> 4.2.1.9+ when I know I will have to upgrade to 5.x soon after.
> According to the documentation I have read, the TSM Server version and
> the Storage Agent version must be at the same level. We have several
> Storage Agent nodes, and all of them are considered 24x7 servers,
> mission critical.  It is like pulling teeth to get downtime for any
> one of them just for normal OS or Application type maint.  I dread the
> time where I have to take them all down at once.  Even more so, I
> dread having to take all of them down at the same time several times a
> month just to keep updating TSM Storage Agent versions because the TSM
> Server needed a patch. How do you folks with several mission critical,
> 24x7 servers go about doing TSM Server upgrades and TSM Storage Agent
> upgrades?  Especially considering that TSM Server outages are best
> done during the day when no backups are occurring, and TSM Client Node
> outages are done during the night to reduce impact on the users of
> those Client Nodes.
>



Re: TSM Upgrade frequency: Storage Agent Requirement

2003-02-19 Thread Seay, Paul
Not quite!  This used to be true before 4.2.2.0, but not any longer.  They
must match right down to the patch level; I was bitten by 4.2.2.12 and
4.2.2.13.  However, TSM Development is promising to make this issue go
away, hopefully in V5.2.  You will probably have to upgrade all at once
then, but after that we pray not.

Paul D. Seay, Jr.
Technical Specialist
Northrop Grumman Information Technology
757-688-8180


-Original Message-
From: Eliza Lau [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, February 19, 2003 10:51 AM
To: [EMAIL PROTECTED]
Subject: Re: TSM Upgrade frequency


Todd,

From talking to a TSM Level 2 consultant, only the first 3 numbers of a
release have to match between the server and the storage agent.  If all your
storage agents are at 4.2.1.7, upgrading the server to 4.2.1.9 should be ok.

Eliza Lau
Virginia Tech Computing Center


>
> For you folks that patch your TSM server frequently (like the
> individual from Oxford University who upgraded from some version to
> 5.1.6.0 on Feb3, then to 5.1.6.1 on Feb7, and now planning to upgrade
> to 5.1.6.2 within a couple weeks), how many of you have Storage Agent
> nodes? I am still on 4.2.1.7 simply because of the Storage Agent
> issue.  We have a problem with TSM server crashes that is supposed to
> be fixed in 4.2.1.9+, but I find it hard to justify upgrading to
> 4.2.1.9+ when I know I will have to upgrade to 5.x soon after.
> According to the documentation I have read, the TSM Server version and
> the Storage Agent version must be at the same level. We have several
> Storage Agent nodes, and all of them are considered 24x7 servers,
> mission critical.  It is like pulling teeth to get downtime for any
> one of them just for normal OS or Application type maint.  I dread the
> time where I have to take them all down at once.  Even more so, I
> dread having to take all of them down at the same time several times a
> month just to keep updating TSM Storage Agent versions because the TSM
> Server needed a patch. How do you folks with several mission critical,
> 24x7 servers go about doing TSM Server upgrades and TSM Storage Agent
> upgrades?  Especially considering that TSM Server outages are best
> done during the day when no backups are occurring, and TSM Client Node
> outages are done during the night to reduce impact on the users of
> those Client Nodes.
>



Re: Mixing 3590E and 3590K volumes

2003-02-18 Thread Seay, Paul
If your 3590 drives have the green 2x sticker on the back of them (open the
cabinet and look), they can support K cartridges.  3590H drives can support
K carts.  The K carts have a different notch configuration than the J
cartridges.  There is no issue with having a mixed bag except that it
throws off your estimated capacity numbers.

The K cartridges also require Turtle cases to ship them if you intend to
use certified cases.  They are a lot more sensitive to being damaged.

Paul D. Seay, Jr.
Technical Specialist
Northrop Grumman Information Technology
757-688-8180


-Original Message-
From: Fred Johanson [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, February 18, 2003 2:50 PM
To: [EMAIL PROTECTED]
Subject: Mixing 3590E and 3590K volumes


My 3494 is filled with 3590E carts.  If I were to use 3590K carts, my robot
wouldn't be so filled.  This is the current definition of the DEVC TAPE:

>   Device Class Name: TAPE
>  Device Access Strategy: Sequential
>  Storage Pool Count: 8
> Device Type: 3590
>  Format: DRIVE
>   Est/Max Capacity (MB):
> Mount Limit: 7
>Mount Wait (min): 60
>   Mount Retention (min): 2

If I were to start introducing the longer format carts into my system, would
TSM use them interchangeably with the old ones, i.e., would reclamation
write 2 Es on a K and 1/2 a K on an E, or would I have to have two separate
device classes?

TIA



Re: OS390 SELFTUNEBUFPOOLSIZE yes/no?

2003-02-17 Thread Seay, Paul
I like to set it about 10% higher than the happy point and set it to yes on
AIX, then check it every once in a while to see if it needs adjusting.  The
problem is you can create a lot of GETMAINs on MVS if you do not set it
high enough to begin with.  On MVS, you are probably best to set it to NO
so that you do not get unpredictable memory usage on the machine.  This is
the difference between using a dedicated machine for TSM and a general-use
machine that has to share with other workloads.  You have to tune TSM on
MVS like you would a TP monitor such as CICS, IMS, etc.
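As a sketch, the fixed-size alternative in dsmserv.opt would look like this
(131072 KB is the 128M figure mentioned below):

   BUFPOOLSIZE 131072
   SELFTUNEBUFPOOLSIZE NO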

Paul D. Seay, Jr.
Technical Specialist
Northrop Grumman Information Technology
757-688-8180


-Original Message-
From: MC Matt Cooper (2838) [mailto:[EMAIL PROTECTED]]
Sent: Monday, February 17, 2003 9:28 AM
To: [EMAIL PROTECTED]
Subject: OS390 SELFTUNEBUFPOOLSIZE yes/no?


Hello all,
I have been reading the dialog on OS/390 performance tuning.  I too have
found that lowering the size of the address space to 512MB has helped.  I
have also seen improvements in my throughput by cycling TSM.  (I just don't
do it as often.)  One thing that I was wondering is if anyone has done any
research on an advantage to NOT USING SELFTUNEBUFPOOLSIZE.  Right now I set
BUFPOOLSIZE to 32760 and SELFTUNEBUFPOOLSIZE yes.  From the looks of things
TSM seems to be able to cause some thrashing with MVS memory management.
So I wonder if SELFTUNEBUFPOOLSIZE should be set to NO and I should just
allocate a bigger fixed BUFPOOLSIZE (like the 128M that was suggested)?
Matt



Re: Estimating size of backup sets

2003-02-16 Thread Seay, Paul
This will give you the filespace usage, which is roughly the active
backupset size, except that it also counts data excluded from backup.  It
should be good enough for an estimate.

select node_name, cast(sum(capacity*pct_util/100) as decimal(10,3)) as "MB
of Used" from filespaces group by node_name
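For reference, a sketch of running it non-interactively, all on one line;
the admin id and password are placeholders, and the column alias is dropped
to avoid shell quoting:

   dsmadmc -id=admin -password=secret "select node_name,
   cast(sum(capacity*pct_util/100) as decimal(10,3)) from filespaces
   group by node_name"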

However, I am not sure how you are going to get this done.  Are you
creating these for faster recovery from the server or for standalone use on
a CD?  If it is from the server, you may want to think about collocation by
node.

Paul D. Seay, Jr.
Technical Specialist
Northrop Grumman Information Technology
757-688-8180


-Original Message-
From: Halvorsen Geirr Gulbrand [mailto:[EMAIL PROTECTED]]
Sent: Friday, February 14, 2003 5:02 AM
To: [EMAIL PROTECTED]
Subject: Re: Estimating size of backup sets


I agree with John, this sounds very ambitious.
If you are only interested in active files, there is the possibility of
checking it on the actual client: how much data is in each filespace
(filesystem/drive/etc). A worst case (it gives all backup data, active and
inactive) would be "q occ t=b".

Rgds,
Geirr G. Halvorsen

-Original Message-
From: John Naylor [mailto:[EMAIL PROTECTED]]
Sent: 14. februar 2003 10:58
To: [EMAIL PROTECTED]
Subject: Re: Estimating size of backup sets


Graham,
Your plan is extremely ambitious.
Is this a one-off, or are you planning to do this regularly? How are you
going to achieve it? Anyway, you can get a reasonable idea from taking the
filespace occupancy; unless you have major excludes from backup, this is
going to be your active file size. John

Graham Trigge <[EMAIL PROTECTED]> on 02/13/2003
10:35:24 PM

Please respond to "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>

To:   [EMAIL PROTECTED]
cc:(bcc: John Naylor/HAV/SSE)
Subject:  Estimating size of backup sets



TSMers,

Is there any way people know of to estimate how large a backupset is going
to be? I have close to 700 nodes I want to generate backupsets for, and I
want to know what sort of space I will be using up with them. I am assuming
it will be some sort of SQL statement looking at active files, but I can't
figure it out past that.

Any help would be appreciated.

Regards,


--

Graham Trigge
IT Technical Specialist
Server Support
Telstra Australia

Phone: (02) 9882 5831
Fax:  (02) 9882 5993
Mobile: 0409 654 434




The information contained in this e-mail and any attachments to it:
(a) may be confidential and if you are not the intended recipient, any
interference with, use, disclosure or copying of this material is
unauthorised and prohibited. If you have received this e-mail in error,
please delete it and notify the sender;
(b) may contain personal information of the sender as defined under the
Privacy Act 1988 (Cth).  Consent is hereby given to the recipient(s) to
collect, hold and use such information for any reasonable purpose in the
ordinary course of TES' business, such as responding to this email, or
forwarding or disclosing it to a third party.


**
The information in this E-Mail is confidential and may be legally
privileged. It may not represent the views of Scottish and Southern Energy
plc. It is intended solely for the addressees. Access to this E-Mail by
anyone else is unauthorised. If you are not the intended recipient, any
disclosure, copying, distribution or any action taken or omitted to be taken
in reliance on it, is prohibited and may be unlawful. Any unauthorised
recipient should advise the sender immediately of the error in transmission.

Scottish Hydro-Electric, Southern Electric, SWALEC and S+S
are trading names of the Scottish and Southern Energy Group.
**



Re: TSM 5.2 Question (BMR Again)

2003-02-13 Thread Seay, Paul
No, but there will probably be a V5.2 pre-announcement update at SHARE in
Dallas, February 24-28.

BMR has been mentioned as part of their intended product plan, but with no
commitments on when.  The issue is that Veritas BMR is fine for like
configurations, but not for disaster recovery, where the configurations may
be very dissimilar in the Windows environment.  AIX already has BMR with
mksysb, Solaris with JumpStart, and HP-UX with Ignite.  There are other
products coming to market that are much better for dissimilar Windows
restores.  The problem is nobody wants to pay for it when they think it
should be built into Windows.  The other piece is that reliable disk
hardware and corporate images are becoming so commonplace now that the only
remaining use for a BMR solution is in a DR scenario, and the DR vendors
adamantly oppose BMR right now because it is unreliable in a DR, very
dissimilar environment.

Paul D. Seay, Jr.
Technical Specialist
Northrop Grumman Information Technology
757-688-8180


-Original Message-
From: Gerhard Rentschler [mailto:[EMAIL PROTECTED]]
Sent: Thursday, February 13, 2003 7:44 AM
To: [EMAIL PROTECTED]
Subject: Re: TSM 5.2 Question


Has TSM 5.2 already been announced? I can't find any info on the TSM
website. Gerhard

---
Gerhard Rentschler         email: [EMAIL PROTECTED]
Regional Computing Center tel.   ++49/711/685 5806
University of Stuttgart   fax:   ++49/711/682357
Allmandring 30a
D 70550
Stuttgart
Germany



> -Original Message-
> From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf
> Of Joshua S. Bassi
> Sent: Wednesday, February 12, 2003 6:20 PM
> To: [EMAIL PROTECTED]
> Subject: Re: TSM 5.2 Question
>
>
> I doubt IBM will incorporate BMR features into ITSM until "the cow
> jumps over the moon."
>
> As far as purchasing Veritas BMR, Veritas has announced that they will
> not support future TSM versions and that all future development will
> be done exclusively with NetBackup.  Sorry, the buck stops here.
>
>
> --
> Joshua S. Bassi
> IBM Certified - AIX 4/5L, SAN, Shark
> Tivoli Certified Consultant -ADSM/TSM
> eServer Systems Expert -pSeries HACMP
>
> AIX, HACMP, Storage, TSM Consultant
> Cell (831) 595-3962
> [EMAIL PROTECTED]
>
> -Original Message-
> From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]] On Behalf
> Of Dearman, Richard
> Sent: Wednesday, February 12, 2003 9:05 AM
> To: [EMAIL PROTECTED]
> Subject: TSM 5.2 Question
>
>
> Does anyone know if TSM 5.2 will have any BMR capabilities build into
> it for unix and/or windows clients.  Or should I assume that it will
> never happen and purchase Baremetal Restore from veritas.
>
>
>
> Thanks
>
> ***EMAIL DISCLAIMER***
> This email and any files transmitted with it may be confidential and
> are intended solely for the use of th individual or entity to whom
> they are addressed. If you are not the intended recipient or the
> individual responsible for delivering the e-mail to the intended
> recipient, any disclosure, copying, distribution or any action taken
> or omitted to be taken in reliance on it, is strictly prohibited.  If
> you have received this e-mail in error, please delete it and notify
> the sender or contact Health Information Management 312.996.3941.



Re: summary-table

2003-02-11 Thread Seay, Paul
This is why it took so long for them to actually figure out what the
problem was.  As it turns out, if the session did not exceed the coded
server timeout, the problem did not show up on any release of the client or
the server; it is when it went over that it did.  In order to fix the
problem, the client and the server both must be updated if any of your
sessions run over the session timeout.

Paul D. Seay, Jr.
Technical Specialist
Northrop Grumman Information Technology
757-688-8180


-Original Message-
From: Gill, Geoffrey L. [mailto:[EMAIL PROTECTED]]
Sent: Monday, February 10, 2003 10:14 AM
To: [EMAIL PROTECTED]
Subject: Re: summary-table


> It's stuff like this that makes my shake my head in disbelief...
>
> The summary table was working fine will ALL my clients, regardless of
> version, when the server was 4.1.5.  After upgrading ONLY the server
> to 5.1.5. the byte counts in the summary table are Zeroes.

When I called support on this they originally told me I had to upgrade my
clients to fix it. Once I got a SQL statement from Paul that would show the
TSM version and amount backed up, I discovered that they had to be full of
it with their suggestion. What I discovered was that not only were some
versions coming back with data, other nodes with the same version were
coming back with zeros. To throw things off even further, other nodes with
the same version did not even show up on the list.

I too did not have a problem till I upgraded my server; at last count that
was ONE computer. So how is it that I now have to upgrade ALL 160+ clients,
plus 1 server, to fix it?


Geoff Gill
TSM Administrator
NT Systems Support Engineer
SAIC
E-Mail:   [EMAIL PROTECTED]
Phone:  (858) 826-4062
Pager:   (877) 905-7154



Re: Firmware upgrade question

2003-02-11 Thread Seay, Paul
None of these.  Install the latest one (D0IF_295), which fixes some nasty
problems, especially if you are using ES-1000s.  D0IF_26E regresses a fix
that the previous firmware had for ES-1000s.  There are also a lot of fixes
related to bad tapes.  There is a change to improve cleaning cycle handling
and to FINALLY handle the LOCATING error when a tape is damaged at the
beginning, especially new ones.  We are putting this on in a few days.  It
is the generally available release of a patch level we needed to
reintroduce the fix for ES-1000s.

Paul D. Seay, Jr.
Technical Specialist
Northrop Grumman Information Technology
757-688-8180


-Original Message-
From: Jager Frederic (BIL) [mailto:[EMAIL PROTECTED]] 
Sent: Saturday, February 08, 2003 4:04 AM
To: [EMAIL PROTECTED]
Subject: Firmware upgrade question


Hi,

Having a shared 3494 library with 11 3590E1A drives (7 SCSI + 4 fibre), we
were told by the CE to upgrade the firmware. We currently use DOIE_350 on
all drives. What would you recommend to install as a mandatory firmware
level (DOIF_26E, DOIA_572 or DOIB_91D) ?

NB : Library manager code level :526.05

Thanks for your answer,
Regards,
---
JAGER Frédéric



-

An electronic message is not binding on its sender.  
Any message referring to a binding engagement must be confirmed in writing
and duly signed.
-



Re: Occupancy from copy_pool > primary_pool ??

2003-02-10 Thread Seay, Paul
I think any aggregates that have deleted files in them are still counted as
whole.  So, if reclamation ever runs against the primary and any copy
pools, it is highly likely that some aggregates would be reclaimed in one
pool and not in the others.  The other possibility is that you have more
than one primary pool that has had a backup stgpool command run to the copy
pool.

Paul D. Seay, Jr.
Technical Specialist
Northrop Grumman Information Technology
757-688-8180


-Original Message-
From: PAC Brion Arnaud [mailto:[EMAIL PROTECTED]]
Sent: Monday, February 10, 2003 12:25 PM
To: [EMAIL PROTECTED]
Subject: Occupancy from copy_pool > primary_pool ??


Hi* SM'ers,

Could anybody explain to me how it is possible that the following query,
"select sum(physical_mb), sum(logical_mb), stgpool_name from occupancy
group by stgpool_name", shows me copy-pool occupancy bigger than the
primary pools?  I first thought it could be related to the space
reclamation process, but when I ran the query all reclamation processes
were already finished (same thresholds for both primary and copy pools), so
I'm stuck :-(  Anybody willing to shed some light on me?  Thanks in
advance.

Arnaud

=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
| Arnaud Brion, Panalpina Management Ltd., IT Group |
| Viaduktstrasse 42, P.O. Box, 4002 Basel - Switzerland |
| Phone: +41 61 226 19 78 / Fax: +41 61 226 17 01   |
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=



Re: tape cartridges with same bar codes

2003-02-09 Thread Seay, Paul
I suspect that you cannot even check both tapes into the library, though I
have never done it.  This is why: when you check a volume in as scratch, it
checks the volumes table to make sure the volume is not in the table.  This
is to protect the volume contents from being destroyed (referential
integrity).  Before or after this check (I do not know which), the checkin
(private or scratch) is going to fail because one of the indexes on the
libvolumes table is volume_name.  A duplicate volume name will occur and
the tape checkin will fail.

This is the beauty of TSM's referential integrity.  It protects us from
ourselves.

I have not actually done this, but I have a US dollar that says the second
checkin/label command will fail.

You will be able to insert both tapes into both libraries so long as you are
not doing the auto checkin/label function of V5.1.

Remember once a tape is in the volumes table it has a device class
associated with it.  So, you cannot simply move a tape from one library to
another.  There is no way to update the device class field in the stgpool
table.  This is a feature I am sure many customers would like to have when
they are balancing the usage of multiple libraries.

Paul D. Seay, Jr.
Technical Specialist
Northrop Grumman Information Technology
757-688-8180


-Original Message-
From: Ki-Hwan Kwon [mailto:[EMAIL PROTECTED]]
Sent: Sunday, February 09, 2003 7:55 PM
To: [EMAIL PROTECTED]
Subject: tape cartridges with same bar codes


Hello,
To simplify my question, assume that there are two tape cartridges with the
same bar codes.  The tapes are extended high performance tapes.  We have
two IBM 3494 tape libraries (I call them 3494A and 3494B).  If I put one
tape into 3494A and the other tape into 3494B, will this cause a problem
for the ADSM server due to the same bar codes?  I have ADSM installed on an
RS/6000 unix server which manages the two libraries.  I am a newbie in ADSM
and someone says this will cause a problem, but I don't understand why.
Since they are in physically different libraries, there should be a way to
distinguish the two tapes with the same bar codes.  Any comments will be
much appreciated.  Thanks.



Re: Three copies of backup data

2003-02-07 Thread Seay, Paul
In V5.1 and up you can specify linked copy pools and actually write to the
primary pool and the copy pools at the same time, but you have to have 3
tape drives.
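As a sketch of the V5.1 syntax, with placeholder pool names:

   update stgpool BACKUPPOOL copystgpools=OFFSITE1,OFFSITE2 copycontinue=yes

The simultaneous write happens during the client backup, which is why you
need a drive for the primary pool plus one per copy pool.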

Paul D. Seay, Jr.
Technical Specialist
Northrop Grumman Information Technology
757-688-8180


-Original Message-
From: Stapleton, Mark [mailto:[EMAIL PROTECTED]]
Sent: Friday, February 07, 2003 11:00 AM
To: [EMAIL PROTECTED]
Subject: Re: Three copies of backup data


From: Mario Behring [mailto:[EMAIL PROTECTED]]
> I have a TSM Server running on a W2K Server machine. The tape
> library is a IBM 3590, non automated, with two drives. The
> drives are connected to the TSM Server through SCSI
> interfaces (one interface for each drive).
>
> What I need is this: I want to do each incremental backup on
> three different tapes, so I can store these tapes in
> different places. The tapes contents should be identical.

Create a primary tape storage pool (call it primtape1) and two copy storage
pools (call them copytape1 and copytape2). After backups complete, run

backup stg primtape1 copytape1
backup stg primtape1 copytape2
upd stg diskpool hi=0 lo=0

and when diskpool-->primtape1 migration is complete:

upd stg diskpool hi=90 lo=70 (or whatever your diskpool migration
levels are)

(This assumes, of course, that you're using a disk cache to catch client
backups.)

Contents of all three tapes may not be absolutely identical, but they'll be
pretty damn close. (If you're looking to clone the contents of one tape to
another tape via an OS utility like tcopy or dd, you may or may not get a
usable tape.)

--
Mark Stapleton ([EMAIL PROTECTED])



Re: summary-table

2003-02-07 Thread Seay, Paul
Tivoli's definition of compatibility apparently does not cover features
that are not directly related to the integrity of the system.
Unfortunately, this stuff has been broken since about 4.2.1.11.  And for
the most part, the summary table contents are not necessary for any TSM
externals to work properly.  Customers use it for reporting via SQL.

After about 6 months of trying to figure out the cause of the lost data,
Tivoli Development was about to go crazy.  Luckily, some reports that I was
running triggered a thought: the problem occurred when the client session
timed out during a backup that ran over the session timeout length.  At the
end of the backup the client was reforming the session to record the
statistics, but the information was delivered to the wrong session and
everything was getting zeroed out.  Fixing the problem required changes to
both the client and the server.  The database even had to be updated to
provide some additional capability, based on what I heard.

Yes, it is a really nasty problem.  How did it get introduced?  I am
guessing they had to rearchitect some things to fix some serious SAN client
issues, and during the process the designs of the client and the server
became incompatible.

Tivoli was very thankful that I was able to point them in the right
direction.  But, as you say, I doubt sessions longer than the session
timeout were tested significantly in their regression testing, and I doubt
they did any summary table data verification, because it is not critical to
TSM operation nor to most of the commands.  It is a customer thing.

There were a lot bigger fish to fry than fixing summary table issues.  The
only reason Tivoli put their heads together and made it a priority was when
I explained that this had broken many customer billing systems for TSM
usage.  That was a real financial integrity issue for customers.  That got
their attention.

Now that Tivoli knows this is a critical compatibility item, I hope they
will actually test that it works correctly from release to release and that
client and server remain compatible.  There are no guarantees.

Paul D. Seay, Jr.
Technical Specialist
Northrop Grumman Information Technology
757-688-8180


-Original Message-
From: Allen Barth [mailto:[EMAIL PROTECTED]]
Sent: Friday, February 07, 2003 4:48 PM
To: [EMAIL PROTECTED]
Subject: Re: summary-table


It's stuff like this that makes me shake my head in disbelief...

The summary table was working fine with ALL my clients, regardless of
version, when the server was 4.1.5.  After upgrading ONLY the server to
5.1.5, the byte counts in the summary table are zeroes.

Hmmm, compatibility tested?  I think not!   If so, where are the notices
and gotchas in the readmes?

--
Al

Matt Simpson <[EMAIL PROTECTED]>
Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]> 02/07/03 03:05 PM
Please respond to "ADSM: Dist Stor Manager"


To: [EMAIL PROTECTED]
cc:
Subject:    Re: summary-table


At 3:19 PM -0500 2/7/03, Seay, Paul wrote:
>Apparently, the client needs to be updated as well to get a complete
>fix.  I do not know if there is a client level for 4.2.3.2 and higher
>that completely eliminates the problem.

According to the info I got from TSM support, for what I think is the same
problem:

>The APARs that describe this are IC33840 for the client and IC34693 for
>the TSM server. Both a client fix and a server fix need to be applied.
>The client fixtests are 5.1.5.2 and 4.2.3.1.
>Go to
>ftp://service.boulder.ibm.com/storage/tivoli-storage-management/patches/client/v4r2/
>and then platform and the v423 directory or folder.
>For the V4.2 Win98/Me platform only, the fixtest is 4.2.3.2.
>These fixtests must be used in conjunction with the server fix.


I haven't tried to apply any of the indicated fixes.


--


Matt Simpson --  OS/390 Support
219 McVey Hall  -- (859) 257-2900 x300
University Of Kentucky, Lexington, KY 40506 <mailto:[EMAIL PROTECTED]>
mainframe --   An obsolete device still used by thousands of obsolete
companies serving billions of obsolete customers and making huge obsolete
profits for their obsolete shareholders.  And this year's run twice as fast
as last year's.



Re: summary-table

2003-02-07 Thread Seay, Paul
I believe this is more than a server issue.  I have been assured it is fixed
in 5.1.1.2 and higher, but I have not installed it yet.  Apparently, the
client needs to be updated as well to get a complete fix.  I do not know if
there is a client level for 4.2.3.2 and higher that completely eliminates
the problem.

Paul D. Seay, Jr.
Technical Specialist
Northrop Grumman Information Technology
757-688-8180


-Original Message-
From: Christoph Pilgram
[mailto:[EMAIL PROTECTED]]
Sent: Friday, February 07, 2003 5:55 AM
To: [EMAIL PROTECTED]
Subject: summary-table


Hi all,

since installing the 4.2.3.2 level (after 4.1.4) on my AIX server (AIX
4.3.3), I have the problem that in the summary table the field "BYTES" for
a backup session of NT and W2K clients is filled with "0" even though the
client has transferred gigabytes of data. Restore sessions of the same
client show a correct number of bytes in that field. Unix clients show
(from what I have seen) correct entries in that field.

Thanks for help

Chris



Re: Move nodedata (removing a filespace from a copy pool)

2003-02-06 Thread Seay, Paul
Yes, but it is ugly.

Move the filespace to a new primary pool, temporarily or permanently if you
like.

Create a new copy storage pool and run a backup storage pool command
against the original primary pool.

Then delete the old copy storage pool (you have to delete all the volumes
in that copy storage pool).

Rename the new copy storage pool to the old copy storage pool name if you
like.

Done.
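A rough command sketch of the above; all names are placeholders, and the
delete has to be repeated for each volume in the old copy pool:

   move nodedata NODE1 fromstgpool=TAPEPOOL tostgpool=TEMPPOOL
   define stgpool NEWCOPY TAPECLASS pooltype=copy maxscratch=100
   backup stgpool TAPEPOOL NEWCOPY
   delete volume OLDCOPY001 discarddata=yes
   delete stgpool OLDCOPY
   rename stgpool NEWCOPY OLDCOPY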
 
I would like move nodedata to work on copy storage pools, but the way the
bitfile objects are architected, this would be very difficult to provide.

What we need is some functionality to set up permanent backup storage pool
excludes, and a delete filespace xxx copypool= 

The excludes would prevent stuff we do not need copied from being copied to
a copy pool.  The delete would allow us to get rid of what we do not want
to keep.

Paul D. Seay, Jr.
Technical Specialist
Northrop Grumman Information Technology
757-688-8180


-Original Message-
From: Kamp, Bruce [mailto:[EMAIL PROTECTED]] 
Sent: Tuesday, February 04, 2003 2:11 PM
To: [EMAIL PROTECTED]
Subject: Re: Move nodedata


Is there anyway to delete a file space from a copy storage pool without
deleting it from all storage pools?

--
Bruce Kamp
Midrange Systems Analyst II
Memorial Healthcare System
E: [EMAIL PROTECTED]
P: (954) 987-2020 x4597
F: (954) 985-1404
---


-Original Message-
From: Daniel Sparrman [mailto:[EMAIL PROTECTED]] 
Sent: Monday, February 03, 2003 7:34 AM
To: [EMAIL PROTECTED]
Subject: Re: Move nodedata


Or, you can simply do a new backup stgpool to your new copy storage pool.

Best Regards

Daniel Sparrman
---
Daniel Sparrman
Exist i Stockholm AB
Propellervägen 6B
183 62 HÄGERNÄS
Växel: 08 - 754 98 00
Mobil: 070 - 399 27 51

"Kamp, Bruce" <[EMAIL PROTECTED]>
Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]> 2003-02-03 13:03
Please respond to "ADSM: Dist Stor Manager"
 
To: [EMAIL PROTECTED]
cc: 
Subject:Move nodedata


I need to move a couple of nodes from one offsite copy pool to another
offsite copy pool.  I tried using the move nodedata command, but it gives
me the following error:
ANR1719E Storage pool C1_FS_DRMP specified on the MOVE NODEDATA command is
not a valid pool name or pool type.
ANS8001I Return code 3.

This is the command I was using:  move nodedata tsmserv from=drmpool
to=C1_FS_DRMP TYPE=any

From reading the help, is my understanding correct that I cannot do this
with move nodedata?
If not, will this work?
1.  Bind nodes to new management class.
2.  Use move nodedata command to move onsite data to new storage pool.
3.  Use backup stgpool from new onsite storage pool to new offsite copy
storage pool.
4.  This is the step I'm not sure about!  How do I remove the data from the
old offsite storage pool?

Thanks,
--
Bruce Kamp
Midrange Systems Analyst II
Memorial Healthcare System
E: [EMAIL PROTECTED] 
P: (954) 987-2020 x4597
F: (954) 985-1404
---



Re: Domino Backup

2003-02-06 Thread Seay, Paul
I recommend TDP for Mail.

Paul D. Seay, Jr.
Technical Specialist
Northrop Grumman Information Technology
757-688-8180


-Original Message-
From: Amini, Mehdi [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, February 04, 2003 9:17 AM
To: [EMAIL PROTECTED]
Subject: Domino Backup


We just got a Domino server installed on a Win2K server.  What is a good
backup practice?  Can we just do a selective backup daily, and will it be
enough for recovery?

Thanks

Mehdi Amini
LAN/WAN Engineer
ValueOptions
12369 Sunrise Valley Drive
Suite C
Reston, VA 20191
Phone: 703-390-6855
Fax: 703-390-2581



**
This email and any files transmitted with it are confidential and intended
solely for the use of the individual or entity to whom they are addressed.
If you have received this email in error please notify the sender by email,
delete and destroy this message and its attachments.


**



Re: Sun Solaris server backups not ending

2003-02-06 Thread Seay, Paul
Your ethernet adapter in the Sun machine probably took the default of AUTO
and is continually trying to negotiate with the switch.  Change it to full-
or half-duplex as appropriate and the problem will go away.  I have seen
this time and time again.
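For example, on Solaris with an hme interface (interface name and speed are
assumptions; many shops make the setting permanent in /etc/system instead):

   ndd -set /dev/hme adv_autoneg_cap 0
   ndd -set /dev/hme adv_100fdx_cap 1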

Paul D. Seay, Jr.
Technical Specialist
Northrop Grumman Information Technology
757-688-8180


-Original Message-
From: Joni Moyer [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, February 05, 2003 1:38 PM
To: [EMAIL PROTECTED]
Subject: Sun Solaris server backups not ending


Hello again!

I have a Sun server, client version 4.2.1, client OS level 5.8, on TSM
4.1.3 OS/390.  My problem is that for a regularly scheduled client backup,
the sessions start, but they just sit in IdleW/RecvW status.  It is just
backing up about 3 GB of data, but it is taking hours to complete when it
usually ends in 15 minutes.  I checked to make sure that the disk pool
wasn't full, and nothing else seemed out of the ordinary.  Has anyone had
this problem?  Any suggestions on what to look at?  Thank you!

Joni Moyer
Systems Programmer
[EMAIL PROTECTED]
(717)975-8338



Re: backint(tdp) restore (brrestore) process

2003-02-06 Thread Seay, Paul
It processes the files in the order of the .anf file.  As it turns out,
they are recorded on the tape in that order.  TDP for SAP will rewind and
dismount the tape if you do not have a mount point hold time on the device
class.  So, if you have it set to zero, you will need to increase it to
maybe 2 minutes.  We use 5.  When running multiple sessions (multiple tape
drives), it gets more fuzzy.  I can discuss this offline with you.  And SAN
storage agents create another wicket.
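As a sketch, with a placeholder device class name (the value is in
minutes):

   update devclass 3590CLASS mountretention=5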

Paul D. Seay, Jr.
Technical Specialist
Northrop Grumman Information Technology
757-688-8180


-Original Message-
From: Van Ruler, Ruud R SITI-ITDGE41 [mailto:[EMAIL PROTECTED]]
Sent: Thursday, February 06, 2003 6:36 AM
To: [EMAIL PROTECTED]
Subject: backint(tdp) restore (brrestore) process


Hi

I want to get a grip on this brrestore process!!
How does it actually work?  What's the logic behind this process?
Does it use the relevant *.anf file and restore the data files in
sequential order (from top to bottom), or ...?  Does it unmount and mount
the tape for each single data file?  What if the next data file is on a
different tape?  What if you have multiple restore sessions?  Can't the
process behave in such a way that it restores all the relevant files from
one tape and then moves on to the next tape?


Ruud van Ruler, Shell Information Technology International B.V.   - DSES-7
SAP R/3 Alliance Technical Support
ITDSES-6 Technical Information and Links page: http://pat0006/shell.htm Room
1A/G03 Dokter van Zeelandstraat 1, 2285 BD Leidschendam NL Tel : +31 (0)70 -
3034644, Fax 4011, Mobile +31 (0)6-55127646

Email Internet: [EMAIL PROTECTED]
[EMAIL PROTECTED]



Re: TSM Question

2003-02-04 Thread Seay, Paul
This is a similar approach to what we use to capture a set of Exchange data.


We determine the primary volumes that the data is on that we need.  Move
those volumes to a new primary pool.  Copy that primary pool to a copy pool,
database snapshot, eject the copy pool tapes, delete the copy pool volumes,
move the primary volumes back to their original location.

Occasionally, we have a business requirement to do this.
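
As a rough sketch of that sequence (pool and volume names are illustrative):

/* isolate the data in its own primary pool */
move data VOL001 stgpool=TAPE_EXCH_SNAP
/* copy it and capture the database state */
backup stgpool TAPE_EXCH_SNAP CPY_EXCH_SNAP
backup db devclass=3590CLASS type=dbsnapshot
/* eject the copy pool tapes, then drop the copy references */
delete volume VOL101 discarddata=yes
/* move the data back to its original primary pool */
move data VOL002 stgpool=TAPE_EXCHANGE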



Paul D. Seay, Jr.
Technical Specialist
Northrop Grumman Information Technology
757-688-8180


-Original Message-
From: Steve Harris [mailto:[EMAIL PROTECTED]] 
Sent: Monday, February 03, 2003 6:17 PM
To: [EMAIL PROTECTED]
Subject: Re: TSM Question


Hi Jose

I did some analysis on this for monthly backups and discovered that a
monthly archive and a monthly incremental worked out about the same in our
environment but that the monthly archive was easier to implement.  Our
requirement was just a point-in-time each month with one month each year
kept "forever".

In your case, if it is a real requirement that every backup be kept:
create a new copypool and extend your retention to 210 days; every 180 days,
backup all primary pools to your new copy pool, take a database snapshot,
and ship it and all your new copypool tapes to storage for 4 ½ years; then
delete the volumes in the new copypool.
The other thing that you might like to do is run q content on each of the
volumes and drop it into a database somewhere, so that you can tell if you
have a particular file and which tapes are required to get it back without
huge amounts of effort.
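
For the volume inventory step, a per-volume select against the CONTENTS
table is one way to do it (volume name illustrative; this can run a long
time on full volumes):

select volume_name, node_name, filespace_name, file_name from contents
where volume_name='VOL123'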

HTH

Steve Harris
AIX and TSM Admin
Queensland Health, Brisbane Australia

>>> [EMAIL PROTECTED] 04/02/2003 0:21:21 >>>
Hi - Wondering if I can get ideas or suggestions

We need to retain all backups for a 5 year period. I currently have a
retention period for backups of  180Days.  We are debating the scenarios -
Wondering if I can poll the list to get possible suggestions or ideas for
handling this. Some things that we have been discussing are

1) extend the retention period to 5 years and set nolimit to versions
created.. deleted etc
2) create separate mgmtclass for once a month backups with retention period
of 5 years
3) generate backupsets once a month and retain for 5 years
4) create separate TSM instance to provide 5 year retention
5) Using archive with 5 year retention

Would appreciate any ideas and potential ramifications

I currently have 180 unix and intel clients, storing 15TB onsite and 15TB
offsite. My db is approx 55GB

I am using TSM 5.1.5.4 on an IBM RS/6000 (H80 with 6cpus and 8GBs of ram)
running AIX 5.1ML03. Attached are 2 x3584 with 8 LTO fibre drives for
backups. For archive I use 3494 library (L12 and D12 with 4x3590H1a drives).



Thanks
Jose Rivera
Unix Technical Consultant





*
This message and any attachments is solely for the intended recipient. If
you are not the intended recipient, disclosure, copying, use or distribution
of the information included in this message is prohibited -- Please
immediately and permanently delete.



**
This e-mail, including any attachments sent with it, is confidential 
and for the sole use of the intended recipient(s). This confidentiality 
is not waived or lost if you receive it and you are not the intended 
recipient(s), or if it is transmitted/ received in error.  

Any unauthorised use, alteration, disclosure, distribution or review 
of this e-mail is prohibited.  It may be subject to a statutory duty of 
confidentiality if it relates to health service matters.

If you are not the intended recipient(s), or if you have received this 
e-mail in error, you are asked to immediately notify the sender by 
telephone or by return e-mail.  You should also delete this e-mail 
message and destroy any hard copies produced.
**



Re: Restoring data from Copypool tape

2003-02-04 Thread Seay, Paul
You have to do a CHECKIN LIBVOLUME command with the parameters particular to
your library.  The tape will automatically come back as status=private, even
if you specify status=scratch, because the volumes are still known to have
good data (if that is really the case).
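
For example (library and volume names illustrative; parameters vary by
library type):

checkin libvolume 3494LIB VOL123 status=private devtype=3590
/* or, for a SCSI library with an entry/exit port: */
checkin libvolume SCSILIB search=bulk checklabel=barcode status=private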

Now, you are requesting these for a reason.  I hope the reason is the
primary tapes are in a destroyed state and you need to do a restore volume
command for the primary volume(s).  Or, you have marked the primary volumes
as destroyed because they are, and the restore volume command or client
restore command you are trying to execute is looking for the copypool
volumes.

If you deleted the filespaces, the data is no longer referenced in the TSM
database.  The only recourse you have is to backup your current TSM database
and restore a previous TSM database backup that still knows about the
copypool volumes.  Then restore the saved copy of the TSM database before
you started the operation.

DO NOT DO A DELETE VOLUME x DISCARDDATA=YES.  That is a big hammer that
will wipe all the references to the data on the copy pool and primary
volumes.  The proper action is either do a RESTORE VOLUME of the primary
volume that is destroyed or mark it destroyed and restore from the copy
volumes as described above.  The RESTORE VOLUME basically rebuilds your
primary pool back to what it was before.  It will put the data on other
tapes in the primary pool.
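
A minimal sketch of that path (volume name illustrative):

update volume VOL001 access=destroyed
restore volume VOL001 preview=yes   /* lists the copy pool tapes needed */
restore volume VOL001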

Paul D. Seay, Jr.
Technical Specialist
Northrop Grumman Information Technology
757-688-8180


-Original Message-
From: Hope Zaleski [mailto:[EMAIL PROTECTED]]
Sent: Monday, February 03, 2003 8:12 PM
To: [EMAIL PROTECTED]
Subject: Restoring data from Copypool tape


I am trying to restore data that is located on Two copypool tapes that are
offsite. How do I put the tapes into the library with read access to restore
data?




Hope Zaleski
Network Assistant/Faculty-Staff Support Coordinator
Carthage College
Kenosha Wisconsin
262-551-5748



Re: Tape status

2003-02-03 Thread Seay, Paul
The typical cause is the tapes were not labeled.  You checked them in, but
you should have done a LABEL LIBVOLUME .. CHECKIN=Scratch.  Just check
them back out and do the LABEL command.

This is the normal action TSM takes to get a bad tape marked for no reuse
when it is  not labeled.
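
Something like this (library name illustrative):

checkout libvolume LTOLIB VOL001 remove=yes checklabel=no
label libvolume LTOLIB search=yes labelsource=barcode checkin=scratch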

Paul D. Seay, Jr.
Technical Specialist
Northrop Grumman Information Technology
757-688-8180


-Original Message-
From: Bengani, Thabani [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, January 29, 2003 5:24 AM
To: [EMAIL PROTECTED]
Subject: Tape status


Hi Tsmers,

I have a problem with my tapes in the LTO 3583 library. I checked in three
scratch tapes for my weekly backup schedule and they are now marked private.
The problem is, there is no data in these tapes they are empty. Anyone who
has had this before or know the solution to it.

Thank you in advance.

Thabani Bengani
Property & Asset Finance - IT
Nedbank Corporate
Ext. 364 2113
Cell. 0825716778
[EMAIL PROTECTED]




The contents of this email message and any attachments relating to the
official business of Nedbank Limited ("Nedbank") are proprietary to Nedbank.
They are confidential, legally privileged and protected by law. Nedbank does
not own nor endorse any other contents. Views and opinions are those of the
sender and do not represent Nedbank's views and opinions nor constitute any
commitment by or obligation on Nedbank unless otherwise stated or agreed to
in writing by Nedbank.

The person addressed in this email message is the sole authorised recipient.
Please notify the sender immediately if it has unintentionally reached you
and do not read, disclose or use the contents in any way.

Nedbank cannot assure that the integrity of this communication has been
maintained, nor that it is free of errors, viruses, interception or
interference. Nedbank, therefore, does not accept liability or legal
responsibility for the contents of this email message or its effect on
electronic devices.



Re: Image backup CPU time.

2003-02-02 Thread Seay, Paul
This is about what I would expect.  Even though you are using the SAN agent,
you could still be using the IP stack internal to the machine to get the
data from the client to the SAN agent.  And, if this is TDP for Exchange or
TDP for SQL, there is even more CPU overhead.  Without more information, I
can not analyze what could be driving up the CPU.  The other piece is the
type of disk technology you have attached and are you using client
compression.  I would not be surprised if you get no improvement by adding
the additional CPUs.  If your average was 90 to 100, then yes, but likely
not for 70 to 90.

Paul D. Seay, Jr.
Technical Specialist
Northrop Grumman Information Technology
757-688-8180


-Original Message-
From: Martinez, Matt [mailto:[EMAIL PROTECTED]]
Sent: Friday, January 31, 2003 10:35 AM
To: [EMAIL PROTECTED]
Subject: Image backup CPU time.


First some stats about the servers involved

TSM Server- Windows 2000 TSM Server-Ver 5.1.1
TSM Client Windows 2K- 2CPU's@700MHz each 1Gig of RAM TSM Client
Version-5.1.1 with SAN Agent.

When I do an image snapshot backup on the client the CPU Averages around
70-90%. Is this normal? Will adding 2 more CPU Alleviate this problem and
improve performance. I am using 2 LTO Drives to do this backup and I am
averaging around 32MB/sec. Any help will be appreciated.



Thank You,
Matt Martinez
NT System Administrator
IDEXX Laboratories, Inc
207-856-0656
Fax:207-856-8320
[EMAIL PROTECTED]



Re: NTFS Permission

2003-02-02 Thread Seay, Paul
There were some permission restore issues at this level; I do not remember
what they were.  This level is not going to be supported after 4/15.

Paul D. Seay, Jr.
Technical Specialist
Northrop Grumman Information Technology
757-688-8180


-Original Message-
From: Edgardo Moso [mailto:[EMAIL PROTECTED]]
Sent: Friday, January 31, 2003 8:42 AM
To: [EMAIL PROTECTED]
Subject: NTFS Permission


Does anybody experience a problem restoring the NTFS permissions on the C:\
drive of a Windows 2000 server?

We did test the restore of files to a different directory on the C: drive
and to an F: drive.  On the C: drive it's unsuccessful.  There is no problem
on the F: drive.

By the way, my TSM client is 4.2.1 and my TSM Server is also at 4.2.1.


Thanks,

Ed Moso
Sr. System Programmer
IS-Storage Management
Kindred Health Care, Inc.
Louisville, Ky



Re: 3590 Partitioning

2003-02-01 Thread Seay, Paul
This is an OS support and boot issue.  HP has been slow to support open SAN.
This ability has only been available on AIX for a little while.

Paul D. Seay, Jr.
Technical Specialist
Northrop Grumman Information Technology
757-688-8180


-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, January 28, 2003 9:50 AM
To: [EMAIL PROTECTED]
Subject: Re: 3590 Partitioning


Is it even possible to boot from a fibre-attached tape drive? At DR tests
with HP-UX servers, it wasn't possible to use the Ignite tape in a
fibre-attached tape drive. A SCSI-attached drive was required to boot from.

Kurt

>-Original Message-
>From: Steve Harris [mailto:[EMAIL PROTECTED]]
>Sent: Monday, January 27, 2003 6:39 PM
>To: [EMAIL PROTECTED]
>Subject: 3590 Partitioning
>
>
>HI All,
>
>This is  a 3590 question  rather than TSM as such, but this is the best
>forum for it.
>
>
>I need to take system images of several AIX boxes each week to SAN
>Attached 3590E drives in my 3494. It seems like overkill to devote a
>whole 3590 tape to each image as they will only be a few gig each.
>
>I stumbled across some doc which implies that a 3590 tape can be
>partitioned into smaller segments which can then be used independently.
>(see items 36 and 38 on the tapeutil menu).  However, this is old doc,
>and I assume the feature is from the early days of 3590 when 10GB was
>an enormous amount of storage.
>
>Has anyone used this partitioning feature? in what circumstances? Are
>there any gotchas?
>
>Thanks
>
>Steve Harris
>AIX and TSM Admin
>Queensland Health, Brisbane Australia.
>
>
>
>
>**
>This e-mail, including any attachments sent with it, is confidential
>and for the sole use of the intended recipient(s). This confidentiality
>is not waived or lost if you receive it and you are not the intended
>recipient(s), or if it is transmitted/ received in error.
>
>Any unauthorised use, alteration, disclosure, distribution or review of
>this e-mail is prohibited.  It may be subject to a statutory duty of
>confidentiality if it relates to health service matters.
>
>If you are not the intended recipient(s), or if you have received this
>e-mail in error, you are asked to immediately notify the sender by
>telephone or by return e-mail.  You should also delete this e-mail
>message and destroy any hard copies produced.
>**
>



Re: management class question???

2003-02-01 Thread Seay, Paul
In the management class's backup copy group you set the destination storage
pool.  To run the first backup, just direct it to a management class whose
destination is the TAPE primary pool (the NEXTPOOL for the DISK pool).
After the first backup, switch back to the management class whose
destination is the DISK pool.

Backups are determined by the node/filespace in the backups table regardless
of the storage pool they reside on.  Remember, data on a server can go to
limitless storage pools (actually TSM does have a limit on the total number
of storage pools).  All you are doing here is directing the data to another
storage pool for the first backup.
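
As a sketch, with two management classes whose copy group destinations are
the tape pool and the disk pool (class names are illustrative), the client
options file would carry:

* first backup: bind everything to the class that lands on tape
include * TAPEMC
* after the first backup completes, switch to the disk-destined class
include * DISKMC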

Paul D. Seay, Jr.
Technical Specialist
Northrop Grumman Information Technology
757-688-8180


-Original Message-
From: Joni Moyer [mailto:[EMAIL PROTECTED]]
Sent: Friday, January 31, 2003 11:19 AM
To: [EMAIL PROTECTED]
Subject: management class question???


Hi everyone!

I am creating a new management class for a user that wants all files backed
up incrementally no matter if it is open or not.  I know that I have to use
the dynamic parameter for copy serialization.  On a normal day I want the
backup to go to the disk pool, but the first time this server backs up it
will back up the entire 202 GB server.  My problem is that the disk pool is
only 34 GB.  I want to direct the first backup directly to tape and all
future backups to disk.  How do I do this?  If I tell the user to first use
a MC that goes directly to tape and then use a MC that goes to disk for all
others, will it not try to do a backup of all files for the second
incremental backup since it is technically using a new MC class or does it
go by files that exist?  Thanks in advance for any help!!!

Joni Moyer
Systems Programmer
[EMAIL PROTECTED]
(717)975-8338



Re: VERY HIGH %SYS CPU

2003-01-28 Thread Seay, Paul
Check your default IP packet sizes and IP performance implementation.  The
more packets you have the more overhead to process them.  We do not have a
SUN TSM server, but we run SAN Storage agents and Clients on SUN.

Also, check your TDP for SAP implementation and make sure you are using the
configuration most optimal for your environment.

I would expect you have some artificial cause of this as mentioned below.

Paul D. Seay, Jr.
Technical Specialist
Northrop Grumman Information Technology
757-688-8180


-Original Message-
From: Paul Ripke [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, January 28, 2003 6:01 PM
To: [EMAIL PROTECTED]
Subject: Re: VERY HIGH %SYS CPU


On Wednesday, Jan 29, 2003, at 00:11 Australia/Sydney, Broderick, Sean
wrote:

> Hi,
>
> The CPU usage (%sys) is extremely high, like 80 - 90%, on the TSM
> server (Sun V880 running TSM v5.1.5) during the backup window.
> Particularly when
> TDP R3 clients are attempting their SAP backups via backint / brbackup
> and
> as such the throughput is extremely poor (<9MB/s direct to disk cache
> via
> gigabit network).

How are your disk stgpool volumes configured? Is the system paging? How much
RAM do you have? Any errors on your network interfaces? How fast does an ftp
run?

Since the vast majority of the work done by dsmserv during backups is I/O, a
high sys% CPU is to be expected. OTOH, our Sun TSM servers can hit 9 MB/s on
100 Mb ethernet, with much older hardware.

Cheers,
--
Paul Ripke
Unix/OpenVMS/DBA
101 reasons why you can't find your Sysadmin:
68: It's 9AM. He/She is not working that late.
-- Koos van den Hout



Re: 3494 and dsmserv restore db

2003-01-28 Thread Seay, Paul
If you have the 3494 mtlib code installed for Windows you can issue this
command and not even go to the library.  The command is:

mtlib -l [library] -m -x [device serial] -V [volume number]

After TSM is done with the tape it will unload it but not put it away.

mtlib -l [library] -d -x [device serial]

I do this all the time when a tape gets left in a drive for some unknown
reason.

Paul D. Seay, Jr.
Technical Specialist
Northrop Grumman Information Technology
757-688-8180


-Original Message-
From: Prather, Wanda [mailto:[EMAIL PROTECTED]]
Sent: Monday, January 27, 2003 3:34 PM
To: [EMAIL PROTECTED]
Subject: Re: 3494 and dsmserv restore db


As a matter of fact, I had to do this last Friday to restore a Windows TSM
server. I did exactly as described below in the support item Lloyd posted.

You edit a copy of your devconfig file as below.
Then when you run dsmserv restore db, the first thing you get is a MOUNT
message. Put your 3494 in PAUSE, open the door, throw the tape in the drive
requested in the MOUNT message.

After the restore is done, remember to take the tape OUT of the drive. Put
back your original devconfig file and restart the server.

It was much less painful than I expected!


-Original Message-
From: Lloyd Dieter [mailto:[EMAIL PROTECTED]]
Sent: Monday, January 27, 2003 2:38 PM
To: [EMAIL PROTECTED]
Subject: Re: 3494 and dsmserv restore db


Steve,

Attached (at the end) is a snippet from a howto on IBM's support site that
you may want to look over...

-Lloyd

On Mon, 27 Jan 2003 14:04:43 -0500
Steve Roder <[EMAIL PROTECTED]> wrote:

> Hi All,
>
>  According to the doc., dsmserv restore db only supports manual
> and scsi libraries.  Has anyone on this list restored a db via a 3590
> drive inside a 3494?  Since the 3494 is not libtype=scsi, I am
> thinking that I will have to put my 3494 into pause or manual mode,
> and fake TSM into thinking the 3590 is standalone, and then manually
> insert the dbbackup volume into the correct drive.
>
> Has anyone done this?
>
> Thanks,
>
> Steve Roder, University at Buffalo
> HOD Service Coordinator
> VM Systems Programmer
> UNIX Systems Administrator (Solaris and AIX)
> TSM/ADSM Administrator
> ([EMAIL PROTECTED] | (716)645-3564)
>

Problem
One may need to temporarily configure your 3494 library as manual in order
to perform a database restore. The following provides instruction on how to
change your devconfig file to do so.

Solution
First, make a copy of your devconfig.out file and rename it. Do this so that
after the DB restore completes you can just rename the file back to
devconfig.out without having to re-write anything.

Next, open the current devconfig.out file in a text editor. All but four
lines will need to be removed.

Below is an example of the edited devconfig file:

/* Device Configuration */
DEFINE DEVCLASS 3494CLASS DEVTYPE=3590 LIBRARY=3494LIB
SET SERVERNAME SERVERNAME
DEFINE LIBRARY 3494LIB LIBTYPE=manual
DEFINE DRIVE 3494LIB DRIVE1 DEVICE=/dev/rmt/tsm_drive1

For TSM Version 5.1 and higher, edit as follows:

/* Device Configuration */
DEFINE DEVCLASS 3494CLASS DEVTYPE=3590 LIBRARY=3494LIB
SET SERVERNAME SERVERNAME
DEFINE LIBRARY 3494LIB LIBTYPE=manual
DEFINE DRIVE 3494LIB DRIVE1
DEFINE PATH SERVERNAME DRIVE1 SRCType=server DESTType=drive LIBR=3494LIB
DEVICE=/dev/rmt/tsm_drive1 ONLINE=YES

Save the file and issue your restore syntax. One will then be prompted to
mount the volume. After acknowledging the mount, the restore should run to
completion.

When the restore has completed, discard the edited devconfig file and
replace it with the old one. Assuming the restore was successful, the server
will start up and reference the proper devconfig file.


--
-
Lloyd Dieter-   Senior Technology Consultant
 Registered Linux User 285528
   Synergy, Inc.   http://www.synergyinc.cc   [EMAIL PROTECTED]
 Main:585-389-1260fax:585-389-1267
-



Re: Backup fails when files fail

2003-01-27 Thread Seay, Paul
Consistent return codes was a Share requirement for years so that production
processes could actually be coded and have a determinable consistent result.
You may not like the implementation, but it is what was asked for.

Paul D. Seay, Jr.
Technical Specialist
Northrop Grumman Information Technology
757-688-8180


-Original Message-
From: Richard Sims [mailto:[EMAIL PROTECTED]]
Sent: Monday, January 27, 2003 8:02 PM
To: [EMAIL PROTECTED]
Subject: Re: Backup fails when files fail


>We have an HP-UX TSM client running 5.1.1.0 code. It connects to a
>4.2.3.3 server running under OS/390. A 'dsmc incremental' command on
>the HP-UX system failed with an exit status of 4, apparently because
>three files failed with ANS1228E and ANS4045E (file not found)
>messages. I remember reading about this kind of behavior in Version 4
>clients. Has Tivoli managed to resurrect this bug in Version 5?

Give IBM (formerly Tivoli) some credit here...  Customers have been
clamoring for years for useful return codes, and this is an example of how
they can be useful.  Acting on the return codes is optional, after all.

  Richard Sims, BU



Re: Scalar 24 remotely

2003-01-27 Thread Seay, Paul
With TSM V5.1 and below a drive has to be defined, visible, and online to
the TSM server and the SAN Storage Agent.  Essentially, the SAN Storage
Agent is a cut down TSM server code set that does its database I/O remotely.
You have another option, install the TSM server code where you were going to
install the SAN agent.  Consider one important item, even if TSM did support
what you wanted.  The link over the IP potentially has a lot of meta data to
move back and forth.  With a single thread SAN like solution, this may not
be the best choice.

TSM V5.2 will likely have improvements in SAN attachment configuration
operations to address drives being visible to the TSM server and all SAN
clients, but there is nothing announced.  So, we will have to see.

The library probably supports LAN-Free connections by using a SCSI address
to talk to the library.

Paul D. Seay, Jr.
Technical Specialist
Northrop Grumman Information Technology
757-688-8180


-Original Message-
From: Kai Hintze [mailto:[EMAIL PROTECTED]]
Sent: Monday, January 27, 2003 6:33 PM
To: [EMAIL PROTECTED]
Subject: Scalar 24 remotely


1) Can you run LAN-free when your SAN and tape drives are in one location
and your TSM server is in another location?

We have our main data center with the big TSM server in Boise, Idaho. We
have a smaller data center with a small SAN in Phoenix, AZ, about 2000km
away. The Phoenix data center has its own SAN that is just a little too
large to back up across the WAN. We're wondering if we can avoid having to
buy a separate TSM server and just put a small library in Phoenix that would
run LAN-free, with IP robot controller that talks to the TSM server in
Boise. Conceptually, can it work?

2) What do you know about/think of Scalar 24 libraries, specifically for
LAN-free work?

The small library we are looking at is a Scalar 24 from Adic. I've never
actually seen one, but the brochures talk about LAN-free connections. If the
remote use concept is possible, will this library work?

Thanks,
Kai.



Re: Some information...

2003-01-23 Thread Seay, Paul
1. No
2. No
3. Yes

Paul D. Seay, Jr.
Technical Specialist
Naptheon Inc.
757-688-8180


-Original Message-
From: Nicolas Savva [mailto:[EMAIL PROTECTED]]
Sent: Thursday, January 23, 2003 6:54 AM
To: [EMAIL PROTECTED]
Subject: Some information...


Hi,

I need some help for the following:

1. Is TSM capable to perform System Backup on AIX box?
2. Is TSM capable to perform Volume Group Backup?
3. Is TSM capable to perform File System Backup?




Nicolas Savva
Assistant System Specialist
Information Services
Laiki Group
11 Arch Makarios Ave.
CY-1065 Nicosia
CYPRUS
Tel  +357 22812473   Fax +357 22812583
Email: [EMAIL PROTECTED] Web Site:  http://www.laiki.com


Privileged/Confidential information may be contained in this message and may
be subject to legal privilege. Access to this e-mail by anyone other than
the intended recipient is unauthorised. If you are not the intended
recipient (or responsible for delivery of the message to such person),  you
may not use, copy, distribute or deliver to anyone this message (or any part
of its contents) or take any action in reliance on it. In such case, you
should destroy this message, and notify us immediately.

If you have received this email in error, please notify us immediately by
e-mail or telephone and delete the e-mail from any computer. If you or your
employer does not consent to internet e-mail messages of this kind, please
notify us immediately.

All reasonable precautions have been taken to ensure no viruses are present
in this e-mail. As we cannot accept responsibility for any loss or damage
arising from the use of this e-mail or attachments we recommend that you
subject these to your virus checking procedures prior to use.

The views, opinions, conclusions and other information expressed in this
electronic mail are not given or endorsed by Laiki Group unless otherwise
indicated by an authorised representative independent of this message.





Re: Tape Requests Never Mounting

2003-01-22 Thread Seay, Paul
I hope this fixed it for you.  You may want to open a problem record just to
be sure there was a fix included in 5.1.6 for LINUX tape drivers.

Paul D. Seay, Jr.
Technical Specialist
Naptheon Inc.
757-688-8180


-Original Message-
From: Mitch Sako [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, January 22, 2003 12:58 PM
To: [EMAIL PROTECTED]
Subject: Re: Tape Requests Never Mounting


I can't rule out a H/W problem, but...I did upgrade to 5.1.6 and the problem
went away.  How solid is the tsmscsi stuff in 5.1.5 for Linux?

Mitch

"Seay, Paul" wrote:

> You likely have a marginal SCSI cabling problem, missing terminator,
> tape driver or something like this.  This is probably not a TSM issue.
>
> Paul D. Seay, Jr.
> Technical Specialist
> Naptheon Inc.
> 757-688-8180
>
> -Original Message-
> From: Mitch Sako [mailto:[EMAIL PROTECTED]]
> Sent: Tuesday, January 21, 2003 9:39 PM
> To: [EMAIL PROTECTED]
> Subject: Tape Requests Never Mounting
>
> I have a new 5.1.5 test server running on Linux that puts up mount
> requests for migrations to a set of 4mm manual drives and the server
> is never acknowledging that the tape is mounted and therefore never
> starts the migration.  I noticed that when I was doing manual labeling
> that the first few requests were not being serviced but they magically
> started acknowledging the tape in the drive and the labeling process
> continued as normal.
>
> I've checked the device class, path, drive and all of the other
> relevant things and they all look clean.  Proof of this to me is that
> the labels were put on the tapes just fine.  Now, when I test
> migration the drives never acknowledge the tapes.
>
> I'm thinking that there is some switch or setting that I need to
> toggle to make this work.
>
> Any ideas?
>
> -ms



Re: TSM offsite backups

2003-01-22 Thread Seay, Paul
This is exactly what we do.

Basically, you run two backup storage pool commands for each primary pool to
two different copy pools.  Naming conventions become really important here
to help prevent mistakes and to use masks.

We call our pools like this:

DSK_Primary Disk Pool
TAPE_   Primary Tape Pool
CPY_ONSITE_ Onsite Copy
CPY_OFFSITE_Offsite Copy

There is a last qualifier used for all pools of the same kind of data.
Remember, you can rename storage pools.

In DRM you can specify a mask for the offsite pools if you want to use the
MOVE DRMEDIA command without specifying a storage pool.  We have a lot of
elaborate code to do this process because we use closed box offsite storage
and manage moving the data on returning tapes to new tapes before they come
back.  We have recently started using the location field to hold a lot of
information, such as the return date, and issue a MOVE DRMEDIA command for
each tape rather than using "*".  This way the location field in the drmedia
and volumes tables contains lots of information that can be reviewed.  Based
on what I can determine the location field in the drmedia table is derived
from the location field in the volumes table.  So, if you update the
location information in the volumes table it shows in the drmedia table.  By
default the location field is derived from the value in DRMSTATUS as well as
the copy storage pool names.
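
For reference, the vanilla flow looks something like this (pool mask and
states per standard DRM; names illustrative):

set drmcopystgpool CPY_OFFSITE_*
move drmedia * wherestate=mountable tostate=vault
/* when emptied tapes return from the vault: */
move drmedia * wherestate=vaultretrieve tostate=onsiteretrieve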

There are many ways to implement DRM and everyone needs to decide what their
requirements are.  Ours is a unique way.

Paul D. Seay, Jr.
Technical Specialist
Naptheon Inc.
757-688-8180


-Original Message-
From: Tommy Templeton [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, January 22, 2003 1:50 PM
To: [EMAIL PROTECTED]
Subject: TSM offsite backups


Hi,

I need to start sending tapes offsite but I want to keep 2 sets of tapes
onsite and have another pool for offsite tapes.
I have the available tapes to use for the offsite pool. I have DRM
installed. Our TSM server is running Version 4.2 Level 1.9 , TSM B/A client
is 4.2 on AIX 4.3.3.
I just need some tips on how to get this going.

thanks,

Tommy Templeton
Senior System Administrator
DFA-MMRS
601-359-3106
e-mail - [EMAIL PROTECTED]



Re: database reorganisation

2003-01-21 Thread Seay, Paul
Roger, I agree with you on your point; however, there is one consideration
which we have found requires an unload/load.  We do a lot of delete
filespaces because we have a lot of server migrations and are required to
keep the old image for 1 year, then delete it.  This is also true for
filespaces converted to unicode.  So, I have hundreds of very large
filespaces that have been deleted, bloating my TSM database to 134GB today.
Less than half of this database is probably good data.  For a 20GB
database on fast hardware, it probably makes no sense to reorganize.  My
134GB database now takes 90 minutes to back up on a good day (no other
activity) and 145 minutes on a heavy activity day.  I will cut my backup
time to less than an hour every day and avoid the collision with my other
processing.  So, I say a reorganization depends on what benefit you are
going to get that is actually long term.  Deleted filespace recovery is
long term.  Backup time is long term, in this example.

Paul D. Seay, Jr.
Technical Specialist
Naptheon Inc.
757-688-8180


-Original Message-
From: Roger Deschner [mailto:[EMAIL PROTECTED]] 
Sent: Saturday, January 18, 2003 12:00 PM
To: [EMAIL PROTECTED]
Subject: Re: database reorganisation


I'm with Dwight. The correct answer is "never". And I'm one of those people
who is constantly tweaking the disk configuration, comparing different
mirroring schemes, evaluating RAID thingies, adjusting buffer sizes, and so
on, but I never unload/load my database.

An ADSM/ITSM (recursive acronym) database is naturally fragmented to a
degree. It reaches a steady-state, and then it doesn't get any worse.
Expiration will create enough space for new objects to be added. You can
unload/load, and you can achieve a performance boost, but only for a while.
It will return to its natural steady-state of fragmentation after some
number of weeks or months. Then you'll have to do it again.

An unload/load DB operation is costly - FOR YOU! It requires considerable
downtime, no small amount of risk, quite a bit of your effort and time for
oversight, and a whole lot of just plain angst. Then after a while you have
to do it all over again. It becomes a treadmill.

Let's say for discussion it improves performance by 25% right off the bat,
but that's only a temporary improvement. If I spend the same amount of time
lobbying my employer to get a 25% faster server computer with 25% more disk
space, so it can run a naturally fragmented database acceptably fast, that's
a permanent improvement.

I would only consider it in a case where something had happened to
drastically reduce the number of objects in the database - such as removing
a group of very busy nodes, or splitting a server into two servers.

So, it's a downtime issue - our end-users expect to be able to restore their
files 24/7. It's also a "labor relations" issue, between us, the **SM
administrators, and our employers. It's much more worthwhile to compensate
for the inevitable degree of fragmentation with hardware, compared to the
cost to your sanity, blood pressure, and marriage, of periodic unload/load
db operations. Hardware is cheap compared to those other things that really
matter to us, and that's why I *NEVER* unload/load db.

Roger Deschner  University of Illinois at Chicago [EMAIL PROTECTED]
 Life is too short to drink cheap beer.


On Fri, 17 Jan 2003, Cook, Dwight E wrote:

>If you want to remove a DB volume because you aren't using that much 
>space based on the Pct. Util., but you can't because the max reduction 
>isn't large enough.
>
>Be it good or bad, I can't say but we've had 10 TSM servers (most are 
>going on 7 years old now) and the only place I've ever unloaded & 
>loaded the TSM data base has been in our test environment... just to 
>see what goes on...  (our TSM db's are between 8 GB & 32 GB's)
>
>Dwight
>
>
>
>-Original Message-
>From: Francois Chevallier [mailto:[EMAIL PROTECTED]]
>Sent: Friday, January 17, 2003 4:31 AM
>To: [EMAIL PROTECTED]
>Subject: database reorganisation
>
>
>What are the best criteria to know if it's time to reorganize the tsm 
>database (by unloaddb /loaddbb) Sincerly
>
>François Chevallier
>Parc Club du Moulin à Vent
>33 av G Levy
>69200 - Vénissieux - France
>tél : 04 37 90 40 56
>



Re: Tape Requests Never Mounting

2003-01-21 Thread Seay, Paul
You likely have a marginal SCSI cabling problem, missing terminator, tape
driver or something like this.  This is probably not a TSM issue.

Paul D. Seay, Jr.
Technical Specialist
Naptheon Inc.
757-688-8180


-Original Message-
From: Mitch Sako [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, January 21, 2003 9:39 PM
To: [EMAIL PROTECTED]
Subject: Tape Requests Never Mounting


I have a new 5.1.5 test server running on Linux that puts up mount requests
for migrations to a set of 4mm manual drives and the server is never
acknowledging that the tape is mounted and therefore never starts the
migration.  I noticed that when I was doing manual labeling that the first
few requests were not being serviced but they magically started
acknowledging the tape in the drive and the labeling process continued as
normal.

I've checked the device class, path, drive and all of the other relevant
things and they all look clean.  Proof of this to me is that the labels were
put on the tapes just fine.  Now, when I test migration the drives never
acknowledge the tapes.

I'm thinking that there is some switch or setting that I need to toggle to
make this work.

Any ideas?

-ms



Re: 64-Bit Filesystem

2003-01-21 Thread Seay, Paul
Which 64 bit file system?

Whenever posting a question like this please provide:

Client or Server type and OS level.
Filesytem type (JFS, JFS2, Veritas, NTFS, XFS, UFS, etc).

Often folks just do not bother to respond to a question like this because of
the missing information.  This list is a great resource.  Make it easy for
the folks to respond.

Paul D. Seay, Jr.
Technical Specialist
Naptheon Inc.
757-688-8180


-Original Message-
From: Sascha Braeuning [mailto:[EMAIL PROTECTED]] 
Sent: Tuesday, January 21, 2003 2:40 AM
To: [EMAIL PROTECTED]
Subject: 64-Bit Filesystem


Hello TSMers,

Does anyone know if the TSM Client supports backing up a 64-bit filesystem?
Where can I find some hints in the docs?


MfG
Sascha Bräuning


Sparkassen Informatik, Fellbach

OrgEinheit: 6322
Wilhelm-Pfitzer Str. 1
70736 Fellbach

Telefon:   (0711) 5722-2144
Telefax:   (0711) 5722-1634

Mailadr.:  [EMAIL PROTECTED]



Re: Triggered dbbackup doesn't start immediately

2003-01-21 Thread Seay, Paul
Apparently, the BACKUP DB command does some preprocessing now.  Not sure
what.  My system can take 3 to 5 minutes.  I think what is happening is that
some log commit logic runs for a while to try to unpin as much of the log
as possible, but that is just a guess.  Then, my backup starts.  Mine are
not triggered backups; they are scheduled to occur before any significant
processing starts.
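
For reference, a trigger like the one described is defined something like
this (device class name illustrative):

define dbbackuptrigger devclass=3590CLASS logfullpct=70 numincremental=6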

Paul D. Seay, Jr.
Technical Specialist
Naptheon Inc.
757-688-8180


-Original Message-
From: PAC Brion Arnaud [mailto:[EMAIL PROTECTED]]
Sent: Monday, January 20, 2003 10:32 AM
To: [EMAIL PROTECTED]
Subject: Triggered dbbackup doesn't start immediately


Hi *SM'ers

I noticed an embarrassing problem on our TSM server (4.2.3.1): I defined a
Dbbackup trigger that should start a db backup as soon as our log reaches
70%.  What happens is that I see the backup triggered, but the effective
copy to tape only starts approximately 20-25 minutes later, sometimes
leading to a server crash due to log saturation (actual log size: 12 GB).
As far as I know a DB backup has the highest priority, so how can it be
that it "waits" so long before proceeding?  More confusing, Dbbackups
initiated manually or per schedule do not behave the same way: the copy to
tape starts immediately ...

01/20/03 15:09:18 ANR4552I Full database backup triggered; started as
process 340. 01/20/03 15:46:23 ANR4554I Backed up 48128 of 4159601 database
pages.
01/20/03 15:46:53 ANR4554I Backed up 127536 of 4159601 database pages.
01/20/03 15:47:23 ANR4554I Backed up 206320 of 4159601 database pages.
Did anybody allready noticed such a behaviour, or have an explanation on
what could be happening there ? Thanks in advance ! Cheers. Arnaud

 =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
| Arnaud Brion, Panalpina Management Ltd., IT Group |
| Viaduktstrasse 42, P.O. Box, 4002 Basel - Switzerland |
| Phone: +41 61 226 19 78 / Fax: +41 61 226 17 01   |
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=



Re: Duplicate TSM's DB or Data Tapes (3590, LTO) ... not within TSM

2003-01-21 Thread Seay, Paul
This is why I run an incremental and a dbsnapshot every day.  And two TSM
database backups for every offsite run.  And a primary, onsite copy, and
offsite copy (DR) for filesystem backups.  And, application database and
application log file offsite copies to roll forward a bad application backup
tape.  And, we are considering an additional offsite copy for filesystem
recoveries.

The bottom line is tape is tape.  It is a mechanical device that has lots of
parts that can break and create bad tapes, so planning recovery from a
failed disaster recovery is important.

We have at least 1 tape with write errors a week.  We immediately move that
data to another volume, to be sure we can read it and to recover if we
cannot.  We have at least 1 drive that goes wacko every 6 months and 50% of
the time eats the tape.  Ours are 3590-E1A drives.

Our drives run nearly 7x24 that are online right now, with about twice the
number coming online soon.  This will relieve the pressure and provide us
with a really nice solution.  Thanks to our previous backup solution that
took more than twice the hardware (16 Magstar drives online to TSM now, 38
total soon) and still could not get the backups done with half the storage
we have now.

The beauty of TSM is you can actually design a nearly bullet proof
environment.  It costs a little more.  In our case, it takes about half the
hardware to do this kind of TSM implementation than it did with our previous
completely single point failure full/incremental solution.

The only place we feel that there is a short fall is TSM does not support
multiple simultaneous copies of database backups for performance and time
saving and in the case of incremental database backups a single point of
failure issue that can be helped by running dbsnapshots more often or
sending the incremental backups to disk on a different disk subsystem (maybe
a case for one of these cheap ATA arrays).

Ultimately, a well planned setup is far better than a proprietary one up
solution from some fly by night vendor that will "save the world".  I know
of some, but will not recommend any because you will not be happy.

The solution you are asking for will require you to manage a bunch of manual
stuff outside of TSM with TSM having no visibility to where the second copy
is.

Yes, we have a lot of hardware to save our environment.  I suggest you
document your failure losses well, do a total cost of ownership analysis to
prevent a future failure, and procure a solution that is affordable and hits
a recovery comfort target that your financial and executive management can
be satisfied with.

Paul D. Seay, Jr.
Technical Specialist
Naptheon Inc.
757-688-8180


-Original Message-
From: Othonas Xixis [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, January 21, 2003 9:29 AM
To: [EMAIL PROTECTED]
Subject: Duplicate TSM's DB or Data Tapes (3590, LTO) ... not within TSM
Importance: High


Hello TSM'ers,

Does anyone know of a way, or a company, where the TSM DB or data tapes can
be duplicated (at a hardware level)?

I do understand the duplication methods within TSM and I am not interested
at this time because I can't use them. We are in a real disaster recovery
situation, not where we lost the TSM server, but where we lost portions of
the data on the tapes, and now we have legal requirements that we need to
fulfill.

Thanks in advance for yr assistance.

Brgds,

Othonas Xixis



Re: oh great SQL gurus..... statement for validating that all of the NT servers all have valid registry backups

2003-01-21 Thread Seay, Paul
The bitfile is only needed to find out what tape an object is on.

This is a nasty issue.  I am guessing what you really want is to know if a
specific machine registry gets broken and is not getting backed up so you
can address whatever happened.  I think a select of the inactives not in the
actives would answer the question, but not the problem.  If you delete
anything or change the objects will still show up, but the object may be
broken in a way that it is unusable.

The following SQL statement may help you.  There is a way to concatenate the
node name to the hl_name and do it for your entire environment, but the SQL
solution set could get so big the TSM server will never give the answer
back.  So, I recommend you get a list of the nodes, make this a macro and
run it for each node.

select hl_name from backups where node_name='[node_name]' and filespace_name
= '\\[node_name]\c$' and hl_name like '\ADSM.SYS\%\MACHINE\%' and
state='INACTIVE_VERSION' and hl_name not in (select hl_name from backups
where node_name='[node_name]' and filespace_name = '\\[node_name]\c$' and
hl_name like '\ADSM.SYS\%\MACHINE\%' and state='ACTIVE_VERSION')

The other option is, there are only 4 objects in an NT world based on what I
can see, SAM, SECURITY, SOFTWARE, SYSTEM.

select node_name, count(hl_name) from backups where node_name='[node_name]'
and filespace_name = '\\[node_name]\c$' and hl_name like
'\ADSM.SYS\%\MACHINE\%' and state='ACTIVE_VERSION' and ll_name in ('SAM',
'SECURITY', 'SOFTWARE', 'SYSTEM') group by node_name

Ultimately, registry integrity is a windows issue and you should be running
registry integrity checking software if that is your problem.  Remember,
recovering a Windows server is not necessarily an "always can event" because
often the problem is the registry that is broken and the backups are damaged
as well.  So, very often you are rebuilding the system and restoring the
application data after a server crash.  I can tell you there is better news.
Under Windows 2000 a real API was created to save the "SYSTEMOBJECT".  That
is a good or a bad thing depending on your point of view.  The way the
system object is saved does some integrity checking and saves the dlls and
other software that are part of the software image.  Now the bad side is TSM
has had a slew of problems getting the system object expiration to work
correctly.

By Windows V10, Paul's code name "RIN TIN TIN", we may have a Windows system
that can do a recovery like a mainframe and save the world.

Paul D. Seay, Jr.
Technical Specialist
Naptheon Inc.
757-688-8180


-Original Message-
From: Lisa Cabanas [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, January 21, 2003 11:31 AM
To: [EMAIL PROTECTED]
Subject: oh great SQL gurus. statement for validating that all of the NT
servers all have valid registry backups


Hello again,

I was hoping one of you nice SQL gods/goddesses (Paul???) would help me
construct a SQL select statement that won't halt my server, won't take days
to run, and will help me validate that all of our NT servers all have active
and inactive copies of c:\adsm.sys\...\MACHINE\*

Now that I get to looking I don't easily see how to get what I am
looking for with a select-- is this actually going to be a show bitfile
kind-of-thing? (yuk)

TIA

lisa



Re: Windows 2000 System State

2003-01-21 Thread Seay, Paul
Yes, but you have to be at 4.1.4 or higher, I believe.  It may be higher
than that.  It is called the system object for Windows 2000 clients and
contains a lot more than just the registry.  It contains dlls and many other
files considered part of the system object.  The biggest issue right now is
the whole system object gets saved each time and amounts to as much as 2GB
on servers like SQL or Exchange servers.  You want to use a special
management class for the system object and limit it to probably a couple of
weeks.
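
A sketch of that, assuming a management class named SYSOBJ_MC set up with a
two-week retention (name illustrative), in the Windows client options file:

include.systemobject ALL SYSOBJ_MC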

Paul D. Seay, Jr.
Technical Specialist
Naptheon Inc.
757-688-8180


-Original Message-
From: Joni Moyer [mailto:[EMAIL PROTECTED]]
Sent: Friday, January 17, 2003 2:07 PM
To: [EMAIL PROTECTED]
Subject: Windows 2000 System State


Hello,

I am responsible for the administrative side of TSM and I am not very
familiar with the client side.  Does anyone know the answer to the following
question?

Can you tell me if TSM gets the System State on Windows 2000 Servers. The
system state is comprised of the following:
COMM+ Class Registration Database
Registry
System Files.

We would need the system state if we needed to restore a server or repair a
damaged server. If it does can you tell me how to identify the system state
in the TSM backups.

Thanks!!!

Joni Moyer
Systems Programmer
[EMAIL PROTECTED]
(717)975-8338



Re: Sizing of Bufpoolsize, everyone take note of this response

2003-01-19 Thread Seay, Paul
We are about to upgrade to a 6 x 750 MHz from a 4 x 450 MHz.  At that point,
I will consider doing more bufferpool.  The problem is the buffer processing
does use some CPU resources, especially during expiration.  So, I will be
able to model the benefits.  We are adding 2GB of memory (total 4GB), so I
may run my bufferpool up to a gigabyte.

Paul D. Seay, Jr.
Technical Specialist
Naptheon Inc.
757-688-8180


-Original Message-
From: Paul Ripke [mailto:[EMAIL PROTECTED]]
Sent: Friday, January 17, 2003 8:48 PM
To: [EMAIL PROTECTED]
Subject: Re: Sizing of Bufpoolsize, everyone take note of this response


On Saturday, January 18, 2003, at 02:36 AM, Seay, Paul wrote:

> As Zorg would say, I know the sound of this music.
>
> The default maxperm is probably killing you.  I am guessing you are
> swapping
>
> 

I agree whole-heartedly! I'd go a step further, and knock maxperm down to
about 10%, or even less. Given the nature of TSM I/O, the AIX buffer cache
is going to be next to useless. It may then be possible to expand the TSM
bufpoolsize beyond recommendations... I've set mine to about 60-70% physical
RAM, up from 30-40%. The speed of selects and "q actlog" have increased by
an order of magnitude. Definitely stay below the level where the system
starts to swap, and don't go too large... TSM buffer cache management can
then become a bottleneck. How big is too large? No idea - it'd be very
dependent on hardware - experiment and see!

Cheers,
--
Paul Ripke
Unix/OpenVMS/DBA
101 reasons why you can't find your Sysadmin:
68: It's 9AM. He/She is not working that late.
-- Koos van den Hout



Re: Sizing of Bufpoolsize, everyone take note of this response

2003-01-17 Thread Seay, Paul
As Zorg would say, I know the sound of this music.

The default maxperm is probably killing you.  I am guessing you are swapping
more than you are running and your swap drives are I/O hot; an iostat, or
topas, will tell you.  This value dictates the amount of real memory that
can be consumed by file pages (non-computational storage).  The default is
80 percent.  There was a long discussion about this on the list about 3
months ago.  The way you change maxperm is with vmtune.  My system is a 2GB
system.  I have maxperm set to 40.  When vmtune lists the value, it gives
the answer in pages at the top and in percent at the bottom.  This is how
our vmtune is set up.  The settings do not survive a boot, so after we were
comfortable with them, we put them in inittab.

Our buffpoolsize is 327680.

/usr/samples/kernel/vmtune -p10 -P40
/usr/samples/kernel/vmtune -F376
/usr/samples/kernel/vmtune -R256

These changes resulted in the following:
vmtune:  current values:
  -p       -P        -r          -R         -f       -F        -N          -W
minperm  maxperm  minpgahead  maxpgahead  minfree  maxfree  pd_npages  maxrandwrt
 52424   209699       2          256        120      376     524288        0

  -M       -w       -k       -c        -b          -B          -u         -l      -d
maxpin  npswarn  npskill  numclust  numfsbufs  hd_pbuf_cnt  lvm_bufcnt  lrubucket  defps
419400   24576     6144       1        93         464           9       131072      1

       -s              -n        -S         -L           -g          -h
sync_release_ilock  nokilluid  v_pinshm  lgpg_regions  lgpg_size  strict_maxperm
        0               0         0           0            0            0

number of valid memory pages = 524249     maxperm=40.0% of real memory
maximum pinable=80.0% of real memory      minperm=10.0% of real memory
number of file memory pages = 396986      numperm=75.7% of real memory

The other two changes, for minfree and maxpgahead, MUST be done in order.
What these do is significantly improve storage pool migration and database
backup speed if your disks are fast.  We have ESS; it really makes a
difference.

I subscribe to the general consensus: set maxperm and minperm really low on
a machine that acts only as a TSM server.

Let me put it in simple terms: you should see a three-order-of-magnitude
improvement.  The system will probably run better than it ever has.  If I
remember correctly, our system performance broke when we took the
buffpoolsize over about 96000.


Paul D. Seay, Jr.
Technical Specialist
Naptheon Inc.
757-688-8180


-Original Message-
From: PAC Brion Arnaud [mailto:[EMAIL PROTECTED]]
Sent: Friday, January 17, 2003 10:04 AM
To: [EMAIL PROTECTED]
Subject: Sizing of Bufpoolsize


Hi *SM fellows,

I'm running TSM 4.2.3.1 on an AIX 4.3.3 system (IBM 6h0) with 2 GB RAM; the
db size is 21 GB, 75 % used, and the log size is 12 GB. For the last 3 weeks
(since we upgraded from 4.2.1.15), I get massive performance degradation:
expire inventory takes ages (approx 20 hours) to go through 9 million
objects, and the cache hit ratio is between 94 and 96 %. I tried to increase
my bufpoolsize from 151 MB to 400 MB in several 100 MB steps without seeing
any improvement, and the system is now heavily paging.
Could you please share your bufpoolsize settings with me, if working in the
same kind of environment, or give me some advice for tuning the server ?
Thanks in advance !

Arnaud

=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
| Arnaud Brion, Panalpina Management Ltd., IT Group |
| Viaduktstrasse 42, P.O. Box, 4002 Basel - Switzerland |
| Phone: +41 61 226 19 78 / Fax: +41 61 226 17 01   |
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=



Re: TSM database question

2003-01-16 Thread Seay, Paul
Presuming the disks are SSA disks: UNLOAD/LOAD is not CPU intensive, and
based on what I have seen from others, I think you can expect 6 hours to
unload and 4 to 5 to load it back.  You can cancel the unload without having
to restore your database from a backup.  But you are committed to the load
once you start it, or to a database restore.  So you do want to do at least
a DBSnapshot, then back up the devconfig and volhistory, before taking down
your system to do the unload/load, so you can recover if the unload tape has
a problem during the load.
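
Something like this before taking the system down (device class and file
names illustrative):

backup db devclass=3580CLASS type=dbsnapshot
backup devconfig filenames=/tsm/devconfig.snapshot
backup volhistory filenames=/tsm/volhist.snapshot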

4.1.2 is a really old level of TSM, no support, you are on your own except
what folks can do for you from the list.

Good Luck.

Paul D. Seay, Jr.
Technical Specialist
Naptheon Inc.
757-688-8180


-Original Message-
From: Ruksana Siddiqui [mailto:[EMAIL PROTECTED]]
Sent: Thursday, January 16, 2003 6:40 PM
To: [EMAIL PROTECTED]
Subject: Re: TSM database question


Paul,

I have a 3580 Library and the TSM server is running on AIX 4.3.3; the
version of TSM is 4.1.2.  We are in the process of updating it, but I wanted
to do an unload and load of the database before the update so that it's nice
and clean.

The TSM server capacity is a SP node with 2 CPU's and 2 GB memory ( PHYSICAL
).

With regards,

-Original Message-----
From: Seay, Paul [mailto:[EMAIL PROTECTED]]
Sent: Friday, 17 January 2003 10:31
To: [EMAIL PROTECTED]
Subject: Re: TSM database question


It really depends on the hardware you have.

Paul D. Seay, Jr.
Technical Specialist
Naptheon Inc.
757-688-8180


-Original Message-
From: Ruksana Siddiqui [mailto:[EMAIL PROTECTED]]
Sent: Thursday, January 16, 2003 6:19 PM
To: [EMAIL PROTECTED]
Subject: TSM database question


I have a TSM database of 28 GB, pct util: 62.4.  The maximum reduction in
the database is 5 GB.  Approximately how long will it take to do the unload
and load of the TSM database?

With regards,

CAUTION - This message may contain privileged and confidential information
intended only for the use of the addressee named above. If you are not the
intended recipient of this message you are hereby notified that any use,
dissemination, distribution or reproduction of this message is prohibited.
If you have received this message in error please notify AMCOR immediately.
Any views expressed in this message are those of the individual sender and
may not necessarily reflect the views of AMCOR.




Re: TSM database question

2003-01-16 Thread Seay, Paul
It really depends on the hardware you have.

Paul D. Seay, Jr.
Technical Specialist
Naptheon Inc.
757-688-8180


-Original Message-
From: Ruksana Siddiqui [mailto:[EMAIL PROTECTED]]
Sent: Thursday, January 16, 2003 6:19 PM
To: [EMAIL PROTECTED]
Subject: TSM database question


I have a TSM database of 28 GB, pct util: 62.4.  The maximum reduction in
the database is 5 GB.  Approximately how long will it take to do the unload
and load of the TSM database?

With regards,

CAUTION - This message may contain privileged and confidential information
intended only for the use of the addressee named above. If you are not the
intended recipient of this message you are hereby notified that any use,
dissemination, distribution or reproduction of this message is prohibited.
If you have received this message in error please notify AMCOR immediately.
Any views expressed in this message are those of the individual sender and
may not necessarily reflect the views of AMCOR.



Re: Quick expiration question

2003-01-16 Thread Seay, Paul
Files that do not have an active version do not get rebound if you set them
to a new policy domain management class.  In fact, we have had to create
dummy files and run a backup with a special management class to get this
kind of data to rebind.

I really want a command where I can say "rebind to a management class
anything with a specific mask and date", and lock the rebound entries
against a client rebind, so that specific backup objects can be managed when
business exceptions come up after the fact.  With a traditional
full/incremental backup, you would just change the retention of the tapes.
With what I am suggesting you get exactly what you want: only the items
changed, controlled under administrator management.  The level of authority
would be Policy Domain.

Paul D. Seay, Jr.
Technical Specialist
Naptheon Inc.
757-688-8180


-Original Message-
From: Adams, Matt (US - Hermitage) [mailto:[EMAIL PROTECTED]]
Sent: Thursday, January 16, 2003 12:38 PM
To: [EMAIL PROTECTED]
Subject: Re: Quick expiration question


In light of this, what would be the best way to protect a node's data from
expiring at all??  To keep from both active and inactive versions expiring??
If we rename the node and filespace, would the time based retention rules
eventually get the data?? Or since the new (renamed) node name never has a
backup, we are ok.  Perhaps moving the node to a policy domain with
unlimited retention is the only way to protect both active and inactive
files??


Just trying to understand this more


Regards,

Matt Adams
Tivoli Storage Manager Team
Hermitage Site Tech
Deloitte and Touche USA LLP



-Original Message-
From: Richard Sims [mailto:[EMAIL PROTECTED]]
Sent: Thursday, January 16, 2003 6:26 AM
To: [EMAIL PROTECTED]
Subject: Re: Quick expiration question


>Just a quickie. When an incremental backup runs, some files get expired.
>Does this mean that they are 'marked' for expiration, but only get
>deleted from the database when the Expire Inventory job is run?

File expiration candidates processing based upon versions (number of same
file) is performed during client Backups (in contrast to time-based
retention rules, which are processed during a later, separate Expiration).

  Richard Sims, BU
- This message (including any attachments) contains confidential information
intended for a specific individual and purpose, and is protected by law.  -
If you are not the intended recipient, you should delete this message and
are hereby notified that any disclosure, copying, or distribution of this
message, or the taking of any action based on it, is strictly prohibited.



Re: Need some help on sequence of "checkin libv ... search" comma nds

2003-01-16 Thread Seay, Paul
Tom, you are correct.  This is the way we do the checkin commands when DR
tapes return because we do a lot of move data commands.  What will happen is
tapes that have no data on them will end up being marked private in the
libvolumes table.  However, it is easy to fix.  All you have to do is an
update libv [library] [volume] status=scratch.  TSM will not mark a tape
scratch for reuse unless it has no data on it.  I have the following select
that I call my "SCRATCH FINDER" that will show you all the volumes that had
the private command issued first or have bad labels and TSM placed the
volumes in private status so it would not try to use them anymore.

select volume_name from libvolumes where status='Private' and
libvolumes.volume_name not in (select volume_name from volumes) and
libvolumes.volume_name not in (select volume_name from volhistory where type
in ('BACKUPFULL', 'BACKUPINCR', 'DBSNAPSHOT', 'EXPORT'))
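
If you want to script the fix, something along these lines would do it (a
rough Python sketch, not a tested tool; dsmadmc must be on the path, and the
admin id, password, and library name below are placeholders):

# Sketch: run the "scratch finder" select through dsmadmc and reset
# each orphaned volume to scratch.  ADMIN, PASS and LIBNAME are
# placeholders, not real values.
import subprocess

ADMIN, PASS, LIBNAME = "admin", "secret", "3494LIB"

SELECT = ("select volume_name from libvolumes where status='Private' "
          "and libvolumes.volume_name not in (select volume_name from volumes) "
          "and libvolumes.volume_name not in (select volume_name from volhistory "
          "where type in ('BACKUPFULL','BACKUPINCR','DBSNAPSHOT','EXPORT'))")

def dsmadmc(cmd):
    # -dataonly=yes suppresses headers so stdout is just the result rows
    out = subprocess.run(
        ["dsmadmc", f"-id={ADMIN}", f"-password={PASS}", "-dataonly=yes", cmd],
        capture_output=True, text=True)
    return [line.strip() for line in out.stdout.splitlines() if line.strip()]

for vol in dsmadmc(SELECT):
    dsmadmc(f"update libvolume {LIBNAME} {vol} status=scratch")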

Paul D. Seay, Jr.
Technical Specialist
Naptheon Inc.
757-688-8180


-Original Message-
From: Kauffman, Tom [mailto:[EMAIL PROTECTED]]
Sent: Thursday, January 16, 2003 4:00 PM
To: [EMAIL PROTECTED]
Subject: Need some help on sequence of "checkin libv ... search" commands


I'm having a senior moment here while trying to revise my TSM server
recovery D/R documentation (we just sprung for a REAL library in the D/R
contract!). (3584 SCSI library)

It's been a l-o-n-g time since I've done this, I need to get the doc revised
this week, and I won't get to test until April.

Two questions:

1) After I do the 'audit library checklabel=barcode' I need to do
two checkins; I seem to recall that the order is critical, and that I need
to do "checkin libv search=yes checklabel=barcode status=scratch"
first and then the "checkin libv search=yes checklabel=barcode
status=private", or I end up with no scratch tapes. But this is memory from
back in the ADSM 3.1 days . . .

2) When I do the database restore (dsmserv restore db dev=lto vol=
commit=yes) with a library, do I get a prompt to insert the tape in the I/O
station and reply, or do I get told to insert the tape in drive x within y
minutes?

TIA

Tom Kauffman
NIBCO, Inc



Re: Database page size?

2003-01-16 Thread Seay, Paul
There is only one size right now, 4096.

Paul D. Seay, Jr.
Technical Specialist
Naptheon Inc.
757-688-8180


-Original Message-
From: Farren Minns [mailto:[EMAIL PROTECTED]]
Sent: Thursday, January 16, 2003 5:37 AM
To: [EMAIL PROTECTED]
Subject: Database page size?


Hi TSMers

What is the default size of a page within the database?

Thanks

Farren - John Wiley & Sons Ltd



Re: Errors after upgrade from TSM 4.2.1.9 to TSM 4.2.3.0

2003-01-15 Thread Seay, Paul
You have to be at 4.2.3.2 and there is a special patch you may have to put
on if cleanup backupgroups fails at the 4.2.3.2 level.  Has a cleanup
backupgroups been run?

Paul D. Seay, Jr.
Technical Specialist
Naptheon Inc.
757-688-8180


-Original Message-
From: Michael Moore [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, January 15, 2003 3:40 PM
To: [EMAIL PROTECTED]
Subject: Re: Errors after upgrade from TSM 4.2.1.9 to TSM 4.2.3.0


I have not seen that one.  We went from 4.2.2.12 to v4.2.3 last week.

The only thing we are having a problem with is backupsets taking extremely
long.  But this was an issue before going to v4.2.3.  Support said the problem
was with 'orphaned' system objects, and that v4.2.3 would correct the problem
(after we ran "cleanup backupgroups").  Well, it didn't.  They are now looking
at other possibilities.

What were you doing when the error occurred?  Backup, restore, reclaim?


Michael Moore
VF Services Inc.
105 Corporate Center Blvd.
Greensboro,  NC  27408
336.424.4423 (Voice)
336.424.4544 (Fax)
336.507.5199 (Pager)



From: David Browne <[EMAIL PROTECTED]>
Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>
Date: 01/15/03 08:24 AM
To: [EMAIL PROTECTED]
Subject: Errors after upgrade from TSM 4.2.1.9 to TSM 4.2.3.0
Please respond to "ADSM: Dist Stor Manager"






I was running TSM 4.2.1.9 on OS390/2.10 and upgraded yesterday to TSM
4.2.3.0 and now I am receiving the following error messages:


ANR9999D SMNODE(2008): ThreadId<942> Duplicate object encountered during
import or rename.

Anyone else have this same problem and what did you do to correct?



Re: Any side effects after upgrading TSM 4.2.1.9 to TSM 4.2.3.0

2003-01-15 Thread Seay, Paul
4.2.3.0 is not good enough.  You need to go to 4.2.3.2.

Paul D. Seay, Jr.
Technical Specialist
Naptheon Inc.
757-688-8180


-Original Message-
From: Charakondala, Chandrasekhar R. [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, January 08, 2003 4:19 AM
To: [EMAIL PROTECTED]
Subject: Any side effects after upgrading TSM 4.2.1.9 to TSM 4.2.3.0


Hi all,

Here we are going to upgrade our TSM 4.2.1.9 server running on AIX, first we
are going to upgrade TSM 4.2.2.0 and then to TSM 4.2.3.0 same day, And I
would like to know any side effects or problems after upgradation.

I will appreciate your valuable information.

Regards,
C.R.Chandrasekhar.
Systems Executive.
TIMKEN Engineering & Research - INDIA (P) Ltd., Bangalore. Phone No:
91-80-5536113 Ext:3032. Email:[EMAIL PROTECTED]



**
This message and any attachments are intended for the individual or entity
named above. If you are not the intended recipient, please do not forward,
copy, print, use or disclose this communication to others; also please
notify the sender by replying to this message, and then delete it from your
system.

The Timken Company
**



Re: recovery log filling up rapidly: Please help: EMERGENCY!!!! !

2003-01-15 Thread Seay, Paul
Run an incremental database backup to empty the log.
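
Since the DB backup trigger is not available on the mainframe, a rough sketch
like the following could stand in for it (Python; dsmadmc, the credentials,
and the device class are placeholders, and it assumes the LOG table exposes
PCT_UTILIZED as on later servers):

# Sketch: kick off an incremental DB backup when recovery log
# utilization crosses a threshold.  Values below are placeholders.
import subprocess, time

ADMIN, PASS, DEVCLASS = "admin", "secret", "TAPECLASS"
THRESHOLD = 80.0   # percent of recovery log in use

def dsmadmc(cmd):
    out = subprocess.run(
        ["dsmadmc", f"-id={ADMIN}", f"-password={PASS}", "-dataonly=yes", cmd],
        capture_output=True, text=True)
    return out.stdout.strip()

while True:
    pct = float(dsmadmc("select pct_utilized from log"))
    if pct >= THRESHOLD:
        dsmadmc(f"backup db devclass={DEVCLASS} type=incremental")
    time.sleep(300)   # re-check every five minutes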

Paul D. Seay, Jr.
Technical Specialist
Naptheon Inc.
757-688-8180


-Original Message-
From: Joni Moyer [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, January 15, 2003 7:02 AM
To: [EMAIL PROTECTED]
Subject: recovery log filling up rapidly: Please help: EMERGENCY!


My recovery log is filling up at a rapid rate.  I am running TSM 4.1.3 on the
mainframe.  The log is 4.6 GB, and my DB is 48 GB with 46 GB in use.  I am not
running expiration of the inventory yet due to a previous deletion of a node's
data; could that be the cause?  I noticed that the log is reaching about 95%
full before I run another full backup.  I usually run a full DB backup every
day at 4:30.

What can I do?  I can't increase the recovery log because I believe on this
version it can only go to 5 GB; for what version is this no longer true?  I
really need help, and I can't find anyone from IBM to return my calls.  My
previous problem with the recovery log is no longer an issue: it does now
reset back to 0% after a full backup, but I am running out of space.  I do
have the log fully extended, and with the 5 GB limit I am stuck.  Also, I
can't set up a DB backup trigger because I am on the mainframe and that
feature of TSM is not available.  Thank you in advance for any help you can
give me.


Joni Moyer
Systems Programmer
[EMAIL PROTECTED]
(717)975-8338



Re: AIX TSM Server, Client, and StgAgent Recommendation: 4.2.x

2003-01-13 Thread Seay, Paul
4.2.3.2 for the Server and latest client available.

Paul D. Seay, Jr.
Technical Specialist
Naptheon Inc.
757-688-8180


-Original Message-
From: Hokanson, Mark [mailto:[EMAIL PROTECTED]]
Sent: Monday, January 13, 2003 11:00 AM
To: [EMAIL PROTECTED]
Subject: AIX TSM Server, Client, and StgAgent Recommendation: 4.2.x


Currently:(TSM Server v4.2.1.9, TSM Client v4.2.1.25, TSM Storage Agent
v4.2.1.9 )

What is the recommendation if you don't want to upgrade yet to v5.1.x?

Mark Hokanson
Thomson Legal and Regulatory



Re: To Collate or Not to Collate?

2003-01-13 Thread Seay, Paul
How many tapes do you have in the tapepool?

Select count(*) from volumes where stgpool_name='TAPEPOOL'

How many drives do you have?

How many mounts are you getting during a stgpool backup?

Do you have maxpr specified on the stgpool backup?

What is the device class mount retention?

Are you using a maximum scratch number for tapepool or are you using private
volumes?

Send a:
q stg tapepool f=d
q devclass [device class] f=d
q stg copypool f=d

Also, run this select to see if you have a lot of private volumes in the
library but not owned by any storage pool.  This happens when a tape is not
labeled.

select volume_name from libvolumes where status='Private' and
libvolumes.volume_name not in (select volume_name from volumes) and
libvolumes.volume_name not in (select volume_name from volhistory where type
in ('BACKUPFULL', 'BACKUPINCR', 'DBSNAPSHOT', 'EXPORT'))

With this information I may be able to help show that the collocation is
maybe not the total culprit.

Paul D. Seay, Jr.
Technical Specialist
Naptheon Inc.
757-688-8180


-Original Message-
From: Theresa Sarver [mailto:[EMAIL PROTECTED]]
Sent: Monday, January 13, 2003 4:06 PM
To: [EMAIL PROTECTED]
Subject: To Collate or Not to Collate?


Hi All;

Environment:
SP Complex (7 wide nodes) running AIX 433 ML10
TSM 415
IBM 3494 (2-3590E drives)

When this was setup a couple years ago, they chose to enable collation, I
understand why - however, scratch tapes are now at a premium.  I would like
to know if I disabled collation if that would help to free up some
tapes...and can it be disabled on the fly?  Or is my only solution to start
trimming backups?

Also, our "tapepool to copypool" backup is now running over 24 hours.  Has
anyone else out there run into this problem before?  And if so - how did you
get around it?  Oh, and I'll save you the time...Management refuses to buy
another drive.  - Time to get creative!

Thank you for your assistance;
Theresa



Re: Reassignment of existing backup filespaces

2003-01-13 Thread Seay, Paul
The best that you can do is a move node data under V5.1 as far as I know.
What you do is get the data in a different primary pool and give them the
primary tapes for those file spaces and a copy of the database.  The node
name does not change in this scenario.  If you need to keep a copy of the
data, then create a copy of the new primary pool in its own copy pool and
once the tapes are removed do a restore volume for each volume in that
primary pool.

This is not much better than what you had before, but it may help.

Paul D. Seay, Jr.
Technical Specialist
Naptheon Inc.
757-688-8180


-Original Message-
From: David le Blanc [mailto:[EMAIL PROTECTED]]
Sent: Sunday, January 12, 2003 10:40 PM
To: [EMAIL PROTECTED]
Subject: Reassignment of existing backup filespaces


Hi TSMers,

I'm in the position of recommending a solution to a sticky problem.

A customer has sold off some infrastructure and applications, and wishes all
backups associated with that infrastructure to go along with the hardware.

Unfortunately, the hardware is running some applications which are not
leaving, but are in-fact being rehosted before the equipment leaves.

*ASSUMING* the data I want to keep is in a particular file system (that is,
no need to worry about sub-filesystem granularity) I can 'export node' the
appropriate filespaces and re-import them under an alternate node name.
Then when the new node has these filespaces, I can delete the original and
give away the media and a copy of the database.

The re-hosted application will then automatically back up to the imported
filespaces under the new host name, and no historical information will be
lost.

The $US10Million question is :-

Can I reassign a file space from NodeX to NodeY without going through the
'export/import' debacle?  There is after all, almost 400 tapes going back I
don't know how many years containing data from some number of terabytes of
IBM ESS storage.

Is there a user-land or 'hidden' command to reassign a filespace from NodeX
to NodeY?

If not, can anyone make a better suggestion on how to do this?

Cheers.
David



Re: Roman Numerals

2003-01-12 Thread Seay, Paul
This is an English/American thing based on Roman numerals.

I = 1
V = 5
X = 10
L = 50
C = 100
D = 500
M = 1000

2003 is written:
MMIII

9 is written:
IX

40 is written:
XL

80 is written:
LXXX

1990 is written:
MCMXC

1998 is written:
MCMXCVIII

You used to see Roman numerals in the movie credits for the year all the time
if they were MGM movies, but the practice has pretty much been dropped now.
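
For the curious, the subtractive rules are easy to mechanize; a toy Python
sketch (illustration only):

# Convert an integer to Roman numerals using the subtractive pairs.
def to_roman(n):
    vals = [(1000, "M"), (900, "CM"), (500, "D"), (400, "CD"),
            (100, "C"), (90, "XC"), (50, "L"), (40, "XL"),
            (10, "X"), (9, "IX"), (5, "V"), (4, "IV"), (1, "I")]
    out = ""
    for value, numeral in vals:
        while n >= value:
            out += numeral
            n -= value
    return out

for year in (9, 40, 80, 1990, 1998, 2003):
    print(year, to_roman(year))   # e.g. 1990 -> MCMXC, 2003 -> MMIII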


Paul D. Seay, Jr.
Technical Specialist
Naptheon Inc.
757-688-8180


-Original Message-
From: Hamish Marson [mailto:[EMAIL PROTECTED]]
Sent: Sunday, January 12, 2003 5:41 PM
To: [EMAIL PROTECTED]
Subject: Re: Calculate 1 MB in TSM


Seay, Paul wrote:

>Yeah, Roman numeral "M" is a 40+ year practice that sales people used
>at the wholesale and manufacturing levels as kind of a shorthand for
>1000.  As they have migrated to computers this has mostly gone away
>because the quantity fields only supported "ea" items standing for
>"each".
>
>

In which country? I've never seen M used as a thousand. Only k (Lower case).

H

--

I don't suffer from Insanity... | Linux User #16396
I enjoy every minute of it...   |
|
http://www.travellingkiwi.com/  |



Re: Calculate 1 MB in TSM

2003-01-12 Thread Seay, Paul
Yeah, Roman numeral "M" is a 40+ year practice that sales people used at the
wholesale and manufacturing levels as kind of a shorthand for 1000.  As they
have migrated to computers this has mostly gone away because the quantity
fields only supported "ea" items standing for "each".

Paul D. Seay, Jr.
Technical Specialist
Naptheon Inc.
757-688-8180


-Original Message-
From: Richard Sims [mailto:[EMAIL PROTECTED]]
Sent: Sunday, January 12, 2003 7:09 AM
To: [EMAIL PROTECTED]
Subject: Re: Calculate 1 MB in TSM


>Unfortunately, our bean counters started using 60K for $60,000.  When
>the mainframe 3380 came out they decided to start counting disk in
>1000s and 100s.  This has just perpetuated on for disk to make it
>easy for people to calculate the space required because data records
>are measured in base 10 not base 2.  So, we are stuck with it.  TSM
>uses the open systems calculation, 1024.

Some businesses further confuse things by internally using the letter M to
denote 1000, in unit quantities in satisfying orders.  They should be
required to perform all their math in Roman numerals.

  Richard Sims, BU



Re: Calculate 1 MB in TSM

2003-01-11 Thread Seay, Paul
Unfortunately, our bean counters started using 60K for $60,000.  When the
mainframe 3380 came out, they decided to start counting disk in 1000s and
100s.  This has perpetuated for disk ever since, to make it easy for people
to calculate the space required, because data records are measured in base 10,
not base 2.  So we are stuck with it.  TSM uses the open systems
calculation, 1024.
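
A quick back-of-the-envelope comparison of the two conventions (Python,
illustration only):

# A "28 GB" figure is a different number of bytes depending on the base.
decimal_gb = 28 * 1000**3        # base-10 "bean counter" gigabytes
binary_gb  = 28 * 1024**3        # base-2 gigabytes, as TSM counts them
print(decimal_gb)                # 28000000000
print(binary_gb)                 # 30064771072
print(binary_gb / decimal_gb)    # ~1.074, roughly a 7% difference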

Paul D. Seay, Jr.
Technical Specialist
Naptheon Inc.
757-688-8180


-Original Message-
From: Hamish Marson [mailto:[EMAIL PROTECTED]]
Sent: Friday, January 10, 2003 5:43 PM
To: [EMAIL PROTECTED]
Subject: Re: Calculate 1 MB in TSM


Braich, Raminder wrote:

>I cannot imagine how people reach 1000 kb or 1000 mb figure. These are
>always calculated as 2 ^ x where x=0...infinity. I believe every
>calculation in software is done using the power of 2. 1000 could be
>used by sales as others suggested but how come they justify 1000 kb is
>bigger than 1024 kb !
>
>1 byte=8 bits
>1 KB = 1024 bits
>1 MB = 1024 KB = 1024*1024=1048576 bits
>1 GB = 1024 MB =1024*1048576=1073741824 bits
>
>

Since you're being pedantic, I think I will too (but then I like to anyway). (Note: CASE is very important.)
(Not CASE is very important)

 1 Byte = 8 bits
 1 kB = 1024 Bytes
 1 MB = 1024 kB = 1024 * 1024 Bytes
 1 GB = 1024 * 1024 * 1024 Bytes

m - milli
c - centi
d - deci (Not often used I admit)
k - kilo
M - Mega
G - Giga

b - bit
B - Byte

These are particularly important when you start swapping between serial
speeds (usually quoted in bits) and sizes (usually quoted in Bytes), e.g.
Ethernet is most commonly 100Mbps (100 megabits per second) while 100MB is
100 MegaBytes, 8x larger...
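
A quick check of that 8x factor (Python, illustration only; real links also
lose some capacity to protocol overhead):

# Why a 100 Mbps link tops out near 12 MB/s of payload.
link_mbps = 100                      # 100 megabits per second
bytes_per_sec = link_mbps * 1_000_000 / 8
print(bytes_per_sec / 1024 / 1024)   # ~11.9 MB/s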

Hamish

>Raminder Braich
>
>-Original Message-
>From: J:rgen Opitz [mailto:[EMAIL PROTECTED]]
>Sent: Friday, January 10, 2003 11:23 AM
>To: [EMAIL PROTECTED]
>Subject: Calculate 1 MB in TSM
>
>
>Hello TSM'ers,
>
>can anyone tell me how TSM calculate 1MB? Is it 1000KB or 1024KB.
>
>Thanks in advance.
>J|rgen
>
>
>Mit freundlichen Gruessen
>
>Juergen Opitz
>Rheinmetall Informationssysteme GmbH
>- Competence Center Rechenzentrum -
>Alfred Pierburgstr. 1, 41460 Neuss
>
>Telefon:  02132/131269, Fax: 02131/5202681, Mobil:
>Mail: [EMAIL PROTECTED]
>http://www.ris.de
>
>For legal and security reasons the information provided in this e-mail
>is not legally binding. Upon request we would be pleased to provide you
>with a legally binding confirmation in written form. Any form of unauthorised
>use, publication, reproduction, copying or disclosure of the content of
>this e-mail is not permitted. This message is exclusively for the
>person addressed or their representative. If you are not the intended
>recipient of this message and its contents, please notify the sender
>immediately.
>
>


--

I don't suffer from Insanity... | Linux User #16396
I enjoy every minute of it...   |
|
http://www.travellingkiwi.com/  |



Re: AUDITDB reports same "Processed <n> database entries" every time

2003-01-11 Thread Seay, Paul
Do you have Windows 2000 clients?  If so, this audit is not going to cleanup
all of the system object related stuff.  It takes 4.2.3.2 to do that and the
cleanup backupgroups command is the recommended approach vs. an AUDITDB.

Support and Development are pretty adamant about not running AUDITDB unless
you have a real problem.  What problem are you trying to solve?

Paul D. Seay, Jr.
Technical Specialist
Naptheon Inc.
757-688-8180


-Original Message-
From: Jurjen Oskam [mailto:[EMAIL PROTECTED]]
Sent: Saturday, January 11, 2003 7:45 AM
To: [EMAIL PROTECTED]
Subject: AUDITDB reports same "Processed <n> database entries" every time


Hi everybody,

I'm running an AUDITDB on our TSM database (3 GB, 4.2.2.13 server on AIX
4.3.3 ML10).

I have used the FILE= parameter to redirect the output to a file. The
auditdb started off OK, but for the last hour it has been reporting
the same amount of processed database entries:

ANR4140I AUDITDB: Database audit process started.
ANR4075I AUDITDB: Auditing policy definitions.
ANR4040I AUDITDB: Auditing client node and administrator definitions.
ANR4135I AUDITDB: Auditing central scheduler definitions.
ANR3470I AUDITDB: Auditing enterprise configuration definitions.
ANR2833I AUDITDB: Auditing license definitions.
ANR4136I AUDITDB: Auditing server inventory.
ANR4138I AUDITDB: Auditing inventory backup objects.
ANR4307I AUDITDB: Auditing inventory external space-managed objects.
ANR4139I AUDITDB: Auditing inventory archive objects.
ANR4137I AUDITDB: Auditing inventory file spaces.
ANR4310I AUDITDB: Auditing inventory space-managed objects.
ANR4306I AUDITDB: Processed 63092 database entries (cumulative).
ANR4306I AUDITDB: Processed 141783 database entries (cumulative).
ANR4306I AUDITDB: Processed 202646 database entries (cumulative).
ANR4306I AUDITDB: Processed 253108 database entries (cumulative).
ANR4306I AUDITDB: Processed 319353 database entries (cumulative).
ANR4306I AUDITDB: Processed 321014 database entries (cumulative).
ANR4306I AUDITDB: Processed 321014 database entries (cumulative).
ANR4306I AUDITDB: Processed 321014 database entries (cumulative).
ANR4306I AUDITDB: Processed 321014 database entries (cumulative).
ANR4306I AUDITDB: Processed 321014 database entries (cumulative).

 ... and so on, with dozens of copies of the last line, i.e. it doesn't
seem to be progressing past 321014 database entries. The dsmserv process
still consumes CPU time (this is a 4-CPU machine, and the dsmserv process
consumes about 1 CPU worth of CPU time), but iostat on the volumes the
database is located on is showing very little I/O activity: every few
seconds a little burst of about 50 KB read or write activity.

The database- and log-files do have increasing timestamps, so they are
being written to.


For a DR-test a few weeks ago, I have restored this database to a little
RS/6000 (512MB RAM, one internal SCSI drive, 333MHz CPU), and audited that
database: it took about nine hours. The production machine is on an
7026-M80 with 4 CPUs and 2 GB RAM.


I am worried about the number of processed database entries remaining
constant. Did anybody see this before? I have searched the archives
but couldn't find this.

Thanks,
--
Jurjen Oskam

PGP Key available at http://www.stupendous.org/



Re: Storage Agent Error: ANR9999D shmcomm.c(1690)

2003-01-11 Thread Seay, Paul
I think you will have to run the dsmc as root.

Paul D. Seay, Jr.
Technical Specialist
Naptheon Inc.
757-688-8180


-Original Message-
From: Hokanson, Mark [mailto:[EMAIL PROTECTED]]
Sent: Friday, January 10, 2003 3:22 PM
To: [EMAIL PROTECTED]
Subject: Storage Agent Error: ANR9999D shmcomm.c(1690)


Storage Management Server v4.2.1.9 (LANFree)
AIX TSM client v4.2.1.25

I get this error when I issue dsmc and query the client schedule.

tsm> q sched
ANR9999D shmcomm.c(1690): ThreadId<13> shm_init: Socket read failure... br 3,
errno 0. ANR8294W Shared Memory session unable to initialize. "xxoo
schedule info xxoo"

Has anyone seen this type of error before?

Mark Hokanson
Thomson Legal and Regulatory



Re: Stability of TSM

2003-01-10 Thread Seay, Paul
5.1.6.0 is new and has one serious known problem.

5.1.5.4 was suggested by support a few weeks ago as being pretty good.
4.2.3.2 is the recommended intermediate upgrade path before going to 5.1.5.4,
especially if you have Windows system objects; support for these was
introduced in some level of 4.2.2.x with lots of bugs that really are not
completely fixed until 4.2.3.2 and, supposedly, 5.1.5.2.

Coming from 4.1.2, you should probably put on 4.2.3.2, then jump to 5.1.5.4
or higher, but not 5.1.6 until the one nasty problem is fixed.

Paul D. Seay, Jr.
Technical Specialist
Naptheon Inc.
757-688-8180


-Original Message-
From: Nelson Kane [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, January 08, 2003 11:16 AM
To: [EMAIL PROTECTED]
Subject: Stability of TSM


Hello Fellow TSMer's,
I have been trying to postpone upgrading my TSM Server for as long as
possible. I'm a firm believer in "if it ain't broke, don't fix it!", but i'm
starting to get funny errors. Anyways, I'm at 4.1.2, and I have been reading
many issues with each 5.X release. I would like to know, from everyone's
experience, what version is the most stable thus far? and can you recommend
a site for upgrade docs. Thanks in advance -kane



Re: Is there a limit on the amount of data an NT client can backu p ?

2003-01-10 Thread Seay, Paul
Keith, we worked with Tivoli to fix this problem.  There was a memory leak
in the ACL processing routine.  The latest TSM Client for SGI fixed this
problem and a previous patch level has the fix.  I believe the current level
is 4.2.3.0.

ftp://service.boulder.ibm.com/storage/tivoli-storage-management/maintenance/
client/v4r2/SGI/v423/

Paul D. Seay, Jr.
Technical Specialist
Naptheon Inc.
757-688-8180


-Original Message-
From: Keith Kwiatek [mailto:[EMAIL PROTECTED]]
Sent: Friday, January 10, 2003 9:46 AM
To: [EMAIL PROTECTED]
Subject: Re: Is there a limit on the amount of data an NT client can backup
?
Importance: High


Hello IBM,

We have an SGI client with about 3 filesystems of about 500 GB each, and
zillions of files. Can't get them to back up. Same situation: the scheduler
drifts away. No error messages.

We are thinking that the scheduler's building of the index blows a gasket
somewhere, and that maybe using the virtual filespace option would break it
up into smaller, more manageable indexes? Any thoughts?

Keith Kwiatek
National Institute of Standards and Technology


- Original Message -
From: "Andrew Raibeck" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Friday, January 10, 2003 7:07 AM
Subject: Re: Is there a limit on the amount of data an NT client can backup
?


> There are no known limitations on the amount of data to be backed up
> that would contribute to a hang. Network problems are one fairly
> common cause of hangs, though I would think you would eventually get
> some kind of network error. You might consider upgrading to the latest
> TSM client patch level (5.1.5.7) to see if the problem may have been
> addressed via a bug fix. Beyond that, if the problem persists, I would
> recommend that you contact IBM support for assistance.
>
> Regards,
>
> Andy
>
> Andy Raibeck
> IBM Software Group
> Tivoli Storage Manager Client Development
> Internal Notes e-mail: Andrew Raibeck/Tucson/IBM@IBMUS Internet
> e-mail: [EMAIL PROTECTED] (change eye to i to reply)
>
> The only dumb question is the one that goes unasked.
> The command line is your friend.
> "Good enough" is the enemy of excellence.
>
>
>
>
> Nick Rutherford <[EMAIL PROTECTED]>
> Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]> 01/10/2003
> 03:15 Please respond to "ADSM: Dist Stor Manager"
>
>
> To: [EMAIL PROTECTED]
> cc:
> Subject:Is there a limit on the amount of data an NT
client can backup ?
>
>
>
> Hello all,
>
> we have a TSM client installed on an NT server that has about 250gb of
> IBM ESS storage attached to it.
> Recently we have experienced problems with the scheduler service which
> appears
> to hang during the backup.
> The client schedule and error logs do not contain any information.
>
> The TSM server is V5.1.1.0
> The TSM client level is V5.1.1.0
>
> The NT server is V4 SP6.
> The NT server also runs Notes V5.
>
> Has anybody seen similar problems when backing up such a large volume
> of data ? Any help is appreciated.
>
> regards,
>
> Nick Rutherford.
>
>
> *
> This e-mail is confidential and intended solely for the use of the
> individual to whom it is addressed.  Any views or opinions presented
> are solely those of the author and do not necessarily represent those
> of Honda of the UK Manufacturing Ltd.
>
> If you are not the intended recipient please notify the sender
> immediately by return e-mail and then delete this message from your
> system.  Also be advised that any use, disclosure, forwarding,
> printing or copying of this e-mail if sent in error is strictly
> prohibited.  Thank you for your co-operation.
> *



Re: move drmedia but tapes werent ejected

2003-01-08 Thread Seay, Paul
What do you mean, "they are in the inport".  The 3494 should put the tapes
away with a category of FF00 and then you just check them in.  You should be
able to do a mtlib command to see their status.

I believe you are getting confused with the J on the tape.  That is not part
of the serial number.  That is the tape type.  The serial number should be
the next 6 digits.

Paul D. Seay, Jr.
Technical Specialist
Naptheon Inc.
757-688-8180


-Original Message-
From: Conko, Steven [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, January 07, 2003 11:25 AM
To: [EMAIL PROTECTED]
Subject: Re: move drmedia but tapes werent ejected


I can't quite seem to get this to work.

I'm using a 3494 automated library with the following commands:

checkin libv 3494 j00544 status=private owner=sy00055 devtype=3590 ... but
that just waits for it in the inport (messages to insert in actlog)

So I try to search...

checkin libv 3494 search=yes status=private devtype=3590
volrange=j00544,j00544 ... it returns immediately saying "0 found"

-Original Message-
From: LeBlanc, Patricia [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, January 07, 2003 9:44 AM
To: [EMAIL PROTECTED]
Subject: Re: move drmedia but tapes werent ejected


just do a checkin, and a checkout.

-Original Message-
From: Conko, Steven [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, January 07, 2003 9:32 AM
To: [EMAIL PROTECTED]
Subject: move drmedia but tapes werent ejected


We're running TSM 4.2 on AIX 4.3.3. We run a daily move drmedia, but in this
case 3 tapes did not get ejected because they were mounted. However, the
move drmedia command made changes to make it look like they are no longer in
the database. Short of actually opening up the library and removing them
(it's a BIG library), how can we either "reintroduce" the tapes and then
eject them, or just eject them?



Re: Gigabit Ethernet speed

2002-12-31 Thread Seay, Paul
My point here was yes, if you have the CPU on the machines to do it.  If you
are talking about between a single client and TSM Server, this really only
applies to TDP backups that allow multiple node definitions for the threads.
This is a long complex discussion.

The more common usage is to have multiple gigabit cards in the TSM server
and split the clients among the TSM server IP addresses for the gigabit
cards.

Paul D. Seay, Jr.
Technical Specialist
Naptheon Inc.
757-688-8180


-Original Message-
From: Antonio J Pires [mailto:[EMAIL PROTECTED]] 
Sent: Monday, December 30, 2002 6:43 AM
To: [EMAIL PROTECTED]
Subject: Re: Gigabit Ethernet speed


Hi,

Can we use 2 or more gigabit interfaces to increase backup bandwidth? Any
experience on this approach?

Thanks in advance,
António Pires



 

  "Seay, Paul"

  cc:

  Sent by: "ADSM:  Subject:  Re: Gigabit
Ethernet speed 
  Dist Stor

  Manager"

  <[EMAIL PROTECTED]

  .EDU>

 

 

  28-12-2002 08:36

  Please respond to

  "ADSM: Dist Stor

  Manager"

 

 




I agree.  However, to my knowledge that is either the default for the P660
offering or there is no way to set it.  We use hardware on our Windows
machines.  Even then, you have to process the packet queues and when moving
70MB/sec of 1500 byte packets, it takes some resources.

Paul D. Seay, Jr.
Technical Specialist
Naptheon Inc.
757-688-8180


-Original Message-
From: Rushforth, Tim [mailto:[EMAIL PROTECTED]]
Sent: Friday, December 27, 2002 8:27 PM
To: [EMAIL PROTECTED]
Subject: Re: Gigabit Ethernet speed


Consider using NIC's with TCP Offload Engines (TOE'S)- this should help with
CPU utilization.

-Original Message-
From: Seay, Paul [mailto:[EMAIL PROTECTED]]
Sent: December 27, 2002 2:53 PM
To: [EMAIL PROTECTED]
Subject: Re: Gigabit Ethernet speed

For some reason we forget that IP packet processing on the client and server
takes a lot of CPU resources.  Check your CPU utilization on both to see what
is happening.  My experience is that a 450 MHz x 4 P660 can process a maximum
of about 70 MB/sec.  When we had only one gigabit interface that is what it
ran at, and also with 2 gigabit interfaces in total.  In both cases the CPU
goes to 100 percent and no more packets can be processed.

It can also be a client issue if they do not have enough CPU to push the
data to the server.

This is the place LANFREE comes into play.

Paul D. Seay, Jr.
Technical Specialist
Naptheon Inc.
757-688-8180


-Original Message-
From: Conko, Steven [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, December 24, 2002 9:44 AM
To: [EMAIL PROTECTED]
Subject: Gigabit Ethernet speed


anybody have any tips on what type of "Network transfer rate" we should be
seeing for our backups over gigabit ethernet? the clients backup via copper
Gb ethernet dedicated for backups to a tsm server connected to a 3494 tape
library with 10 3590 tape drives via fibre channel/brocade switch.

there is only one client on one gigabit interface and 3 on another.

any tips for improving network performance?

thanks

Steven A. Conko
Senior Unix Systems Administrator
ADT Security Services, Inc.



Re: Gigabit Ethernet speed

2002-12-28 Thread Seay, Paul
I agree.  However, to my knowledge that is either the default for the P660
offering or there is no way to set it.  We use hardware on our Windows
machines.  Even then, you have to process the packet queues and when moving
70MB/sec of 1500 byte packets, it takes some resources.

Paul D. Seay, Jr.
Technical Specialist
Naptheon Inc.
757-688-8180


-Original Message-
From: Rushforth, Tim [mailto:[EMAIL PROTECTED]]
Sent: Friday, December 27, 2002 8:27 PM
To: [EMAIL PROTECTED]
Subject: Re: Gigabit Ethernet speed


Consider using NIC's with TCP Offload Engines (TOE'S)- this should help with
CPU utilization.

-Original Message-----
From: Seay, Paul [mailto:[EMAIL PROTECTED]]
Sent: December 27, 2002 2:53 PM
To: [EMAIL PROTECTED]
Subject: Re: Gigabit Ethernet speed

For some reason we forget that IP packet processing on the client and server
takes a lot of CPU resources.  Check your CPU utilization on both to see what
is happening.  My experience is that a 450 MHz x 4 P660 can process a maximum
of about 70 MB/sec.  When we had only one gigabit interface that is what it
ran at, and also with 2 gigabit interfaces in total.  In both cases the CPU
goes to 100 percent and no more packets can be processed.

It can also be a client issue if they do not have enough CPU to push the
data to the server.

This is the place LANFREE comes into play.

Paul D. Seay, Jr.
Technical Specialist
Naptheon Inc.
757-688-8180


-Original Message-
From: Conko, Steven [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, December 24, 2002 9:44 AM
To: [EMAIL PROTECTED]
Subject: Gigabit Ethernet speed


anybody have any tips on what type of "Network transfer rate" we should be
seeing for our backups over gigabit ethernet? the clients backup via copper
Gb ethernet dedicated for backups to a tsm server connected to a 3494 tape
library with 10 3590 tape drives via fibre channel/brocade switch.

there is only one client on one gigabit interface and 3 on another.

any tips for improving network performance?

thanks

Steven A. Conko
Senior Unix Systems Administrator
ADT Security Services, Inc.



Re: Gigabit Ethernet speed

2002-12-27 Thread Seay, Paul
For some reason we forget that IP packet processing on the client and server
takes a lot of CPU resources.  Check your CPU utilization on both to see what
is happening.  My experience is that a 450 MHz x 4 P660 can process a maximum
of about 70 MB/sec.  When we had only one gigabit interface that is what it
ran at, and also with 2 gigabit interfaces in total.  In both cases the CPU
goes to 100 percent and no more packets can be processed.

It can also be a client issue if they do not have enough CPU to push the
data to the server.

This is the place LANFREE comes into play.

Paul D. Seay, Jr.
Technical Specialist
Naptheon Inc.
757-688-8180


-Original Message-
From: Conko, Steven [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, December 24, 2002 9:44 AM
To: [EMAIL PROTECTED]
Subject: Gigabit Ethernet speed


anybody have any tips on what type of "Network transfer rate" we should be
seeing for our backups over gigabit ethernet? the clients backup via copper
Gb ethernet dedicated for backups to a tsm server connected to a 3494 tape
library with 10 3590 tape drives via fibre channel/brocade switch.

there is only one client on one gigabit interface and 3 on another.

any tips for improving network performance?

thanks

Steven A. Conko
Senior Unix Systems Administrator
ADT Security Services, Inc.



Re: incrbydate option

2002-12-23 Thread Seay, Paul
This subject looked like burnt potatoes the last time it was discussed.  It
is finished being discussed here, the archives are the answer!

Paul D. Seay, Jr.
Technical Specialist
Naptheon Inc.
757-688-8180


-Original Message-
From: Mark D. Rodriguez [mailto:[EMAIL PROTECTED]]
Sent: Friday, December 20, 2002 1:31 AM
To: [EMAIL PROTECTED]
Subject: Re: incrbydate option


Thomas Denier wrote:

>The documentation for the incrbydate options states that it will "back
>up new and changed files with a modification date later than the last
>incremental backup stored at the server". This is somewhere between
>ambiguous and incoherent. An incremental backup takes place over a time
>interval, not at a specific moment. Does anyone know exactly what time
>is compared to the file modification times?
>
>The clocks on most if not all of our client systems are synchonized to
>an NNTP server. The clock on our OS/390 TSM server is synchronized to
>the operator's wristwatch. As a result, the mainframe clock is often as
>much as several minutes out of synchronization with the client clocks.
>How does this affect the behavior of the incrbydate option?
>
>I am getting extremely tired of spending time dealing with amgiguities,
>omissions, and outright errors in the TSM documentation.
>
>
Thomas,

Rather than re-hash an old subject, may I suggest you look in the archives
of 3 or 4 months ago. There was a rather long thread that covered this
subject at length. However, I will give you the short version. Every time
you do a regular incremental backup there is a timestamp (TS) that is set in
the ITSM DB. It is that TS that is used by the INCRBYDATE option. It simply
checks to see if the file modification time is newer than the TS I just
mentioned. There are many drawbacks to using INCRBYDATE; if you don't need
it, I recommend that you don't use it.

On another note, I strongly agree with what Richard said about giving
feedback to the Documentation group.

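In toy terms the comparison works like this (a Python sketch of the idea only,
not the actual client logic):

# INCRBYDATE, in miniature: pick files whose mtime is newer than the
# timestamp the server recorded for the last full incremental.
import os, datetime

def incrbydate_candidates(paths, last_incr_ts):
    picked = []
    for p in paths:
        mtime = datetime.datetime.fromtimestamp(os.path.getmtime(p))
        if mtime > last_incr_ts:
            picked.append(p)
    return picked

# Note: if the server clock runs ahead of the client clock, files
# changed in that window can be skipped -- one of the drawbacks above.
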
--
Regards,
Mark D. Rodriguez
President MDR Consulting, Inc.


===
MDR Consulting
The very best in Technical Training and Consulting.
IBM Advanced Business Partner
SAIR Linux and GNU Authorized Center for Education
IBM Certified Advanced Technical Expert, CATE
AIX Support and Performance Tuning, RS6000 SP, TSM/ADSM and Linux Red Hat
Certified Engineer, RHCE

===



Re: TDP R/3

2002-12-23 Thread Seay, Paul
Just being able to use the extended SAPDBA utilities that TDP provides the
hooks for is a plus.  No SAPDBA should leave home without them.

Paul D. Seay, Jr.
Technical Specialist
Naptheon Inc.
757-688-8180


-Original Message-
From: Bill Zhang [mailto:[EMAIL PROTECTED]]
Sent: Friday, December 20, 2002 11:48 AM
To: [EMAIL PROTECTED]
Subject: Re: TDP R/3


Doug,

Can an online backup back up open database files?

Can parallel backup paths and multi-threading be among the
benefits?

Thanks.

Bill


--- "Nelson, Doug" <[EMAIL PROTECTED]> wrote:
> TDP can backup open database files. In a 24x7
> environment, TDP is essential. If you have quiet
> periods where you can do a dynamic backup (and this
> is appropriate), or an export and backup, then you
> don't need it.
>
> Douglas C. Nelson
> Distributed Computing Consultant
> Alltel Information Services
> Chittenden Data Center
> 2 Burlington Square
> Burlington, Vt. 05401
> 802-660-2336
>
>
> -Original Message-
> From: Bill Zhang [mailto:[EMAIL PROTECTED]]
> Sent: Friday, December 20, 2002 11:01 AM
> To: [EMAIL PROTECTED]
> Subject: TDP R/3
>
>
> Can somebody explain to me why use TDP for R/3 to
> back up a SAP DB2 database when you can back up DB2
> offline/online directly to TSM?
> What are the benefits?
>
> Thanks a lot.
>
> Bill
>
> __
> Do you Yahoo!?
> Yahoo! Mail Plus - Powerful. Affordable. Sign up
> now.
> http://mailplus.yahoo.com


__
Do you Yahoo!?
Yahoo! Mail Plus - Powerful. Affordable. Sign up now.
http://mailplus.yahoo.com



Re: Idle system fails with Media mount not possible

2002-12-20 Thread Seay, Paul
YES, if the transaction size is too large on the client this could happen.
What is your MAXSIZE set to on the disk pool?  That stops this from
happening.

Paul D. Seay, Jr.
Technical Specialist
Naptheon Inc.
757-688-8180


-Original Message-
From: Conko, Steven [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, December 18, 2002 10:38 AM
To: [EMAIL PROTECTED]
Subject: Re: Idle system fails with Media mount not possible


Well, by using the piecemeal method of individually backing up everything
that failed, I have finally been able to run a successful system-wide backup.
Strange. Is it possible, with a 1 GB filesize limit on the diskpool, that
several files added up to more than 1 GB? Would that cause a problem?

-Original Message-
From: Kent Monthei [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, December 18, 2002 9:59 AM
To: [EMAIL PROTECTED]
Subject: Re: Idle system fails with Media mount not possible


On the client, do a 'dsmc show opt' and a 'dsmc q inclexcl' to check for
stray management-class bindings that are different from your other working
clients.  If all your clients don't belong to a common domain, then check
the server for missing copygroup definitions, incorrect target pool, target
pool with no associated volume, and/or policysets that haven't been
'activate'd.





"Conko, Steven" <[EMAIL PROTECTED]>

Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]> 18-Dec-2002 09:29
Please respond to "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>




To: ADSM-L

cc:
Subject:Re: Idle system fails with Media mount not possible

The strange thing is that other clients are backing up fine... these are
three very similar, nearly identical clients, all configured exactly the
same. The others back up fine, and this incremental keeps failing in the same
spot with the media mount not possible.

-Original Message-
From: Robert L. Rippy [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, December 18, 2002 9:12 AM
To: [EMAIL PROTECTED]
Subject: Re: Idle system fails with Media mount not possible


I've also seen this where Unix hasn't released the drives from the OS. TSM
sees them as available, but Unix does not. I have had to delete the drives
and redefine them so that they can be used. It was a bug in a certain level
of TSM code, but I can't remember which one, or even if that is truly your
problem.

Thanks,
Rob.



From: "Conko, Steven" <[EMAIL PROTECTED]> on 12/18/2002 09:03 AM

Please respond to "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>

To:   [EMAIL PROTECTED]
cc:
Subject:  Re: Idle system fails with Media mount not possible

The client was already set to 2 mount points. The storage pool does not fill
up. Any other suggestions?

-Original Message-
From: Robert L. Rippy [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, December 18, 2002 7:47 AM
To: [EMAIL PROTECTED]
Subject: Re: Idle system fails with Media mount not possible


Query the node with F=d and change Number of Mount Points from 0 to at least
the number of drives you have in the library. What probably happened is that
the stgpool filled up and the backup tried to continue on tape, but with
Number of Mount Points set to 0 the mount was refused and the backup failed.

Thanks,
Robert Rippy




From: "Conko, Steven" <[EMAIL PROTECTED]> on 12/17/2002 04:19 PM

Please respond to "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>

To:   [EMAIL PROTECTED]
cc:
Subject:  Idle system fails with Media mount not possible

strange one... and ive looked at everything i can think of.

In client dsmerror.log:

12/17/02   15:01:54 ANS1228E Sending of object
'/tibco/logs/hawk/log/Hawk4.log' failed
12/17/02   15:01:54 ANS1312E Server media mount not possible

12/17/02   15:01:57 ANS1312E Server media mount not possible



In activity log:

ANR0535W Transaction failed for session 1356 for node
SY00113 (AIX) - insufficient mount points available to
satisfy the request.


There is NOTHING else running on this TSM server. All 6 drives are online.
The backup is going to an 18GB diskpool that is 8% full, there are plenty of
scratch tapes, and I set max mount points to 2, keep mount point=yes. It
starts backing up the system and then just fails... always at the same point.
The file it is trying to back up does not exceed the max size. All drives are
empty and online. The diskpool is online. I see the sessions start and then,
just after a minute or 2, abort.

any ideas?



Re: Many, almost empty volumes ??

2002-12-20 Thread Seay, Paul
The short answer is probably yes.  TSM does some checking of files that span
tapes in the reclamation criteria to prevent an endless chain of
reclamation.

Paul D. Seay, Jr.
Technical Specialist
Naptheon Inc.
757-688-8180


-Original Message-
From: John [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, December 18, 2002 11:21 AM
To: [EMAIL PROTECTED]
Subject: Many, almost empty volumes ??


Hi TSM'rs

I am sure someone has had this before..

TSM Server 4.2 (Windows 2000) connected to a SAN based 18 slot LTO 3583.

The total amount of storage is around 60GB, which means there should never
really be more than two, or at worst three, tapes in the offsite pool. I have
five. These are data tapes, not DBBackup tapes.

Even after doing reclamation of the offsite pool the three tapes in question
remain offsite and remain at 0.5, 2.1, 0.6 percent used.

Is this normal? I was under the impression that reclamation is supposed to
avoid this.

Would appreciate any thoughts on this

Rgds
John



Re: Select statements syntax: The Answer

2002-12-20 Thread Seay, Paul
Try this on for size.

select node_name, cast(platform_name as char(16)) as "OS/Name  ",
cast(client_os_level as char(10)) as "OS/Level", cast (client_version as
char(1)) || '.' || cast(client_release as char(1)) || '.' || cast
(client_level as char(1)) || '.' || trim(cast(client_sublevel as char(2)))
as "Level" from nodes order by 2,4,1

Paul D. Seay, Jr.
Technical Specialist
Naptheon Inc.
757-688-8180


-Original Message-
From: Jolliff, Dale [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, December 18, 2002 1:13 PM
To: [EMAIL PROTECTED]
Subject: Select statements syntax


I need to create a select statement that concatenates numeric fields as text:
a single text field built from the four numeric version fields in the NODES
table.

Once I get the fields CAST as CHAR type, how do I concatenate them?

I tried SUBSTR but it pukes on too many arguments.

This works:

select substr(cast(client_version as char(1)),1,1) from nodes

This doesn't:

select substr(cast(client_version as char(1)),cast(client_release as
char(1)),1,1) from nodes

String concatenation is a basic function, I know it's there, I just can't
remember the function name.



Re: SELECT statement to determine if any drives available

2002-12-20 Thread Seay, Paul
Actually, the ALLOCATED_TO field in the DRIVES table has the information.

Select drive_name, device_type from drives where allocated_to IS NULL

This works on V4.2.  This area dramatically changed on V5.  I do not have a
V5 system to look at what is required to get the same answer.
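
For the scripting need in the original note below, a wrapper like this could
poll until a drive frees up (a rough Python sketch; the credentials are
placeholders and the DRIVES table is assumed to be at the V4.2 layout):

# Wait until at least one drive is not allocated before starting work.
import subprocess, time

def free_drives():
    out = subprocess.run(
        ["dsmadmc", "-id=admin", "-password=secret", "-dataonly=yes",
         "select drive_name from drives where allocated_to IS NULL"],
        capture_output=True, text=True)
    return [l.strip() for l in out.stdout.splitlines() if l.strip()]

while not free_drives():
    time.sleep(60)   # both drives busy (e.g. reclamation); keep waiting
print("drive available")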

Paul D. Seay, Jr.
Technical Specialist
Naptheon Inc.
757-688-8180


-Original Message-
From: Christian Sandfeld [mailto:[EMAIL PROTECTED]]
Sent: Thursday, December 19, 2002 7:34 AM
To: [EMAIL PROTECTED]
Subject: SELECT statement to determine if any drives available


Hi list,

Does anybody know of a SELECT statement that will show if any drives are
available for use (not in use by another process or session)?

I need this for scripting purposes as I have only two drives in my library,
and while doing reclamation both drives are in use.


Kind regards,

Christian



FW: Select Statement: The ANSWER (GROUP BY)

2002-12-19 Thread Seay, Paul
This was what I finally provided to the person asking the question.

Try this as a define script input through the browser.

select summary.entity as ""NODE NAME"", nodes.domain_name as ""DOMAIN"",
nodes.platform_name as ""PLATFORM"", cast((cast(sum(summary.bytes) as float)
/ 1024 / 1024) as decimal(10,2)) as MBYTES , count(*) as ""CONNECTIONS"",
cast (client_version as char(1)) || '.' || cast (client_release as char(1))
|| '.' || cast (client_level as char(1)) || '.' || trim(cast(client_sublevel
as
char(2))) as ""Level"" from summary ,nodes where
summary.entity=nodes.node_name and summary.activity='BACKUP' and start_time
>current_timestamp - 1 day group by entity, domain_name, platform_name,
client_version, client_release, client_level, client_sublevel order by
MBytes desc

Make sure when you paste it in the window that you take all the returns out.
There should be only one line when you get done.

It worked for me.

If you do a q script f=d this is what you will see if it is right:

tsm: TSMPRD00>q script summary_profile f=d

           Name: SUMMARY_PROFILE
    Line Number: Description
        Command: Summary of backups and levels
 Last Update by (administrator): PDS00
 Last Update Date/Time: 12/12/2002 17:14:57

           Name: SUMMARY_PROFILE
    Line Number: 1
        Command: select summary.entity as "NODE NAME", nodes.domain_name as
                 "DOMAIN", nodes.platform_name as "PLATFORM",
                 cast((cast(sum(summary.bytes) as float) / 1024 / 1024) as
                 decimal(10,2)) as MBYTES, count(*) as "CONNECTIONS",
                 cast(client_version as char(1)) || '.' ||
                 cast(client_release as char(1)) || '.' ||
                 cast(client_level as char(1)) || '.' ||
                 trim(cast(client_sublevel as char(2))) as "Level"
                 from summary, nodes
                 where summary.entity=nodes.node_name and
                 summary.activity='BACKUP' and
                 start_time>current_timestamp - 1 day
                 group by entity, domain_name, platform_name, client_version,
                 client_release, client_level, client_sublevel
                 order by MBytes desc
 Last Update by (administrator): PDS00
 Last Update Date/Time: 12/12/2002 17:14:57



Paul D. Seay, Jr.
Technical Specialist
Naptheon Inc.
757-688-8180


-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]]
Sent: Thursday, December 12, 2002 3:26 PM
To: Seay, Paul
Subject: RE: Select Statement: The ANSWER



Thanks, I will try the command line.





"Seay, Paul"


heon.com>cc:

 Subject: RE: Select Statement:
The ANSWER
12/12/2002

12:14 PM







OK,
Try pasting it into the command first and executing it.  Have you ever used
the admin GUI on Windows?  It really works well for stuff like this.

The problem is, in a define script all of the double quotes have to be keyed
twice.

Paul D. Seay, Jr.
Technical Specialist
Naptheon Inc.
757-688-8180


-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]]
Sent: Thursday, December 12, 2002 2:01 PM
To: Seay, Paul
Subject: RE: Select Statement: The ANSWER



Paul,

if I copy the select script into a define command script window with no
modifications, I get a result when I click on finish stating, page cannot be
displayed.  So I was wondering what if any modifications on the variables do
I have to do?  I am running Win 2k on TSM server, TSM client is 5.1.5.2.

Thanks,  Ron




"Seay, Paul"


heon.com>cc:

 Subject: RE: Select Statement:
The ANSWER
12/12/2002

10:43 AM







Describe what you mean by not getting anywhere.  Do you mean it never
responds, gets a return code of 11, return code of 3, or what?

Paul D. Seay, Jr.
Technical Specialist
Naptheon Inc.
757-688-8180


-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]]
Sent: Thursday, December 12, 2002 12:41 PM
To: [EMAIL PROTECTED]
Subject: Select Statement: The ANSWER



Paul,

I am trying to use this select statement but am not getting anywhere.  Is
there a varia

Re: TDP Exchange Failing

2002-12-16 Thread Seay, Paul
Try changing the buffer size to 64K instead of the default, and the number of
buffers to 4. Everyone I have suggested this change to, and our own site, have
found that the performance is good and that the fragmentation issues with
getting 1MB buffers are eliminated.

Paul D. Seay, Jr.
Technical Specialist
Naptheon Inc.
757-688-8180


-Original Message-
From: Conko, Steven [mailto:[EMAIL PROTECTED]]
Sent: Monday, December 16, 2002 10:45 AM
To: [EMAIL PROTECTED]
Subject: TDP Exchange Failing


hi all! running into a problem here.

TSM Server = AIX 4.3.3/TSM 4.2, 3494 Tape Library, 3590 type drives TDP
Client = WinNT 4.0, TSM Client 4.2, TDP Version

Getting this error in the Server Activity log:

12/15/02   22:01:25  ANE4993E (Session: 139502, Node: MAILJAXEX)  TDP
MSExchgV2
  NT ACN3502 TDP for Microsoft Exchange: full backup
of
  Information Store from server MAILJAX failed, rc =
418.

12/15/02   22:01:25  ANR2579E Schedule FUL_EXH_BACKUP_2200 in domain
ADT_NT for
  node MAILJAXEX failed (return code 402).


Got this message from the client log:

12/15/2002 22:01:25 ANS1309W (RC9)Requested data is offline
12/15/2002 22:01:25 Retrying failed backups...
12/15/2002 22:01:25 Total storage groups requested for backup:  1
12/15/2002 22:01:25 Total storage groups backed up: 0
12/15/2002 22:01:25 Total storage groups expired:   0
12/15/2002 22:01:25 Total storage groups excluded:  0
12/15/2002 22:01:25 Throughput rate:6,558.73
Kb/Sec
12/15/2002 22:01:25 Total bytes transferred:344,961,289
12/15/2002 22:01:25 Elapsed processing time:51.36 Secs


any idea whats going on here?



Re: 3494 libray lifespan??

2002-12-16 Thread Seay, Paul
I think it is more a question of when a VTS replacement will become
available, and 3590 is significantly slower than other technologies.  There
is still development going on for the 3494 library itself.  There will be a
replacement 3590 drive soon.  It is not clear whether that drive will go in
the 3494 library or in an LTO-type library.  My guess is both, but I have not
seen a product direction, so anything is possible.

At this point, I would review my data requirements, install new capacity in
LTO for open systems if it is more cost effective, and seek an answer from
IBM.

Paul D. Seay, Jr.
Technical Specialist
Naptheon Inc.
757-688-8180


-Original Message-
From: Prather, Wanda [mailto:[EMAIL PROTECTED]]
Sent: Friday, December 13, 2002 1:16 PM
To: [EMAIL PROTECTED]
Subject: 3494 libray lifespan??


Does anybody have an idea what is reasonable to expect for the lifespan of a
3494 library?

My manager wants to know when we should budget for a new one. But I've never
heard of anybody replacing a 3494 because of age - just upgrading to new
drives!



Re: Backup of Tape Pool Still Running!

2002-12-14 Thread Seay, Paul
I have seen this on the 4.2.x.x server code levels.  What causes it to hang
is a storage pool related command issued by an admin.  If you do a q sessions,
you should find an admin session that has been running for a long time.
Cancel the admin session and the problem will clear.

Paul D. Seay, Jr.
Technical Specialist
Naptheon Inc.
757-688-8180


-Original Message-
From: Nelson Kane [mailto:[EMAIL PROTECTED]]
Sent: Friday, December 13, 2002 11:27 PM
To: [EMAIL PROTECTED]
Subject: Backup of Tape Pool Still Running!


Hello TSM'ers,
Did anyone come across this issue? We have a scheduled backup that starts at
6am and usually ends by 9. I just logged on to check on other processes and
noticed that the schedule is still running but the byte count is not
changing:
Primary Pool UNIX_TAPEPOOL, Copy Pool DRM_TAPEPOOL, Files Backed Up: 2179,
Bytes Backed Up: 3,930,293,749, Unreadable Files: 0, Unreadable Bytes: 0.
Current Physical File (bytes): 3,122,565,063

I issued a 'can pr' command and was told that the cancel is already
pending! Thus, I can't cancel it. I hate to just bounce the TSM server
without knowing what might have caused this error! Any help is appreciated!
Thanks in advance! -kane



Re: multiple image backups

2002-12-12 Thread Seay, Paul
What have you set resourceutilization to in the dsm.opt, and what is the
mount point limit for the node?

Paul D. Seay, Jr.
Technical Specialist
Naptheon Inc.
757-688-8180


-Original Message-
From: J D Gable [mailto:[EMAIL PROTECTED]]
Sent: Thursday, December 12, 2002 1:55 PM
To: [EMAIL PROTECTED]
Subject: multiple image backups


Hello all,

Does anyone know the best way (if any) to run multiple image backups from
one client at once?  We have 8 drives available to us, but we can only seem
to get one image (raw device) backup to kick off at a time. Meanwhile there
are 7 drives sitting idle.  Any assistance would be greatly appreciated.

Thanks,
Josh Gable
EDS



Re: K-cartridges besides J-cartridges

2002-12-12 Thread Seay, Paul
Is this kind of like the hyperjump they demonstrated at the end of the
movie, SuperNova?  Does the tape end up pregnant?

Paul D. Seay, Jr.
Technical Specialist
Naptheon Inc.
757-688-8180


-Original Message-
From: Cook, Dwight E [mailto:[EMAIL PROTECTED]]
Sent: Thursday, December 12, 2002 7:35 AM
To: [EMAIL PROTECTED]
Subject: Re: K-cartridges besides J-cartridges


and if you've ever had a notch tab fall out and get lost...in a pinch you
can (for a K tape) get the bottom one from a J tape and the top one from a
cleaning tape.  The colors will be all out of whack but the slots will be
proper :-)

Dwight



-Original Message-
From: Seay, Paul [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, December 11, 2002 11:47 PM
To: [EMAIL PROTECTED]
Subject: Re: K-cartridges besides J-cartridges


And if you look at the notches on a J and a K tape, you will notice that they
have different configurations.  So if you put a K tape in a J-only drive, it
will kick it right back out.  I thought that all B-to-E upgrade kits were K
capable; I did not think that kit came out until the E upgrade kit for K was
available.  If your upgrade kits were installed after July 2001, they just
about have to be K capable.

Paul D. Seay, Jr.
Technical Specialist
Naptheon Inc.
757-688-8180


-Original Message-
From: Cook, Dwight E [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, December 11, 2002 9:16 AM
To: [EMAIL PROTECTED]
Subject: Re: K-cartridges besides J-cartridges


Should see a "2X" sticker on the back of your 3590 drive if it is set to
deal with double length tapes. Also the internal tape spool is green (along
with one or two other internal parts, I seem to recall) so you might be able
to look in through the cooling holes in the top of the case to double check
(if you don't find a "2X" sticker on the drive)


Dwight



-Original Message-
From: Jim Sporer [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, December 10, 2002 4:25 PM
To: [EMAIL PROTECTED]
Subject: Re: K-cartridges besides J-cartridges


There is a difference between the J and K tapes, and the 3590 drives need to
support the extended-length tapes.  If the drives are new there is no
problem, but older 3590E drives may need an upgrade to support the
extended-length tapes.  Check with your CE.
Jim Sporer


At 11:12 PM 12/10/2002 +0100, you wrote:
>Hello,
>
>AIX 5.1, TSM 4.2.1.19, IBM 3494 library, 3590E-drives
>
>At the moment we use about 900 J-cartridges in our library. Is it
>possible to checkin K-cartridges and use them besides the J-cartridges.
>
>During the upgrade of our drives from 3590B to 3590E, we changed the status
>of our volumes to read-only because of the different tracks used on tape.
>
>As far as I know, the only difference between J and K is the length of the
>tape, so I guess there will be no problem.
>
>I'm wondering if someone is using both J and K cartridges and whether they
>encountered any problems.
>
>Thanx,
>
>Brian.
>



Re: Select Statement: The ANSWER

2002-12-11 Thread Seay, Paul
select summary.entity as "NODE NAME", nodes.domain_name as "DOMAIN",
       nodes.platform_name as "PLATFORM",
       cast((cast(sum(summary.bytes) as float) / 1024 / 1024) as decimal(10,2)) as MBYTES,
       count(*) as "CONNECTIONS",
       cast(client_version as char(1)) || '.' || cast(client_release as char(1)) || '.' ||
       cast(client_level as char(1)) || '.' || trim(cast(client_sublevel as char(2))) as "Level"
  from summary, nodes
 where summary.entity = nodes.node_name
   and summary.activity = 'BACKUP'
   and start_time > current_timestamp - 1 day
 group by entity, domain_name, platform_name, client_version, client_release,
       client_level, client_sublevel
 order by MBytes desc

I had always wondered if this could be done.  I played with it until I
figured it out.  It was just a matter of looking at it from a group by point
of view.  The only problem is it does not fit across the screen.
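
If the width is a problem, one workaround (a sketch; the credentials are
placeholders) is to run the query through the administrative client in list
mode, which prints one column per line:

   dsmadmc -id=admin -password=secret -displaymode=list "select ..."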

Paul D. Seay, Jr.
Technical Specialist
Naptheon Inc.
757-688-8180


-Original Message-
From: Gill, Geoffrey L. [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, December 11, 2002 10:56 AM
To: [EMAIL PROTECTED]
Subject: Re: Select Statement


There are so many replies to this that look like contradictions to each other
that I'd like to look at this further on my system. I am not a SQL guru by
any means; I can't even seem to combine a couple of statements to get what I
need. Hopefully someone can help me (us) out here. I do an audit license
daily, so this should come back with good, or in this case, bad results.

When I try to combine these to get an output, I always get a statement error.
I want to find out which levels are having the problem. I think it's
ridiculous that this does not work for "ALL" client levels. It always did in
the past, and I can't understand why I now have to upgrade possibly 80-some
nodes to fix it. Why doesn't IBM fix their server code instead?

Could someone please combine these to get the necessary output? Client level,
node name, amount backed up; I think group by client level might work.

Select for client level:

select node_name,
       cast(client_version as char(1)) || '.' || cast(client_release as char(1)) || '.' ||
       cast(client_level as char(1)) || '.' || trim(cast(client_sublevel as char(2))) as "Level"
  from nodes
 order by 2,1


Select for amount backed up:

select summary.entity as "NODE NAME", nodes.domain_name as "DOMAIN",
       nodes.platform_name as "PLATFORM",
       cast((cast(sum(summary.bytes) as float) / 1024 / 1024) as decimal(10,2)) as MBYTES,
       count(*) as "CONNECTIONS"
  from summary, nodes
 where summary.entity = nodes.node_name
   and summary.activity = 'BACKUP'
   and start_time > current_timestamp - 1 day
 group by entity, domain_name, platform_name
 order by MBytes desc


Geoff Gill
TSM Administrator
NT Systems Support Engineer
SAIC
E-Mail:   [EMAIL PROTECTED]
Phone:  (858) 826-4062
Pager:   (877) 905-7154



> -Original Message-
> From: Michel Engels [mailto:[EMAIL PROTECTED]]
> Sent: Wednesday, December 11, 2002 6:22 AM
> To: [EMAIL PROTECTED]
> Subject: Re: Select Statement
>
>
> If I remember well, this problem appeared when I installed TSM
> clients 5.1.0. There was a maintenance level (I do not remember if it
> was 5.1.2 or 5.1.3) that solved the problem. Be sure you do an "audit
> license" regularly. I have not installed V5.1.5 yet, so I do not know
> if the problem reappears in that version.
>
> Hope this helps,
>
> Michel Engels
> Software Consultant
> Devoteam Belgium
>
>
>
>
>
>
> "Gill, Geoffrey L." <[EMAIL PROTECTED]> on 12/10/2002 11:16:02
> PM
>
> Please respond to "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>
>
> To:   [EMAIL PROTECTED]
> cc:(bcc: Michel Engels/BE/Devoteam)
>
> Subject:  Select Statement
>
>
>
> I got this somewhere and I know it used to work on my 4x server. Now
> that I'm on 5.1.5.2 I get a lot of nodes report back with 0 Megabytes
> but they obviously sent files.
>
> Anyone who can make it work since my SQL stills are pretty much non
> existant?
>
> /**/
> /* Specify a date on the run line as follows  */
> /* run q_backup 2001-09-30*/
> /**/
> select entity as node_name, date(start_time) as date, -
> cast(activity as varchar(10)) as activity, time(start_time) as start, -
> time(end_time) as end, cast(bytes/1024/1024 as decimal(6,0)) as megabytes, -
> cast(affected as decimal(7,0)) as files, successful from summary -
> where date(start_time)='$1' and activity='BACKUP' order by megabytes desc
>
>  NODE_NAME: NODEA
>   DATE: 2002-12-09
>   ACTIVITY: BACKUP
>  START: 20:00:51
>END: 20:48:07
> MEGABYTES: 0
>  FILES: 100
> SUCCESSFUL: YES
>
>
>
> This one does the same thing with some nodes. It also only reports
> about half the nodes even though it looks like it's supposed to be
> going back a full day.
>
> NODEA   NT_DOM   WinNT   0.00   1
> select summary.entity as "NODE NAME", nodes.domain_name as "DOMAIN",
> nodes.platform_name as "PLA

Re: Copypool management

2002-12-11 Thread Seay, Paul
The best way is to use two different nodes and two different management
classes/copygroups/pools.  This way your offsite node is the only one used
to manage that data.  We do some things very similar to this.
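
A minimal sketch of the server-side half of that idea (the domain, policy
set, class, and pool names are illustrative, not our actual definitions):

   define mgmtclass DBDOM DBSET DAILYDB
   define copygroup DBDOM DBSET DAILYDB type=archive destination=ONSITEPOOL retver=7
   define mgmtclass DBDOM DBSET OFFSITEDB
   define copygroup DBDOM DBSET OFFSITEDB type=archive destination=OFFSITEPOOL retver=30
   activate policyset DBDOM DBSET

The node that owns the offsite copies binds its backups to OFFSITEDB, so only
its data lands in the pool chain that goes out the door.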

Paul D. Seay, Jr.
Technical Specialist
Naptheon Inc.
757-688-8180


-Original Message-
From: Roger Ward [mailto:[EMAIL PROTECTED]]
Sent: Thursday, December 05, 2002 11:35 AM
To: [EMAIL PROTECTED]
Subject: Copypool management


Multiple database backups to copypools?

How do I complete multiple database backups each day but only copy the most
current to offsite_pool for DR??



I did receive one good response, but wanted to make sure that there are no
alternate ways of accomplishing this task.

Thanks, Roger



Re: Batch Command and security

2002-12-11 Thread Seay, Paul
You have to define them as an ADMIN with no grants.  With that, they will get
the query commands by default.  That may be more than you want to give them.
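
A sketch of what that looks like (the admin name and password are
placeholders):

   register admin actlogview secretpw

A newly registered admin has no privilege classes granted but can still run
queries, so the batch job could issue:

   dsmadmc -id=actlogview -password=secretpw "query actlog begind=today-1"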

Paul D. Seay, Jr.
Technical Specialist
Naptheon Inc.
757-688-8180


-Original Message-
From: Elio Vannelli [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, December 10, 2002 3:58 AM
To: [EMAIL PROTECTED]
Subject: Batch Command and security


Hi guys,

I have to build a batch command so a client can query the activity log. I
thought I would do it by running the "dsmadmc -id=name -password=password"
command. Since the username and password are passed in clear text, I think
it's better to have a specific user on the TSM server with rights only to
read the activity log. Which kind of user do I have to set up?

Thanks in advance

Elio



Re: LAN free backup for Novell

2002-12-11 Thread Seay, Paul
The reality is this: file system backups of small files are not good
candidates for SAN tape backup.  NetWare does not run any large-file
databases, so there is no SAN Storage Agent requirement.  The answer is
Gigabit Ethernet between your TSM server and the client; let the TSM server
stream the data to the tape drives.  You will get as good performance as, if
not better than, a SAN backup would ever provide.

Paul D. Seay, Jr.
Technical Specialist
Naptheon Inc.
757-688-8180


-Original Message-
From: Jin Bae Chi [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, December 11, 2002 8:34 AM
To: [EMAIL PROTECTED]
Subject: LAN free backup for Novell


Hi, experts,

I'm trying to implement the LAN-free backup feature with TSM 4.2 on AIX 4.3.3
for client nodes running Novell servers. These NetWare servers are
fibre-connected to an EMC box through SAN switches. TSM support told me there
is no Storage Agent support for NetWare platforms. Does anyone back up Novell
servers through Fibre Channel? What would you suggest for backing up to a
tape library that resides on the SAN? Thanks again for your help!!





Jin Bae Chi (Gus)
System Admin/Tivoli
Data Center
614-287-5270
614-287-5488 Fax
[EMAIL PROTECTED]



Re: K-cartridges besides J-cartridges

2002-12-11 Thread Seay, Paul
And if you look at the notches on a J and a K tape, you will notice that they
have different configurations.  So if you put a K tape in a J-only drive, it
will kick it right back out.  I thought that all B-to-E upgrade kits were K
capable; I did not think that kit came out until the E upgrade kit for K was
available.  If your upgrade kits were installed after July 2001, they just
about have to be K capable.

Paul D. Seay, Jr.
Technical Specialist
Naptheon Inc.
757-688-8180


-Original Message-
From: Cook, Dwight E [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, December 11, 2002 9:16 AM
To: [EMAIL PROTECTED]
Subject: Re: K-cartridges besides J-cartridges


Should see a "2X" sticker on the back of your 3590 drive if it is set to
deal with double length tapes. Also the internal tape spool is green (along
with one or two other internal parts, I seem to recall) so you might be able
to look in through the cooling holes in the top of the case to double check
(if you don't find a "2X" sticker on the drive)


Dwight



-Original Message-
From: Jim Sporer [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, December 10, 2002 4:25 PM
To: [EMAIL PROTECTED]
Subject: Re: K-cartridges besides J-cartridges


There is a difference between the J and K tapes, and the 3590 drives need to
support the extended-length tapes.  If the drives are new there is no
problem, but older 3590E drives may need an upgrade to support the
extended-length tapes.  Check with your CE.
Jim Sporer


At 11:12 PM 12/10/2002 +0100, you wrote:
>Hello,
>
>AIX 5.1, TSM 4.2.1.19, IBM 3494 library, 3590E-drives
>
>At the moment we use about 900 J-cartridges in our library. Is it
>possible to checkin K-cartridges and use them besides the J-cartridges.
>
>During the upgrade of our drives from 3590B to 3590E, we changed the status
>of our volumes to read-only because of the different tracks used on tape.
>
>As far as I know, the only difference between J and K is the length of the
>tape, so I guess there will be no problem.
>
>I'm wondering if someone is using both J and K cartridges and whether they
>encountered any problems.
>
>Thanx,
>
>Brian.
>



Re: Market Share of TSM vs Legato Networker

2002-12-11 Thread Seay, Paul
Do not read too much into market share statistics.  Suffice it to say, the
only two products that get any significant press now are TSM and Veritas
NetBackup.  Each has market share somewhere around 20 percent.  A large
number of Backup Exec licenses exist and are continually sold, but that is a
small-environment, Windows-only solution.  So you can see how it can claim
installation in lots of sites; every company probably has one copy of Backup
Exec.

The real story is: what are your business requirements?  Forget the claims.

However, one document was recently published comparing TSM and NetBackup.  It
wasn't a slam dunk of TSM over NetBackup, but it was pretty close.
Essentially, the report recognized the industry-leading design (10 years old)
that TSM has for large environments.  You simply cannot perform full backups
anymore.  TSM's progressive backup and media management philosophy is the
answer for the foreseeable future.

Paul D. Seay, Jr.
Technical Specialist
Naptheon Inc.
757-688-8180


-Original Message-
From: Rupp Thomas (Illwerke) [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, December 11, 2002 12:23 PM
To: [EMAIL PROTECTED]
Subject: Market Share of TSM vs Legato Networker


Hi TSM-ers,

is there a place where I can find information about the market share of TSM
and Legato's NetWorker in real business environments (FORTUNE 500 etc.)?
Tivoli claims that 80% of the Fortune 500 companies use TSM.

Kind regards
Thomas Rupp
Vorarlberger Illwerke AG










Re: Management Class and TDP for R/3

2002-12-11 Thread Seay, Paul
We have set up something almost identical to what Tom has done.  This is an
excellent design based on our experience as well.  Our retentions are
different, but we use the same concepts.

Paul D. Seay, Jr.
Technical Specialist
Naptheon Inc.
757-688-8180


-Original Message-
From: Kauffman, Tom [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, December 11, 2002 4:00 PM
To: [EMAIL PROTECTED]
Subject: Re: Management Class and TDP for R/3


Brian --

TDP/R3 does not use backup classes; it uses archive classes.

I currently have four SAP systems running, and have 7 management classes
defined.

First and foremost -- off-line redo log copies. I have two management classes
(PRDLOG1 and PRDLOG2) so I can have two separate storage pool chains, disk
and tape. There's no point in doing two sets of logs if they can end up on
common media somewhere. These two classes are used for all SAP instance redo
logs and have a 21-day retention (redo log retention should match off-line
backup retention).

PRDLOG1 ties to storage pool PRDLOG1 (disk) with next pool of PRDLOG1-LT
(lto tape). PRDLOG1-LT gets copied daily to PRDLOG1-LT-COPY for off-site
movement.

PRDLOG2 ties to storage pool PRDLOG2 (disk) with next pool of PRDLOG2-LT
(lto tape). PRDLOG2-LT gets copied daily to ARCH-LT-COPY for off-site
movement. (ARCH-LT-COPY also gets a copy of ARCH-LT, which is the lto pool
for all non-SAP oracle archives.)

Then my SAP data archive. The production instance uses management class
PRDSAP-ONLINE (online backups, 8 day retention) or PRDSAP-OFFLINE (21 day
retention -- run once per week). Both tie to tape storage pool PRDSAP-LT,
which copies daily to PRDSAP-LT-COPY for off-site rotation.

I do not run reclaims on any of the PRDSAP tapes; I just let them die a
natural death after 8 or 21 days. I DO run reclaims on the redo log tape
copies, and tend to have 4 to 6 lto tapes tied up in either class off-site.

My technical sandbox uses LOCALARCH as a management class. This ties to a
LOCALARCH disk pool with LOCAL-LT as next pool. 21 day retention, no
off-site copy (sandbox and test oracle databases, other archives of systems
that can be rebuilt from off-site copies of production environments).

My test/QA region uses TSTSAP as a management class, pointing to an LTO pool
TSTSAP-LT; 21 day retention, no off-site copy. We rebuild from the PRD
environment about every six weeks by cloning.

My DEV environment uses DEVSAP-LT as a management class, pointing to a disk
pool called ARCHIVEPOOL with next pool of ARCH-LT, 21 day retention (and
yes, it copies to ARCH-LT-COPY nightly).

In addition, I run weekly archives of the non-database SAP filesystems
(/sapmnt/SID, /oracle/SID, /usr/sap/trans, and a few local filesystems) using
the PRDLOG1 and PRDLOG2 management classes. These are done to speed up
point-in-time recovery at D/R by laying down the most recent Sunday archive
and then restoring to 'now'.

SAP recommends you keep 3 generations of your off-line backup (archive); 21
days works if you do this weekly. If you run off-line monthly you'll want
90+ days and need to adjust redo log retention to match.
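
For illustration, a minimal sketch of how one of these archive classes could
be defined on the server (the domain and policy set names are assumptions,
not Tom's actual setup):

   define mgmtclass SAPDOM SAPSET PRDLOG1
   define copygroup SAPDOM SAPSET PRDLOG1 type=archive destination=PRDLOG1 retver=21
   validate policyset SAPDOM SAPSET
   activate policyset SAPDOM SAPSET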

Is there anything else I can add to further confuse the issue? 

The mish-mosh of pools I use has been set up to optimise D/R recovery and
the off-site storage pool backup process. They work for us; we've had a
number of successful D/R drills over the years.

Tom Kauffman
NIBCO, Inc

> -Original Message-
> From: brian welsh [mailto:[EMAIL PROTECTED]]
> Sent: Wednesday, December 11, 2002 3:06 PM
> To: [EMAIL PROTECTED]
> Subject: Management Class and TDP for R/3
>
>
> Hello,
>
> Server AIX 5.1, TSM 4.2.2.8. Client AIX 4.3, TSM 4.2.1.25, and TDP for
> R/3 3.2.0.11 (Oracle)
>
> Question about TDP.
>
> We wanted to use the Management Class on the TSM-server like this:
> Versions Data Exists    NOLIMIT
> Versions Data Deleted   3
> Retain Extra Versions   21
> Retain Only Version     30
>
> In the Tivoli Data Protection for R/3 Installation & User's Guide for
> Oracle it is recommended to define 4 different management classes. I was
> wondering how other people have defined their management classes. Support
> told me to use 4 different management classes because it's in the manual,
> so they must mean something by it (maybe for the future), but what is the
> advantage of 4 different management classes? We have 8 TDP clients. The
> manual says every client gets its own 4 different management classes. That
> is not easy to manage.
>
> Besides, you have to define an archive copy group. We don't want to use
> versioning in TDP, but use expiration via the TSM client/server. For how
> many days do I have to define the archive copy group? I think it has to be
> 21 days. I guess that when using versioning in TDP you have to set these
> parameters on . Am I right?
>
> Is there somebody who want to share the definitons used for TDP for
> Management Classes and Copy Groups?
>
> Thanx,
>
> Brian.
>
>
>

Re: TSM SERVER 5.1.5.2 Performance Issue

2002-12-11 Thread Seay, Paul
PQ68076 is MVS only, related to GETMAIN activity; it does not apply to AIX,
and there is no equivalent issue on AIX.  So the problem is something else.

Paul D. Seay, Jr.
Technical Specialist
Naptheon Inc.
757-688-8180


-Original Message-
From: Dan Foster [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, December 10, 2002 1:16 PM
To: [EMAIL PROTECTED]
Subject: Re: TSM SERVER 5.1.5.2


Hot Diggety! David Longo was rumored to have written:
> 5.1.5.2 has been out about a month or so now.
> Also that something else isn't broke?

There was a major performance bug fixed in 5.1.5.3; we had a really nasty
situation where a west coast client was sending data at only 2 Mbps to the
east coast server, while the raw network between the two is up to 32 Mbps, as
evidenced by ttcp tests between the west coast client and other east coast
servers on the same subnet.

Turns out we needed an efix for the pre-release version of the fixed netinet
driver in AIX 5.1 (to be bos.net.tcp.client 5.1.0.37, released next week) to
fix the OS perf bug *AND* also to upgrade from 5.1.5.2 to 5.1.5.3.

Doing so improved the ITSM network perf from 2 Mbps to 8 Mbps, and raw OS
network perf to 32 Mbps. Merely fixing the OS perf bug wasn't sufficient for
us. (We tested and retested between every single step to rule out multiple
variables.)

From the 5.1.5.3 README:

PQ68076 PERFORMANCE DEGRADATION AFTER UPGRADE TO 5.1.5.0, 5.1.5.1 OR

-Dan



Re: Client backup verification

2002-12-10 Thread Seay, Paul
Dwight, there is only one problem with this: SELECTIVE does not update these
fields in the database.  There was a long discussion about this; the sum of
it all was that these fields are there to support incrbydate.  However, I do
the same thing you do, because we do not use selective backups.

Paul D. Seay, Jr.
Technical Specialist
Naptheon Inc.
757-688-8180


-Original Message-
From: Cook, Dwight E [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, December 10, 2002 2:04 PM
To: [EMAIL PROTECTED]
Subject: Re: Client backup verification


A lot of people will say a lot of different things in response to your
question BUT...

If you schedule a nightly incremental backup (and if you retain event status
long enough), you can do things like

q event * * begind=-7 endd=today

from a "dsmadmc" admin session and see the results (as reported by the
clients) for all the scheduled events over the last week.  A "missed" is a
big problem, but often a "failed" can be OK; it depends on what exactly
failed (a file might have changed during backup, or been deleted between when
TSM built its list of files to back up and when it actually got around to
backing up that specific file).

BUT I like:

select node_name, filespace_name as "Filespace Name                ",
       cast(backup_end as varchar(10)) as "Date"
  from adsm.filespaces
 where cast((current_timestamp - backup_end) day as decimal(18,0)) > 3

The above query, if issued from a "dsmadmc" session, will show you each file
space for each client that hasn't backed up in the last 3 days (you may
adjust the number of days via the trailing ...>3).

Now what does this really tell you?
It points out file spaces/systems that have been removed from a client
(TSM won't EVER automatically purge that data; you must do it yourself with a
~del file ...~ command). It points out a lot of things that might not be
pointed out in other places... like when, for some odd reason, all other
things report successes YET there is still some sort of failure???

just my 2 cents worth...

Dwight E. Cook
Software Application Engineer III
Science Applications International Corporation
509 S. Boston Ave.  Suite 220
Tulsa, Oklahoma 74103-4606
Office (918) 732-7109



-Original Message-
From: Spearman, Wayne [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, December 10, 2002 12:51 PM
To: [EMAIL PROTECTED]
Subject: Client backup verification


Hi all,
New to TSM. I would like to know how others are verifying that client backups
are ending normally. Running on AIX. Please share macros, commands, and/or
scripts.

Thanks in advance

Wayne






Re: DB Backup in v5 TSM (5.1.1.6, 5.1.5.2, 5.1.5.4)

2002-12-10 Thread Seay, Paul
My database backs up at about 22,000,000 pages an hour.  I have captured the
messages from the log, and they are consistently the same.  I do not know
what is causing this.  There were some performance problems with mirrored
databases, but I thought those were fixed at your level.

Paul D. Seay, Jr.
Technical Specialist
Naptheon Inc.
757-688-8180


-Original Message-
From: Andy Carlson [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, December 10, 2002 2:43 PM
To: [EMAIL PROTECTED]
Subject: DB Backup in v5 TSM (5.1.1.6, 5.1.5.2, 5.1.5.4)


I have a strange problem that I have been working with IBM on, but was told
it's "working as designed".  I have a new TSM server, which started life as
5.1.1.6, was upgraded to 5.1.5.2 at IBM's request, then to 5.1.5.4 today
after I saw it had been released.  I back up the database, which is only
124,000 pages since it is a new server.  It backs up the first 90,000 pages
in a couple of minutes.  The last 34,000 pages take over an hour.  While this
is happening, there is very little CPU and I/O activity.  I have tried the
database on SCSI and on FAStT700, the backup file on each, etc.  I can make
it go away temporarily if I unload and reload, but once this database gets
bigger, that will not be an option.  Any ideas?  Thanks.


Andy Carlson|\  _,,,---,,_
Senior Technical Specialist   ZZZzz /,`.-'`'-.  ;-;;,_
BJC Health Care|,4-  ) )-,_. ,\ (  `'-'
St. Louis, Missouri   '---''(_/--'  `-'\_)
Cat Pics: http://andyc.dyndns.org/animal.html



Re: Select Statement

2002-12-10 Thread Seay, Paul
As it turns out, the clients also have to be updated to correct a problem
where the summary table statistics are missing.  I talked to a TSM Level 2
person about this just last week.  I do not know what levels you have to be
at to get the problem corrected, but I thought 5.1.5 was good enough.

Paul D. Seay, Jr.
Technical Specialist
Naptheon Inc.
757-688-8180


-Original Message-
From: Gill, Geoffrey L. [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, December 10, 2002 5:16 PM
To: [EMAIL PROTECTED]
Subject: Select Statement


I got this somewhere and I know it used to work on my 4.x server. Now that
I'm on 5.1.5.2, I get a lot of nodes reporting back 0 megabytes even though
they obviously sent files.

Can anyone make it work, since my SQL skills are pretty much nonexistent?

/**/
/* Specify a date on the run line as follows  */
/* run q_backup 2001-09-30*/
/**/
select entity as node_name, date(start_time) as date, -
cast(activity as varchar(10)) as activity, time(start_time) as start, -
time(end_time) as end, cast(bytes/1024/1024 as decimal(6,0)) as megabytes, -
cast(affected as decimal(7,0)) as files, successful from summary -
where date(start_time)='$1' and activity='BACKUP' order by megabytes desc

 NODE_NAME: NODEA
  DATE: 2002-12-09
  ACTIVITY: BACKUP
 START: 20:00:51
   END: 20:48:07
MEGABYTES: 0
 FILES: 100
SUCCESSFUL: YES



This one does the same thing with some nodes. It also only reports about
half the nodes even though it looks like it's supposed to be going back a
full day.

NODEA   NT_DOM   WinNT   0.00   1
select summary.entity as "NODE NAME", nodes.domain_name as "DOMAIN",
       nodes.platform_name as "PLATFORM",
       cast((cast(sum(summary.bytes) as float) / 1024 / 1024) as decimal(10,2)) as MBYTES,
       count(*) as "CONNECTIONS"
  from summary, nodes
 where summary.entity = nodes.node_name
   and summary.activity = 'BACKUP'
   and start_time > current_timestamp - 1 day
 group by entity, domain_name, platform_name
 order by MBytes desc

A better one would also work.
Geoff Gill
TSM Administrator
NT Systems Support Engineer
SAIC
E-Mail: [EMAIL PROTECTED]
Phone:  (858) 826-4062
Pager:   (877) 905-7154



Re: Help on TDP for SAP R/3 Oracle

2002-12-10 Thread Seay, Paul
Many do RL (run-length) compression in the TDP.  Some do client compression.
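
A sketch of the TDP-side setting (it goes in the TDP for R/3 profile,
init<SID>.utl; the <SID> is a placeholder for the Oracle system ID):

   RL_COMPRESSION YES

With that set, the TDP does its own run-length compression of the data before
it goes to the TSM API, independent of the node's COMPRESSION setting.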

Paul D. Seay, Jr.
Technical Specialist
Naptheon Inc.
757-688-8180


-Original Message-
From: brian welsh [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, December 10, 2002 5:19 PM
To: [EMAIL PROTECTED]
Subject: Help on TDP for SAP R/3 Oracle


Hello,

Server AIX 5.1, TSM 4.2.2.8. Client AIX 4.3, TSM 4.2.1.35, and TDP 3.2.0.11

Question about compression.

We forced compression for this client from the server side, by setting the
Compression option in the client node definition. There is no compression
option in the client option file.

We thought that this would be enough to compress the TDP backup and archive
data as well. We checked, and there is no compression. Tivoli Support first
said that TDP always compresses the files, that it is a feature of TDP. Later
they said we have to activate this in TDP on the client.

I'm wondering how other sites set compression in this situation. On the
server, in the TDP, on the client, ...

Thanx,

Brian.




Re: TDP for SQL Server Transaction Log Restores

2002-12-04 Thread Seay, Paul
OK, I see what you mean.

I guess I misunderstood what the product is probably doing.  The company that
makes this auditing product is LUMIGENT, www.lumigent.com.  The product is
Entegra.  I have not talked to their developers yet, but they seem to be
interested in retrieving the objects from TSM through the API.

We have just implemented TDPSQLC for logs and it works great.  This wrinkle,
though, is driving us back to using intermediate log files as you described,
and backing them up.

Would it be feasible for a tool like Entegra to read the data back through
the API, simulating the TDP for SQL Server?
Paul D. Seay, Jr.
Technical Specialist
Naptheon Inc.
757-688-8180


-Original Message-
From: Del Hoobler [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, December 04, 2002 10:49 AM
To: [EMAIL PROTECTED]
Subject: Re: TDP for SQL Server Transaction Log Restores


Paul,

I am not sure what you mean exactly.
The SQL "logging" process does not have
separate "log" segment files.
The "log" that TDP for SQL backs up is just
a "stream" of data that only SQL knows what to do with.
It is not like a log file for Exchange or Domino.

There is no way to just restore a "log" to a
"file" with DP for SQL. You can only restore it
to a live SQL server. The SQL server controls
where that stream of data is written to.

You might want to take a closer look at the
requirement to find out how they are asking
that this be done and in what fashion.

Thanks,

Del



Del Hoobler
IBM Corporation
[EMAIL PROTECTED]

- Never cut what can be untied.
- Commit yourself to constant improvement.



> We have a requirement to restore the transaction logs back to their
> disk location, either on the original or an alternate server, or to be
> able to post-process the transaction logs as disk files some kind of way
> before they get deleted after a TDP for SQL Server backup.
>
> My DBAs tell me SQL Server deletes the transaction logs immediately
> after a backup, which presents a problem for a third-party utility we
> want to do auditing with.


