Re: tape prob

2004-11-19 Thread John Naylor
Hashim, Shukrie asked how to move data from volume 002081, which had
multiple read errors, without impact on the database.

Well, you can try cleaning the drive, and you can try different drives with
move data, but if there are genuinely bad sectors on the tape then you will
only be able to get the remaining good data off the tape.

To do that, try:

AUDIT VOLUME  002081 FIX=YES

then

MOVE DATA  002081

If the volume is readable at all, the audit should mark the unreadable
files as damaged, and the move data should skip the damaged files and move
the good data.
Then, to get the rest of the data, follow the advice you have already been
given.
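To see beforehand which files the audit has flagged, here is a hedged sketch (the CONTENTS table and its DAMAGED column are as I recall them from TSM 5.x servers; verify the column names against your server level):

```
select node_name, filespace_name, file_name
  from contents
  where volume_name='002081' and damaged='YES'
```

Anything listed there is what MOVE DATA will skip; the rest should come off cleanly.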


**
The information in this E-Mail is confidential and may be legally
privileged. It may not represent the views of Scottish and Southern
Energy Group.
It is intended solely for the addressees. Access to this E-Mail by
anyone else is unauthorised. If you are not the intended recipient,
any disclosure, copying, distribution or any action taken or omitted
to be taken in reliance on it, is prohibited and may be unlawful.
Any unauthorised recipient should advise the sender immediately of
the error in transmission. Unless specifically stated otherwise, this email (or 
any attachments to it) is not an offer capable of acceptance or acceptance of 
an offer and it does not form part of a binding contractual agreement.


Scottish Hydro-Electric, Southern Electric, SWALEC and S+S
are trading names of the Scottish and Southern Energy Group.
**


recovery log utilization is too high

2004-11-19 Thread Roland Scharlau
 Hi all,

For three days now I have been getting the following message:

Server name: DESAETSM1, platform: Windows2000, version: 5.2.3.4,
date/time: 11/19/2004 11:30:54

Issues and Recommendations
--

The max recovery log utilization is too high. Condition (100.0 > 90)
Recommendation: Ensure that TSM database backups are working properly.

When I run extend log 1000 on the command line I get:
ANR2447E EXTEND LOG: Insufficient space to extend recovery log by
requested amount.

Many thanks for any help.

Roland Scharlau


 Mail is virus-checked by Symantec Mail Security 

Re: recovery log utilization is too high

2004-11-19 Thread goc
hi,
I think you need to add another recovery log volume, or issue q log to see
how large it is and then issue extend log with a smaller size.
goran


Domino Transaction files

2004-11-19 Thread Bill Dourado
Hi,

I have 59 transaction files in the Transaction Log directory that have
creation dates ranging from 04-Nov-2004 to 17-Nov-2004.

That's an unusually high number.

Transaction Log Archiving takes place every 4 hours, without any obvious
error.

I am aware that the Domino Administrator restored a couple of mail files
and activated them, applying logs, yesterday and the day before.

What happens to the restored transaction files after activation, once the
logs have been applied to the mail files?

Are these 59 still around due to some kind of error?

Can I safely delete them?

Domino Server 6.5 on Windows 2003
TDP for Domino 5.1.5.01
TSM Backup/Archive client 5.2.3.1

TSM server 5.2.2.0 for Windows.

 T.I.A

Bill


Re: recovery log utilization is too high

2004-11-19 Thread Roland Scharlau
the result of q log:

Available Assigned   Maximum   Maximum    Page    Total   Used   Pct  Max.
    Space Capacity Extension Reduction    Size   Usable  Pages  Util   Pct
     (MB)     (MB)      (MB)      (MB) (bytes)    Pages               Util
--------- -------- --------- --------- ------- -------- ------ ----- -----
    2,000    1,500       500     1,492   4,096  383,488    339   0.1 100.1

I made another log volume of 500 MB and hope this will work.

thanks goran!
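For the archives: the q log output above shows a Maximum Extension of only 500 MB, which is why extend log 1000 returned ANR2447E. A sketch of the usual sequence (the volume path is a placeholder; the FORMATSIZE parameter on DEFINE LOGVOLUME is as I recall it on TSM 5.x Windows servers, so verify before use):

```
define logvolume d:\tsmdata\log2.dsm formatsize=500
extend log 500
```

After the extend completes, q log should show the new assigned capacity, and utilization should drop once the next database backup frees the log.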


Re: tape prob

2004-11-19 Thread Richard Sims
On Nov 18, 2004, at 9:38 PM, Hashim, Shukrie BSP-ISM/116 wrote:
...Obviously there's something wrong with this tape ... with the read
error (495) can anyone tell me a way .. for me to copy or move the
data from this tape ... to another .. tape ...
Rather than looking at an error count value from Query Volume output, I'd
recommend looking into the ANR message(s) in the Activity Log which point
out the specific problems with the tape.  Also check your OS error log
for its indications of problems with the volume.
If it's a media error, the most you can do is try Move Data a few times,
on different, clean drives: many of us have done that with difficult
tapes, and gotten a good amount of data off before embarking upon the
long-running Restore Volume.
Your count of read errors on that 3590(?) tape seems unusually high.
Make sure that your 3494 has not run out of cleaning tapes.  It's a
good idea to periodically inspect the inside of your 3494 to look for
dust/dirt accumulation (as I found one time as BG decided it would
be a good idea to sand new drywall in the computer room).
Richard Sims


Re: TDP for Oracle

2004-11-19 Thread Loon, E.J. van - SPLXM
Hi Rainer!
TDP 5.2 supports Oracle 9.2.0 and higher and I know 10g is supported as of
2nd quarter 2004. But I don't see any TDP (for Oracle) clients for Linux
platforms.
The only Linux Data Protection client I have seen is scheduled for next year
and it will only support Linux390 with Oracle...
Kindest regards,
Eric van Loon
KLM Royal Dutch Airlines


-Original Message-
From: Rainer Holzinger [mailto:[EMAIL PROTECTED]
Sent: Thursday, November 18, 2004 17:39
To: [EMAIL PROTECTED]
Subject: TDP for Oracle







Hi all,

At my office, Oracle DB Server Enterprise Edition 10g will be installed,
running under SuSE Linux Enterprise Server v.8.
Is there a TDP for Oracle available to back up 'Oracle DB Server Enterprise
Edition 10g'?
At the moment there are a few TDPO (version 5.2.0.0) installations with
Oracle 8i and 9i under HP-UX 11i. From the README files I haven't seen
support for 'Oracle DB Server Enterprise Edition 10g'. Is there a new TDPO
planned/announced/available to support this Oracle version?

Thank you,
Rainer

-
Mit freundlichen Grüßen / Best regards / Terveisin

Rainer Holzinger
Solution Specialist
Backup / Recovery & Storage Management
UPM-Kymmene Corporation
IT and e-Business
Georg-Haindl-Str. 5, D-86153 Augsburg
Tel. +49 821 3109 590
Fax. +49 821 3109 115
Mobile +49 170 4037 616
[EMAIL PROTECTED]




**
For information, services and offers, please visit our web site: 
http://www.klm.com. This e-mail and any attachment may contain confidential and 
privileged material intended for the addressee only. If you are not the 
addressee, you are notified that no part of the e-mail or any attachment may be 
disclosed, copied or distributed, and that any other action related to this 
e-mail or attachment is strictly prohibited, and may be unlawful. If you have 
received this e-mail by error, please notify the sender immediately by return 
e-mail, and delete this message. Koninklijke Luchtvaart Maatschappij NV (KLM), 
its subsidiaries and/or its employees shall not be liable for the incorrect or 
incomplete transmission of this e-mail or any attachments, nor responsible for 
any delay in receipt.
**


unsubscribe

2004-11-19 Thread Jason Couto
***

This e-mail may be privileged and/or confidential, and the sender does not 
waive any related rights and obligations. Any distribution, use or copying of 
this e-mail or the information it contains by other than an intended recipient 
is unauthorized. If you received this e-mail in error, please advise me (by 
return e-mail or otherwise) immediately.



Re: TDP for Oracle

2004-11-19 Thread Rainer Holzinger





Hi Eric,

thank you for your response about my question.
I have a CD here: 'ITSM for Databases - Data Protection for Oracle
Version 5.2'.
IBM part number is LCD4-4086-04.
There is TDPO for AIX, HPUX, Windows and Linux86 on it.
I have attached the TDPO for Linux86 README file for you.


(See attached file: README.TDPO)


best regards, Rainer



   




README.TDPO
Description: Binary data


Re: TDP for Oracle

2004-11-19 Thread Jurjen Oskam
On Fri, Nov 19, 2004 at 02:19:15PM +0100, Loon, E.J. van - SPLXM wrote:

 2nd quarter 2004. But I don't see any TDP (for Oracle) clients for Linux
 platforms.

It does exist; we downloaded ours from the Passport Advantage site. It's
in the same archive as the other platforms.

--
Jurjen Oskam


Re: linux restore problem

2004-11-19 Thread Otto Schakenbos
Richard, Mark, thanks for your answers.
I tried what you both suggested:
restore /daten/ -subdir=yes and restore '/daten/' -subdir=yes
all with the same result (filespace not found).
What I also tried is copying the data (the original server is still up)
over NFS to the new server and backing it up from the new server using a
new nodename.
If I try to restore now, it works just fine.
I inspected the original directory and couldn't really find anything
weird. The only thing I can think of is that there is a directory
(/daten/public) which is an NFS mount to another server, but this has
never been a problem in the past.
Mark, you are right that there are more than 95 dirs, just to let you know.
regards
Otto Schakenbos
System Administrator
TEL: +49-7151/502 8468
FAX: +49-7151/502 8489
MOBILE: +49-172/7102715
E-MAIL: [EMAIL PROTECTED]
TFX IT-Service AG
Fronackerstrasse 33-35
71332 Waiblingen
GERMANY

Mark D. Rodriguez wrote:
Otto,
I am going to assume you are trying to restore all files in all
subdirectories from /daten.  Then all you need do is drop the * from
your filespec, like so
dsmc restore /daten/  -subdir=yes 
*BeginSpeculation:*
I believe the reason you are having a problem is that the shell is
expanding /daten/* and you are winding up with a parameter list that
has too many characters.  Therefore the shell truncates the command,
causing an error on the last filespec in the list, since it is probably
truncated in the middle someplace.   It restores 95 objects with no data,
which means they are probably subdirectories.  Also, I would guess that you
have more than 95 subdirectories under /daten.   When it tries to
restore the 96th subdirectory it fails, since that was the one that was
truncated.
*EndSpeculation*
Without being on the system to do some more investigation the comments
above are just speculation, however the proper syntax to accomplish what
I assume you want is listed above and should solve your problem.
Good Luck and let us know how it goes.
--
Regards,
Mark D. Rodriguez
President MDR Consulting, Inc.
===
MDR Consulting
The very best in Technical Training and Consulting.
IBM Advanced Business Partner
SAIR Linux and GNU Authorized Center for Education
IBM Certified Advanced Technical Expert, CATE
AIX Support and Performance Tuning, RS6000 SP, TSM/ADSM and Linux
Red Hat Certified Engineer, RHCE
===

Otto Schakenbos wrote:
I have the following problem with sles 9.0 and restoring files.
situation: We have a linux fileserver (redhat 8.0) that uses tsm client
5.1.5.0 to backup.
Now I installed a new file server (sles 9.0) with the latest client
(5.2.3.0).
I'm trying to restore data to a certain partition (daten) on the old
system this partition was 130GB in size and on the new system it is
430GB in size.
On both systems, /daten is mounted on its own partition.
When I do a dsmc restore /daten/*  -subdir=yes   I get the following
error:
Restore function invoked.
** Unsuccessful **
ANS4000E Error processing '': file space does not exist
ANS1247I Waiting for files from the server...
then 95 directories are restored
and then
Restore Processing Interrupted!! 

Total number of objects restored:95
Total number of objects failed:   1
Total number of bytes transferred:0  B
Data transfer time:0,00 sec
Network data transfer rate:0,00 KB/sec
Aggregate data transfer rate:  0,00 KB/sec
Elapsed processing time:   00:00:04
ANS4000E Error processing '': file space does not exist
strange thing is that when I specify my restore command more precisely,
like this:
dsmc restore /daten/home/user1/dir1/* -subdir=yes
then it works just fine; if I specify one directory less it gives me
the same error.
Things I tried (with the same result):
- make the partition smaller (same size as the original one)
- restore to /tmp
- downgrade the client to the same level as the original server
What did work, no problem, was restoring on another Red Hat 8.0 box.
Is this a kernel 2.6 thing? (I know it is not supported yet) Or is
there something else I'm missing here?
regards
--
Otto Schakenbos
System Administrator
TEL: +49-7151/502 8468
FAX: +49-7151/502 8489
MOBILE: +49-172/7102715
E-MAIL: [EMAIL PROTECTED]
TFX IT-Service AG
Fronackerstrasse 33-35
71332 Waiblingen
GERMANY
--
Regards,
Mark D. Rodriguez
President MDR Consulting, Inc.
===
MDR Consulting
The very best in Technical Training and Consulting.
IBM Advanced Business Partner
SAIR Linux and GNU Authorized Center for Education
IBM Certified Advanced Technical Expert, CATE
AIX Support and Performance Tuning, RS6000 SP, TSM/ADSM and Linux
Red Hat Certified Engineer, RHCE
===


Re: linux restore problem

2004-11-19 Thread Thomas Rupp
If you do a dsmc query filespace on the linux client what results do
you get?

Thomas Rupp


Re: select node summary

2004-11-19 Thread Andrew Raibeck
Off-hand, I am not sure why you don't see them. Try searching the table
with less restrictive criteria and see what entries exist. For example,

   select * from summary where entity='BLAH'

where BLAH is the name of one of your TDP nodes.

Regards,

Andy

Andy Raibeck
IBM Software Group
Tivoli Storage Manager Client Development
Internal Notes e-mail: Andrew Raibeck/Tucson/[EMAIL PROTECTED]
Internet e-mail: [EMAIL PROTECTED]

The only dumb question is the one that goes unasked.
The command line is your friend.
Good enough is the enemy of excellence.



Joni Moyer [EMAIL PROTECTED]
Sent by: ADSM: Dist Stor Manager [EMAIL PROTECTED]
11/18/2004 14:00
Please respond to
ADSM: Dist Stor Manager


To
[EMAIL PROTECTED]
cc

Subject
Re: select node summary






Thanks Andy!  It worked like a charm!  I do not mean to sound ignorant,
but
why do I not see the TDP backups for these servers within this summary?
They are TDP for domino backups, but I guess I just assumed they would be
included within this table?  Thanks!


Joni Moyer
Highmark
Storage Systems
Work:(717)302-6603
Fax:(717)302-5974
[EMAIL PROTECTED]




From: Andrew Raibeck
Sent by: ADSM: Dist Stor Manager
Date: 11/18/2004 01:50 PM
To: [EMAIL PROTECTED]
Subject: Re: select node summary
Please respond to: ADSM: Dist Stor Manager

I think you are looking more for something like this:

select entity as node_name,
  date(start_time) as date,
  cast(activity as varchar(10)) as activity,
  time(start_time) as start,
  time(end_time) as end,
  cast((bytes/100) as decimal(6,0)) as megabytes,
  cast(affected as decimal(7,0)) as files successful
   from summary
   where date(start_time)='2004-11-12' and
  activity='BACKUP' and
  entity like 'LN%'
   order by entity

Regards,

Andy

Andy Raibeck
IBM Software Group
Tivoli Storage Manager Client Development
Internal Notes e-mail: Andrew Raibeck/Tucson/[EMAIL PROTECTED]
Internet e-mail: [EMAIL PROTECTED]

The only dumb question is the one that goes unasked.
The command line is your friend.
Good enough is the enemy of excellence.

ADSM: Dist Stor Manager [EMAIL PROTECTED] wrote on 11/18/2004
10:52:40:

 Hello All!

 I believe that I am getting the syntax of a select statement wrong for
 summarizing the backup information summary for all nodes that begin with
 ln* from 11/12/2004 - 11/13/2004.  Here is what I have so far:

 select entity as node_name, date(start_date) as date, cast(activity as
 varchar(10)) as activity, time(start_time) as start, time(end_time) as
end,
 cast(bytes/100) as decimal(6,0)) as megabytes, cast(affected as
 decimal(7,0)) as files successful from summary where date=11/12/2004 and
 activity='BACKUP' and node_name='LN*' order by node_name

 Would anyone happen to know what I am doing wrong?  I just can't seem to
 get this query to work.  Thank you in advance!

 
 Joni Moyer
 Highmark
 Storage Systems
 Work:(717)302-6603
 Fax:(717)302-5974
 [EMAIL PROTECTED]
 


select from actlog VS query actlog performance

2004-11-19 Thread Warren, Matthew (Retail)
Hello TSM'ers



I'm doing some scripting that uses actlog queries fairly heavily, and I
have noticed that

Select * from actlog where cast(date_time as date)=current_date and
process=1234

Is a lot slower than

Q actlog begint=-08:00 se=1234 (say, its 8am in the morning...)


Although you need to be careful that you are actually getting what you want
with the latter version.


Is TSM doing anything internally to generate a SQL statement that works
quicker than mine but gives the same/similar result? - I am assuming
that internally TSM takes q actlog (and other q commands) and generates
a SQL statement it then processes against the TSM DB, formatting the
result to generate the query output as non-tables.


Thanks,

Matt.





___ Disclaimer Notice __
This message and any attachments are confidential and should only be read by 
those to whom they are addressed. If you are not the intended recipient, please 
contact us, delete the message from your computer and destroy any copies. Any 
distribution or copying without our prior permission is prohibited.

Internet communications are not always secure and therefore Powergen Retail 
Limited does not accept legal responsibility for this message. The recipient is 
responsible for verifying its authenticity before acting on the contents. Any 
views or opinions presented are solely those of the author and do not 
necessarily represent those of Powergen Retail Limited. 

Registered addresses:

Powergen Retail Limited, Westwood Way, Westwood Business Park, Coventry, CV4 
8LG.
Registered in England and Wales No: 3407430

Telephone +44 (0) 2476 42 4000
Fax +44 (0) 2476 42 5432


NDMP Backup Experiences

2004-11-19 Thread Curtis Stewart
Everyone,

I'm about to implement NDMP backups of our NetApp filer and am looking for
tips, gotchas etc...

Currently, we backup the filer using CIFS and a mount point on a Windows
2000 server. The performance of this method is painfully slow. It takes
about 22 hours to complete an incremental backup of the 3.5 million file
system over a 100MB full duplex network. We're hoping the backup and
restore performance will improve significantly with NDMP. I've read
through the Admin Guide a few times and our setup will indeed support NDMP
operations.

I plan to use the TSM server to run the library, and share it between the
NAS and regular TSM operations. The library is a STK L700 with LTO Gen2
Fibre Channel drives (8 of them). Overall the setup directions seem fairly
straightforward. Here are my questions.

1. Am I being overly optimistic? It doesn't seem like it will be that big
a deal.
2. Can I expect a significant backup and restore performance improvement
using NDMP?

Thanks in advance,

Curtis

[EMAIL PROTECTED]


Re: select from actlog VS query actlog performance

2004-11-19 Thread P Baines
Rather the other way round. The SQL is being converted to a native
database call. I would presume most query commands would be quicker than
their equivalent SQL queries.

For tuning SQL queries you can look at the indexing of the columns in a
table:

select tabname, colname, colno, index_keyseq, index_order from columns
where tabname='ACTLOG'

TABNAME  COLNAME     COLNO  INDEX_KEYSEQ  INDEX_ORDER
-------  ----------  -----  ------------  -----------
ACTLOG   DATE_TIME       1             1  A
ACTLOG   MSGNO           2
ACTLOG   SEVERITY        3
ACTLOG   MESSAGE         4
ACTLOG   ORIGINATOR      5
ACTLOG   NODENAME        6
ACTLOG   OWNERNAME       7
ACTLOG   SCHEDNAME       8
ACTLOG   DOMAINNAME      9
ACTLOG   SESSID         10
ACTLOG   SERVERNAME     11

Here you can see it is only indexed on DATE_TIME. Other tables have more
indexed columns. Running functions on WHERE-clause columns may well
cause the query to do a full table scan anyway (not using the index),
but that's just a guess.

(I notice that you are using process=1234 in your where clause, so maybe
you have a later release of TSM, I'm on 5.1 and don't have that column!)

Remember as well that SQL queries use the free space in your database,
so make sure you have plenty if you're doing big queries.
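Following Paul's point about functions on indexed columns, one hedged rewrite is to compare the indexed DATE_TIME column directly rather than wrapping it in CAST (the interval arithmetic below is as I recall the TSM 5.x SELECT dialect accepting; verify on your server level):

```
select * from actlog
  where date_time >= current_timestamp - 8 hours
    and process=1234
```

That keeps the predicate on the indexed column, which at least gives the optimizer a chance of avoiding a full table scan.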

Paul. 




Any e-mail message from the European Central Bank (ECB) is sent in good faith 
but shall neither be binding nor construed as constituting a commitment by the 
ECB except where provided for in a written agreement.
This e-mail is intended only for the use of the recipient(s) named above. Any 
unauthorised disclosure, use or dissemination, either in whole or in part, is 
prohibited.
If you have received this e-mail in error, please notify the sender immediately 
via e-mail and delete this e-mail from your system.


Re: NDMP Backup Experiences

2004-11-19 Thread TSM_User
NDMP was much faster in total throughput for us when compared to a full backup.

Remember that what is backed up via NDMP must be restored via NDMP.  So for your
DR plan you need to ensure you have a NetApp device to restore to at your DR site.

Make sure you run some restore tests and see how you restore to an alternate
NAS filer.  Remember you can pick an alternate location and then browse to a
Windows share somewhere.

The incremental restore from an NDMP backup works as advertised.

The best advice I can give is the most practical: make sure you run a bunch of
backup and restore tests.
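For reference, the server-side definitions boil down to something like this sketch (every name, address and password here is a placeholder, and the parameters are from memory of the 5.2 Administrator's Reference, so check them against your level before use):

```
define datamover mover1 type=nas hladdress=filer1 lladdress=10000 -
  userid=ndmpadmin password=secret dataformat=netappdump
backup node filer1 /vol/vol1 mode=differential
```

You would also need the NAS-type device class, drive paths from the data mover to the drives, and a storage pool for the dumps, per the NDMP chapter of the Admin Guide.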



Re: select from actlog VS query actlog performance

2004-11-19 Thread Andrew Raibeck
BEGINT=-08:00 starts searching the activity log as of 8 hours from the
present time (as opposed to the default, which is 1 hour from the present
time). Leave out the '-' if you really mean 08:00 (8:00 AM).

The raw TSM server database tables are not row-column format, but more
like a B-tree, and were not originally designed to support SQL. The only
methods for interrogating the database were those provided by the QUERY
commands (QUERY ACTLOG, QUERY FILESPACE, etc.). The QUERY commands are
optimized for accessing the raw tables of the TSM database, and thus
perform quite well.

Because customers wanted more query flexibility than what the TSM server
already provided, and creating individual QUERY commands for each possible
query was not practical (effectively an unbounded list), the SQL interface
was created. The tables presented by the SQL interface are not those of
the raw internal TSM tables; rather, they are virtualized versions of the
internal tables that are created dynamically when you run the SELECT
command (hence the need for available space in the database to run
SELECT). While SELECT gives you more flexibility in the types of queries
you can run, those queries tend to run more slowly.
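Andy's point that BEGINT=-08:00 means "eight hours before now" rather than 8:00 AM can be sketched in a few lines (a hypothetical helper for illustration only; it is not part of any TSM API):

```python
from datetime import datetime, timedelta

def actlog_begin(now: datetime, begint: str) -> datetime:
    """Interpret a QUERY ACTLOG BEGINTime value (illustrative sketch).

    A leading '-' makes the value an offset back from 'now';
    a plain HH:MM value is a time of day on the current date.
    """
    hours, minutes = (int(part) for part in begint.lstrip("+-").split(":"))
    if begint.startswith("-"):
        return now - timedelta(hours=hours, minutes=minutes)
    return now.replace(hour=hours, minute=minutes, second=0, microsecond=0)

now = datetime(2004, 11, 19, 8, 0)
print(actlog_begin(now, "-08:00"))  # 2004-11-19 00:00:00 -- midnight, not 8 AM
print(actlog_begin(now, "08:00"))   # 2004-11-19 08:00:00
```

At 8 AM those two spellings point eight hours apart, which changes how much of the activity log the query scans.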

Regards,

Andy

Andy Raibeck
IBM Software Group
Tivoli Storage Manager Client Development
Internal Notes e-mail: Andrew Raibeck/Tucson/[EMAIL PROTECTED]
Internet e-mail: [EMAIL PROTECTED]

The only dumb question is the one that goes unasked.
The command line is your friend.
Good enough is the enemy of excellence.



TDP SQL - 'set backups' / out of sync TDP backups

2004-11-19 Thread David McClelland
Guys,

To begin, a familiar story for many of you I'm sure - we have a customer
who has MSSQL databases, and wants TDP backups going back a month or so.
Easy, bread and butter stuff. Happy with this, they want their backups
from every Friday to be retained for 4 weeks, and from every 4th Friday
to last for 7 years. As the TDP stores the SQL backups in a backup
copygroup, we don't have that level of flexibility easily built in to
the tool to provide what is essentially more of an 'archive' type
request. 

The normal response at this point is to 'use different node names -
ABC123_WEEKLY or ABC123_MONTHLY', and this is indeed what I have done
frequently before. However, I can't help feeling this isn't quite
perfect, having to make our non TSM savvy client (on a remote site) faff
around with different node names and explain to them why TSM has to be
handled in this way. It's not that big a deal really, but I'm aiming for
simplification here.

Now, my question is whether anyone is achieving the fulfilment of such
requirements in another way, for example using 'set backups'. According
to the docs, 

'set backups are intended to be used in unusual one-of-a-kind
situations [...] Because set backups are always uniquely named (like log
backups), they do not participate in expiration due to version limit
[...] The reason for using a set backup is if you do not want the backup
to be part of your normal expiration process.'

Sounds like a possibility - anyone using these already?
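If anyone wants to experiment, the CLI form should be roughly as below; treating 'set' as a backup type alongside full/log is my reading of the TDP for SQL docs, and the database name and option values are placeholders to verify against your level:

```
tdpsqlc backup mydb set /tsmoptfile=dsm.opt /logfile=tdpsets.log
```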

We've a similar requirement coming in for Informix backups - again, from
what I've seen of ONBAR so far, having out-of-sync weekly/monthly/yearly
backups could be a challenge when using the same node name. With Oracle
backups, we've managed to overcome this by customising the RMAN backup
piece tags, and expiring manually from RMAN based upon these to identify
logs, weekly and monthly backup pieces etc - works very smoothly indeed.
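For the Oracle/RMAN scheme mentioned above, the shape of it is roughly as follows; the tag names are our own convention and the DELETE syntax should be checked against your RMAN release:

```
run {
  allocate channel t1 type 'sbt_tape';
  # the tag marks this as a monthly piece, to be retained 7 years
  backup database tag 'MONTHLY_200411' format 'df_%d_%U';
  release channel t1;
}

# when a monthly set finally passes its retention, drop it by tag:
delete backupset tag 'MONTHLY_200411';
```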

Your thoughts, especially on a Friday afternoon, are always much
appreciated :O)

Rgds,

David McClelland
Tivoli Storage Manager Certified Consultant 
Operations Backup and Recovery Projects 
Shared Infrastructure Development   
Reuters 
85 Fleet Street 
London EC4P 4AJ 




----------------------------------------------------------------
Visit our Internet site at http://www.reuters.com

Get closer to the financial markets with Reuters Messaging - for more
information and to register, visit http://www.reuters.com/messaging

Any views expressed in this message are those of  the  individual
sender,  except  where  the sender specifically states them to be
the views of Reuters Ltd.


Re: AW: linux restore problem

2004-11-19 Thread Otto Schakenbos
it tells me
tsm q files
Num Last Incr Date  TypeFile Space Name
--- --  ---
 1   19.11.2004 08:51:58   EXT3/
 2   19.11.2004 08:51:58   EXT3/boot
 3   19.11.2004 08:55:19   EXT3/daten
tsm
Otto Schakenbos
System Administrator
TEL: +49-7151/502 8468
FAX: +49-7151/502 8489
MOBILE: +49-172/7102715
E-MAIL: [EMAIL PROTECTED]
TFX IT-Service AG
Fronackerstrasse 33-35
71332 Waiblingen
GERMANY

Thomas Rupp wrote:
If you do a dsmc query filespace on the linux client what results do
you get?
Thomas Rupp



Re: NDMP Backup Experiences

2004-11-19 Thread Curtis Stewart
How are you managing the tapes? Since you can't use DRM to copy the
volumes, I assume you are just ejecting them and manually tracking the
tape retention/expiration.

[EMAIL PROTECTED]



TSM_User [EMAIL PROTECTED]
Sent by: ADSM: Dist Stor Manager [EMAIL PROTECTED]
11/19/2004 10:20 AM
Please respond to
ADSM: Dist Stor Manager [EMAIL PROTECTED]


To
[EMAIL PROTECTED]
cc

Subject
Re: NDMP Backup Experiences






NDMP was much faster in total throughput for us when compared to a full
backup.

Remember that what is backed up via NDMP must be restored via NDMP. So for
your DR plan you need to ensure you have a NetApp device to restore to at
your DR site.

Make sure you run some restore tests and see how you restore to an
alternate NAS filer. Remember you can pick an alternate location and then
browse to a Windows share somewhere.

The incremental restore from an NDMP backup works as advertised.

The best advice I can give is the most practical: make sure you run a
bunch of backup and restore tests.

Curtis Stewart [EMAIL PROTECTED] wrote:
Everyone,

I'm about to implement NDMP backups of our NetApp filer and am looking for
tips, gotchas etc...

Currently, we back up the filer using CIFS and a mount point on a Windows
2000 server. The performance of this method is painfully slow: it takes
about 22 hours to complete an incremental backup of the 3.5-million-file
system over a 100 Mbps full-duplex network. We're hoping the backup and
restore performance will improve significantly with NDMP. I've read
through the Admin Guide a few times and our setup will indeed support NDMP
operations.

I plan to use the TSM server to run the library, and share it between the
NAS and regular TSM operations. The library is a STK L700 with eight LTO
Gen2 Fibre Channel drives. Overall the setup directions seem fairly
straightforward. Here are my questions.

1. Am I being overly optimistic? It doesn't seem like it will be that big
a deal.
2. Can I expect a significant backup and restore performance improvement
using NDMP?

Thanks in advance,

Curtis

[EMAIL PROTECTED]
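For reference, the server-side definitions for a NetApp NDMP setup look roughly like the following; the names, addresses, and device special file are placeholders, so check the Admin Guide syntax for your level:

```
define devclass nasclass devtype=nas mountretention=0 estcapacity=200g library=lib700
define datamover nasfiler type=nas hladdress=filer.example.com lladdress=10000 userid=root password=secret dataformat=netappdump
define path nasfiler drive01 srctype=datamover desttype=drive library=lib700 device=rst0l
backup node nasfiler /vol/vol1 mode=full
```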


-
Do you Yahoo!?
 Meet the all-new My Yahoo!   Try it today!


FW: Win2K LANFREE

2004-11-19 Thread Thomas, Matthew
Folks,

OPENTEST environment
Server - TSM 5.2.3.0 on AIX 5.2
Client - Win2K with StorageAgent 5.2.3.0 and TDP for SQL 2.2
Library - 3494 ATL with two 3590 drives

We're currently testing our upgrade process to 5.2.3.x. I've successfully
done an Oracle LANfree (TSM 5.2.3.0 on AIX 5.1) backup to our newly upgraded
TSM test server, so now we're attempting to test the Win2K SQL LANfree node.


TDPSQL backups are fine across the LAN, though when we set ENABLELANFREE
YES in dsmsta.opt, sessions are started on the server as expected and are
visible when the StorageAgent is interrogated (showing in MediaW state), but
these mounts are never fulfilled.
The TSM server itself repeatedly mounts and dismounts the requested carts,
whilst a query of the storage agent shows the sessions in perpetual MediaW.
 Running 'q mount' on the StorageAgent shows the sessions as WAITING FOR
 VOLUME even though the tapes have been mounted by the TSM Server.
 The volumes are all fine - in read-write status.
Ultimately the LANFree backup fails with a SERVER MEDIA MOUNT NOT POSSIBLE
message.
Everything looks okay in the MMC - two tape devices are visible, the
expected WWNs and serial numbers are there too.

I'm stuck! Besides using the Wintel box for firewood, has anyone got any
ideas?


 11/19/04 12:24:30 ANR1404W (Session: 763, Origin: SQLTTLTSBDC1067_STA)
   Scratch volume mount request denied - mount failed. (SESSION: 763)
 11/19/04 12:24:30 ANR0514I Session 767 closed volume . (SESSION: 767)
 11/19/04 12:24:30 ANR0409I Session 780 ended for server SQLTTLTSBDC1067_STA
   (Windows). (SESSION: 780)
 11/19/04 12:24:30 ANR0525W (Session: 763, Origin: SQLTTLTSBDC1067_STA)
   Transaction failed for session 7 for node TSM_SQLTTLTSBDC1067_SQL
   (TDP MSSQLV2 NT) - storage media inaccessible. (SESSION: 763)
 11/19/04 12:24:31 ANE4993E (Session: 766, Node: TSM_SQLTTLTSBDC1067_SQL)
   TDP MSSQLV2 NT ACO3002 TDP for Microsoft SQL Server: full backup of
   database OPENTEST from server SQLTTLTSBDC1067 failed, rc = 418.
   (SESSION: 766)
 11/19/04 12:24:31 ANE4991I (Session: 766, Node: TSM_SQLTTLTSBDC1067_SQL)
   TDP MSSQLV2 NT ACO3008 TDP for Microsoft SQL Server: Backup of server
   SQLTTLTSBDC1067 is complete.
   Total SQL backups selected:    1    Total SQL backups attempted:   1
   Total SQL backups completed:   0    Total SQL backups excluded:    0
   Total SQL backups inactivated: 0    Throughput rate:  5.61 Kb/Sec
   Total bytes transferred: 1,850,880  Elapsed processing time: 322.01 Secs
   (SESSION: 766)

q devc
 Device Class Name: 3494P4
Device Access Strategy: Sequential
Storage Pool Count: 1
   Device Type: 3590
Format: DRIVE
 Est/Max Capacity (MB):
   Mount Limit: DRIVES
  Mount Wait (min): 60
 Mount Retention (min): 2
  Label Prefix: ADSM
   Library: LIB3494P4
 Matt Thomas




---
This e-mail is intended only for the above addressee.  It may contain
privileged information. If you are not the addressee you must not copy,
distribute, disclose or use any of the information in it.  If you have
received it in error please delete it and immediately notify the sender.

evolvebank.com is a division of Lloyds TSB Bank plc.
Lloyds TSB Bank plc, 25 Gresham Street, London, EC2V 7HN.  Registered in
England, number 2065.  Telephone No: 020 7626 1500
Lloyds TSB Scotland plc, Henry Duncan House, 120 George Street,
Edinburgh EH2 4LH.  Registered in Scotland, number 95237.  Telephone
No: 0131 225 4555

Lloyds TSB Bank plc and Lloyds TSB Scotland plc are authorised and
regulated by the Financial Services Authority and represent only the
Scottish Widows and Lloyds TSB Marketing Group for life assurance,
pensions and investment business.

Signatories to the Banking Codes.
---


ksh here documents for sql cmds

2004-11-19 Thread Richard Rhodes
Be kind . . . . don't laugh tooo hard . . . .

I thought I'd share this.

I got tired of coding dsmadmc command-line SQL like the following in ksh
scripts . . .

  dsmadmc -id=admin -password=admin -tab \
   select entity, count\(\*\) \
   from summary \
   where cast\(\(current_timestamp-start_time\)hours as decimal\(8,0\)\) \> 25 \
   group by entity having count\(\*\) \> 1 \
   order by 2 desc

The admin ref manual doesn't say anything about dsmadmc accepting input
from stdin, but I figured I'd try it anyway . . . it WORKED!

dsmadmc -id=admin -password=admin -tab <<EOD
  select entity, count(*) \
  from summary \
  where cast((current_timestamp-start_time)hours as decimal(8,0)) > 25 \
  group by entity having count(*) > 1 \
  order by 2 desc
EOD

Now, doesn't that look much better?

ok . . you can stop laughing at me now . . .

Rick


-
The information contained in this message is intended only for the personal
and confidential use of the recipient(s) named above. If the reader of this
message is not the intended recipient or an agent responsible for
delivering it to the intended recipient, you are hereby notified that you
have received this document in error and that any review, dissemination,
distribution, or copying of this message is strictly prohibited. If you
have received this communication in error, please notify us immediately,
and delete the original message.


Re: NDMP Backup Experiences

2004-11-19 Thread TSM_User
After all our testing it was determined that we were better off using Windows
servers instead of a NetApp filer for Windows data.  We did not complete our
discussions on what we would do with the tapes themselves.




Curtis Stewart [EMAIL PROTECTED] wrote:
How are you managing the tapes? Since you can't use DRM to copy the
volumes, I assume you are just ejecting them and manually tracking the
tape retention/expiration.

[EMAIL PROTECTED]



TSM_User
Sent by: ADSM: Dist Stor Manager
11/19/2004 10:20 AM
Please respond to
ADSM: Dist Stor Manager


To
[EMAIL PROTECTED]
cc

Subject
Re: NDMP Backup Experiences






NDMP was much faster in total throughput for us when compared to a full
backup.

Remember that what is backed up via NDMP must be restored via NDMP. So for
your DR plan you need to ensure you have a NetApp device to restore to at
your DR site.

Make sure you run some restore tests and see how you restore to an
alternate NAS filer. Remember you can pick an alternate location and then
browse to a Windows share somewhere.

The incremental restore from an NDMP backup works as advertised.

The best advice I can give is the most practical: make sure you run a
bunch of backup and restore tests.

Curtis Stewart wrote:
Everyone,

I'm about to implement NDMP backups of our NetApp filer and am looking for
tips, gotchas etc...

Currently, we back up the filer using CIFS and a mount point on a Windows
2000 server. The performance of this method is painfully slow: it takes
about 22 hours to complete an incremental backup of the 3.5-million-file
system over a 100 Mbps full-duplex network. We're hoping the backup and
restore performance will improve significantly with NDMP. I've read
through the Admin Guide a few times and our setup will indeed support NDMP
operations.

I plan to use the TSM server to run the library, and share it between the
NAS and regular TSM operations. The library is a STK L700 with eight LTO
Gen2 Fibre Channel drives. Overall the setup directions seem fairly
straightforward. Here are my questions.

1. Am I being overly optimistic? It doesn't seem like it will be that big
a deal.
2. Can I expect a significant backup and restore performance improvement
using NDMP?

Thanks in advance,

Curtis

[EMAIL PROTECTED]


-
Do you Yahoo!?
Meet the all-new My Yahoo! Try it today!


__
Do You Yahoo!?
Tired of spam?  Yahoo! Mail has the best spam protection around
http://mail.yahoo.com


Tape Volume States

2004-11-19 Thread Rob Hefty
Hello all

How can a tape be listed in the volumes table as scratch and yet still
have a value in the percent utilized column larger than zero?  The tapes
are old IBM 3575 format and the server is 5.1.6.5 on AIX 5.1.

Thanks,
Rob


Windows PASSWORD and Multiple TSM servers

2004-11-19 Thread Zoltan Forray/AC/VCU
I have a question about how/where the TSM client stores the
PASSWORDACCESS GENERATE password on a W2K box.

If a W2K box accesses multiple TSM servers (e.g. by switching/updating the
DSM.OPT file), are the passwords to both TSM servers stored as separate
registry keys so as to not step on each other ?

Or will this simply not work ?

My problem is this.

I need to move a node from one TSM server to another, via EXPORT NODE TO
SERVER. The data movement will take a while and this will be done during
the Thanksgiving holiday when everyone is gone, since I have to suspend
backups during the export.

I can remotely access and change the DSM.OPT file, and stop/start the
scheduler service (via the TSMManager Agent) but I would not be able to
answer the initial password prompt.

So, my thought was to connect to the new TSM server, right now, and set
the password, but go back to the old TSM server until next Wednesday.

Will this work or am I just blowing smoke ???
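For the record, the export being planned is a one-liner on the admin command line; the target server name below is a placeholder, and the server-to-server definitions (define server) must already be in place on both ends:

```
export node MYNODE filedata=all toserver=NEWTSM
```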


Re: Tape Volume States

2004-11-19 Thread Stapleton, Mark
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On 
Behalf Of Rob Hefty
How can a tape be listed in the volumes table as scratch and 
yet still have a value in the percent utilized column larger 
than zero?  The tapes are old IBM 3575 format and the server 
is 5.1.6.5 on AIX 5.1.

Details, please. Output of Q VOL VOLUMENAME F=D and Q LIBV
LIBRARYNAME VOLUMENAME F=D would be illuminating.

(A tape volume won't show up in the volume table as scratch; it would
show up in the libvolume table.)

--
Mark Stapleton ([EMAIL PROTECTED])
Berbee Information Networks
Office 262.521.5627  


Client behavior during image backup

2004-11-19 Thread Michael Short
We have a client that runs an AIX script to stop Oracle and Apache, do an
image backup of a file space, an incremental backup of everything else, and
then restart Oracle and Apache. Everything usually works correctly except
that sometimes when Oracle starts up, the file system covered by the image
backup is mounted R/O; i.e. another process prevented TSM from dismounting
the file system and remounting it R/W.

The return code is still 0 and there are no error messages to indicate this
is happening. Is there any way to get TSM to report that it couldn't
remount the file system? The approach we are going to try is to check the
state of the file system before Oracle restarts and if necessary kill any
impeding processes and remount the file system.
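That pre-restart check can be sketched as below. This assumes Linux /proc/mounts-style lines (device, mount point, fstype, options); on AIX the field positions from mount differ, so adjust the awk accordingly, and the mount point /oradata is just an example name:

```shell
# Print "ro" or "rw" for the given mount point, reading
# /proc/mounts-style lines (dev mountpoint fstype options ...) on stdin.
fs_state() {
  awk -v fs="$1" '$2 == fs { print (($4 ~ /(^|,)ro(,|$)/) ? "ro" : "rw") }'
}

# In the wrapper, before restarting Oracle (remount step left site-specific):
#   [ "$(fs_state /oradata </proc/mounts)" = "rw" ] || echo "still read-only"
printf '/dev/lv01 /oradata ext3 ro,relatime 0 0\n' | fs_state /oradata   # -> ro
```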

Ideas?

Tia



NOTICE:  This communication may contain confidential, proprietary or
legally privileged information. It is intended only for the person(s) to
whom it is addressed.  If you are not an intended recipient, you may not
use, read, retransmit, disseminate or take any action in reliance upon it.
Please notify the sender that you have received it in error and immediately
delete the entire communication, including any attachments. Towers Perrin
does not encrypt and cannot ensure the confidentiality or integrity of
external e-mail communications and, therefore, cannot be responsible for
any unauthorized access, disclosure, use or tampering that may occur during
transmission.  This communication is not intended to create or modify any
obligation, contract or warranty of Towers Perrin, unless the firm clearly
expresses such an intent.


Re: Domino Transaction files

2004-11-19 Thread Eduardo Esteban
Domino deletes each individual transaction log extent file after it is
done applying the transactions in it. If Domino could not delete these
files, an error message was probably displayed by Domino's Recovery
Manager. I believe you can delete them, since Domino will not reuse them
when it needs additional transaction log files. However, you should
contact IBM support to confirm that removing them is safe.

Eduardo

ADSM: Dist Stor Manager [EMAIL PROTECTED] wrote on 11/19/2004
03:34:31 AM:

 Hi ,

 I have 59 transaction files in the Transaction Log directory that have
 creation dates
 ranging from 04-Nov-2004 to 17-Nov-2004.

 It's an unusually high amount.

 Transaction Log Archiving takes place every 4 hours, without any obvious
 error.

 I am aware that the Domino Administrator restored a couple of mail files
 and
 activated them applying logs yesterday and the day before.

 What happens to restored transaction files after activation, once the
 logs are applied to the mail files?

 Are these 59 still around due to some error of some kind ?

 Can I safely delete them ?

 Domino Server 6.5 on Windows 2003
 TDP for Domino 5.1.5.01
 TSM Backup/Archive client 5.2.3.1

 TSM server 5.2.2.0 for Windows.

 T.I.A

 Bill


Re: Tape Volume States

2004-11-19 Thread Richard Sims
On Nov 19, 2004, at 1:31 PM, Rob Hefty wrote:
How can a tape be listed in the volumes table as scratch and yet still
have a value in the percent utilized column larger than zero? ...
The columns from Select output largely conform to the definitions
in the Query commands.  If you mean that the SCRATCH column
contains Yes, then it simply means that the volume came from the
scratch set of libvolumes and will return to that area when emptied.
This is in contrast to a volume assigned to a stgpool via
DEFine Volume, which stays in the stgpool unless DELete Volume is
done.
   Richard Sims
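A quick way to list the combination Rob described, as a sketch against the volumes table (the value case of the SCRATCH column may vary, hence the UPPER):

```sql
select volume_name, stgpool_name, scratch, pct_utilized
  from volumes
 where upper(scratch) = 'YES' and pct_utilized > 0
```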


Re: linux restore problem

2004-11-19 Thread Stef Coene
On Friday 19 November 2004 15:15, Otto Schakenbos wrote:
 Richard, Mark, thnx for your answers.

 I tried what you both suggested.
 restore /daten/ -subdir=yes and restore '/daten/' -subdir=yes

 all with the same result. (filespace not found)

 What I also tried is copying the data (the original server is still up)
 using NFS to the new server and backing it up from the new server using a
 new nodename.
 If i try to restore now it works just fine.

 I inspected the original directory and couldn't really find anything
 weird. The only thing I can think of is that there is a directory
 (/daten/public) which is an NFS mount to another server, but this has
 never been a problem in the past.

 Mark, you are right that there are more than 95 dirs, just to let you know.
What is the output of
dsmc q filespaces

And can you try to restore it via the GUI ?

And what kind of file system are you using ?

Stef

-- 
[EMAIL PROTECTED]
 Using Linux as bandwidth manager
     http://www.docum.org/


Re: Windows PASSWORD and Multiple TSM servers

2004-11-19 Thread Andrew Raibeck
See my post of 16 November on the Error writing registry password thread
where I indicate the location of the registry password.
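From memory, the generated password lands under a per-node, per-server key shaped like the following; the exact path can vary by client level, so treat it as a guide rather than gospel:

```
HKEY_LOCAL_MACHINE\SOFTWARE\IBM\ADSM\CurrentVersion\BackupClient\Nodes\<NODENAME>\<SERVERNAME>
```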

As long as the TSM servers don't have duplicate server names, you can use
the same node name to access more than one server without having encrypted
passwords stepping on each other.

Regards,

Andy

Andy Raibeck
IBM Software Group
Tivoli Storage Manager Client Development
Internal Notes e-mail: Andrew Raibeck/Tucson/[EMAIL PROTECTED]
Internet e-mail: [EMAIL PROTECTED]

The only dumb question is the one that goes unasked.
The command line is your friend.
Good enough is the enemy of excellence.

ADSM: Dist Stor Manager [EMAIL PROTECTED] wrote on 11/19/2004
11:32:06:

 I have a question about how/where the TSM client stores the
 PASSWORDACCESS GENERATE password on a W2K box.

 If a W2K box accesses multiple TSM servers (e.g. by switching/updating the
 DSM.OPT file), are the passwords to both TSM servers stored as separate
 registry keys so as to not step on each other ?

 Or will this simply not work ?

 My problem is this.

 I need to move a node from one TSM server to another, via EXPORT NODE TO
 SERVER. The data movement will take a while and this will be done during
 the Thanksgiving holiday when everyone is gone, since I have to suspend
 backups during the export.

 I can remotely access and change the DSM.OPT file, and stop/start the
 scheduler service (via the TSMManager Agent) but I would not be able to
 answer the initial password prompt.

 So, my thought was to connect to the new TSM server, right now, and set
 the password, but go back to the old TSM server until next Wednesday.

 Will this work or am I just blowing smoke ???


Re: Windows PASSWORD and Multiple TSM servers

2004-11-19 Thread Zoltan Forray/AC/VCU
Thank you.  Just what I needed !




Andrew Raibeck [EMAIL PROTECTED]
Sent by: ADSM: Dist Stor Manager [EMAIL PROTECTED]
11/19/2004 02:27 PM
Please respond to
ADSM: Dist Stor Manager [EMAIL PROTECTED]


To
[EMAIL PROTECTED]
cc

Subject
Re: Windows PASSWORD and Multiple TSM servers






See my post of 16 November on the Error writing registry password thread
where I indicate the location of the registry password.

As long as the TSM servers don't have duplicate server names, you can use
the same node name to access more than one server without having encrypted
passwords stepping on each other.

Regards,

Andy

Andy Raibeck
IBM Software Group
Tivoli Storage Manager Client Development
Internal Notes e-mail: Andrew Raibeck/Tucson/[EMAIL PROTECTED]
Internet e-mail: [EMAIL PROTECTED]

The only dumb question is the one that goes unasked.
The command line is your friend.
Good enough is the enemy of excellence.

ADSM: Dist Stor Manager [EMAIL PROTECTED] wrote on 11/19/2004
11:32:06:

 I have a question about how/where the TSM client stores the
 PASSWORDACCESS GENERATE password on a W2K box.

 If a W2K box accesses multiple TSM servers (e.g. by switching/updating the
 DSM.OPT file), are the passwords to both TSM servers stored as separate
 registry keys so as to not step on each other ?

 Or will this simply not work ?

 My problem is this.

 I need to move a node from one TSM server to another, via EXPORT NODE TO
 SERVER. The data movement will take a while and this will be done during
 the Thanksgiving holiday when everyone is gone, since I have to suspend
 backups during the export.

 I can remotely access and change the DSM.OPT file, and stop/start the
 scheduler service (via the TSMManager Agent) but I would not be able to
 answer the initial password prompt.

 So, my thought was to connect to the new TSM server, right now, and set
 the password, but go back to the old TSM server until next Wednesday.

 Will this work or am I just blowing smoke ???


Re: linux restore problem

2004-11-19 Thread Richard Sims
I think Stef has the idea of what's wrong...
Are you attempting the restoral across different platforms?
As in where the target system does not support the file
system type that lived on the source system?
(Also: re filespace not found - please supply the full
message, including message number.)
   Richard Sims
On Nov 19, 2004, at 2:17 PM, Stef Coene wrote:
On Friday 19 November 2004 15:15, Otto Schakenbos wrote:
Richard, Mark, thnx for your answers.
I tried what you both suggested.
restore /daten/ -subdir=yes and restore '/daten/' -subdir=yes
all with the same result. (filespace not found)
What I also tried is copying the data (the original server is still up)
using NFS to the new server and backing it up from the new server using a
new nodename.
If i try to restore now it works just fine.
I inspected the original directory and couldn't really find anything
weird. The only thing I can think of is that there is a directory
(/daten/public) which is an NFS mount to another server, but this has
never been a problem in the past.
Mark, you are right that there are more than 95 dirs, just to let you
know.
What is the output of
dsmc q filespaces
And can you try to restore it via the GUI ?
And what kind of file system are you using ?
Stef
--
[EMAIL PROTECTED]
 Using Linux as bandwidth manager
     http://www.docum.org/


Win2k and open file -- revisited

2004-11-19 Thread Sandra
Dear List,
I know there was a discussion on this topic, but I am not able to find
exactly the one I am looking for. And yes, this exact topic was discussed,
that I know.

I'm running TSM server 5.2.2.5 and client 5.2.2, both on Windows 2000.
I'm using include.fs for snapshotcachelocation.
The first time I took a backup, it succeeded for open files.
After a reboot of the machine, it fails to take a snapshot of open files.
Nothing was changed, just a simple reboot.

Please enlighten me on this.

Sandra


Re: ksh here documents for sql cmds

2004-11-19 Thread Riley, Craig
You might also like to try the Perl module TSM.pm, located on CPAN. The
module provides very easy access to TSM.

-Craig Riley
The Children's Hospital in Denver


-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of Richard 
Rhodes
Sent: Friday, November 19, 2004 10:30 AM
To: [EMAIL PROTECTED]
Subject: ksh here documents for sql cmds


Be kind . . . . don't laugh tooo hard . . . .

I thought I'd share this.

I got tired of coding dsmadmc command-line SQL like the following in ksh
scripts . . .

  dsmadmc -id=admin -password=admin -tab \
   select entity, count\(\*\) \
   from summary \
   where cast\(\(current_timestamp-start_time\)hours as decimal\(8,0\)\) \> 25 \
   group by entity having count\(\*\) \> 1 \
   order by 2 desc

The admin ref manual doesn't say anything about dsmadmc accepting input
from stdin, but I figured I'd try it anyway . . . it WORKED!

dsmadmc -id=admin -password=admin -tab <<EOD
  select entity, count(*) \
  from summary \
  where cast((current_timestamp-start_time)hours as decimal(8,0)) > 25 \
  group by entity having count(*) > 1 \
  order by 2 desc
EOD

Now, doesn't that look much better?

ok . . you can stop laughing at me now . . .

Rick


-
The information contained in this message is intended only for the personal and 
confidential use of the recipient(s) named above. If the reader of this message 
is not the intended recipient or an agent responsible for delivering it to the 
intended recipient, you are hereby notified that you have received this 
document in error and that any review, dissemination, distribution, or copying 
of this message is strictly prohibited. If you have received this communication 
in error, please notify us immediately, and delete the original message.


DISCLAIMER:
CONFIDENTIALITY NOTICE:  The information contained in this message is legally 
privileged and confidential information intended for the use of the individual 
or entity named above. If the reader of this message is not the intended 
recipient, or the employee or agent responsible to deliver it to the intended 
recipient, you are hereby notified that any release, dissemination, 
distribution, or copying of this communication is strictly prohibited.  If you 
have received this communication in error, please notify the author immediately 
by replying to this message and delete the original message. Thank you.


TSM server start-up under Linux

2004-11-19 Thread Thomas Denier
We have installed a TSM 5.2.2 server under Suse Enterprise Server 8
running on zSeries hardware. The TSM code includes a script named
dsmserv.rc which accepts 'start' and 'stop' as arguments in the same
way as scripts Suse supplies to control built-in services. The dsmserv.rc
script does not include the comments insserv uses to determine the
order of execution of start-up and shut-down scripts. We would need to
have the DSMSERV_ACCOUNTING_DIR environment variable set before
dsmserv.rc executed dsmserv (the executable for the TSM server). We
also have a home-grown automation script for the TSM server that we
want started when Linux comes up. What is the best way to arrange for
all of this with minimal risk of breakage from future software
maintenance?
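One low-risk pattern is to leave the vendor-supplied dsmserv.rc untouched and add a small insserv-aware wrapper of your own under /etc/init.d, so future TSM maintenance that replaces dsmserv.rc cannot overwrite local customisations. A sketch only; the paths, runlevels, and automation script name are assumptions to adjust for your site:

```
#!/bin/sh
### BEGIN INIT INFO
# Provides:          tsmserver
# Required-Start:    $local_fs $network
# Required-Stop:     $local_fs $network
# Default-Start:     3 5
# Default-Stop:      0 1 2 6
# Description:       TSM server plus local automation (wrapper around dsmserv.rc)
### END INIT INFO

# Export the accounting directory before dsmserv.rc starts dsmserv.
DSMSERV_ACCOUNTING_DIR=/var/log/tsm/accounting
export DSMSERV_ACCOUNTING_DIR

DSMSERV_RC=/opt/tivoli/tsm/server/bin/dsmserv.rc   # adjust to your install

case "$1" in
  start)
    "$DSMSERV_RC" start
    /usr/local/sbin/tsm-automation start   # home-grown script, placeholder path
    ;;
  stop)
    /usr/local/sbin/tsm-automation stop
    "$DSMSERV_RC" stop
    ;;
  *)
    echo "Usage: $0 {start|stop}"
    exit 1
    ;;
esac
```

Registering the wrapper with insserv then lets Suse order it at boot and shutdown from the LSB header comments, which is exactly what dsmserv.rc itself lacks.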


Oracle TDP and RMAN

2004-11-19 Thread Hart, Charles
In working with our Oracle DBAs, they feel TSM should manage the RMAN Oracle
retentions, etc. Is there a preferred method? If so, why?


Here's one of our DBAs' responses when I told them my understanding is that RMAN
usually manages the backup retentions:

Note on page 85 of the RMAN Backup and Recovery Handbook it says:

If you are using a tape management system, it may have its own retention 
policy.  If the tape management system's retention policy is in conflict with 
the backup retention policy you have defined in RMAN, the tape management 
system's retention policy will take precedence and your ability to recover a 
backup will be in jeopardy.

That is why I was leaving it up to the Tivoli retention policy to be used 
instead of RMAN retention policies. This seems to be in conflict with the 
comment about TSM having a dumb repository.


Thanks for your thoughts...

Regards,

Charles 


Re: linux restore problem

2004-11-19 Thread Stef Coene
On Friday 19 November 2004 20:39, Richard Sims wrote:
> I think Stef has the idea of what's wrong...
> Are you attempting the restoral across different platforms?
> As in where the target system does not support the file
> system type that lived on the source system?
> (Also: re filespace not found - please supply the full
> message, including message number.)
In the case of ReiserFS or another unsupported file system, you can get the TSM
client working with it if you create a virtual mount point for it.
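For the archives, the client option Stef is describing goes in the Unix client's dsm.sys stanza; the server stanza name and path below are examples only:

```
* dsm.sys - back up the unsupported file system as its own file space
SErvername         tsmsrv1
   VIRTUALMountpoint  /data/reiserfs
```

The client then treats that directory as a separate file space and traverses it without needing to recognise the underlying file system type.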

Stef

-- 
[EMAIL PROTECTED]
 Using Linux as bandwidth manager
     http://www.docum.org/


Re: Oracle TDP and RMAN

2004-11-19 Thread Stapleton, Mark
Please read the TDP for Oracle manual. It indicates that you should
have your TSM retention set at 1,0,0,0. Set your Oracle management class
with those retentions, and Oracle will handle deletion of unneeded
backups.
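For anyone hunting for the concrete commands, the 1,0,0,0 retention Mark mentions maps onto the backup copy group parameters like this; the domain, policy set, and class names are made-up placeholders:

```
define copygroup ORA_DOM ORA_PS ORA_MC type=backup verexists=1 verdeleted=0 retextra=0 retonly=0
activate policyset ORA_DOM ORA_PS
```

With those settings TSM keeps exactly one version of each object and never expires anything on its own, leaving RMAN (e.g. its DELETE OBSOLETE processing) in charge of which backup pieces are removed.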

--
Mark Stapleton ([EMAIL PROTECTED])
Berbee Information Networks
Office 262.521.5627  




TSM client for Solaris x86

2004-11-19 Thread Lance Nakata
Has anyone heard if there are plans to release a TSM 5.x client for
Solaris x86 to complement the SPARC version?  If so, when might that
occur?

I've heard Sun say that many developers can take their SPARC source
code (except perhaps device drivers) and compile a working Solaris
app on Intel/AMD.  I'm hoping that's true for a Solaris x86 TSM
client.

Thanks,
Lance