Re: Looking for suggestions to speed up restore for a Windows server

2007-08-28 Thread Salak Juraj
I can second Kelly.

On my old file server I had slow restores because of the many files to create,
even though my directories were kept in a TSM disk storage pool.
The bottleneck was the file creation rate on the file server.
Monthly image backups helped greatly.

Best
Juraj

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of Kelly Lipp
Sent: Monday, 27 August 2007 23:40
To: ADSM-L@VM.MARIST.EDU
Subject: Re: Looking for suggestions to speed up restore for a Windows server

How about periodic Image backups of the file server volumes?  Couple that with 
daily traditional TSM backups and perhaps you have something that works out 
better at the DR site.

The problem is as you described it: lots of files to create.  Did you observe 
that you were pecking through tapes, or was the bottleneck at the file create 
level on the Windows box?  Or could you really tell?

Even if you create another pool for the directory data (which is easy to
implement) you would still have that stuff on many different tapes.
What about a completely new storage pool hierarchy for that one client?
And then aggressively reclaim the DR pool to keep the number of tapes at a very 
small number.

I'd really like to know where the bottleneck really was.  If it's file create 
time on the client itself, speeding up other things won't help.
If that's the case, then I like the image backup notion periodically.
Even if you did this once/month, the number of files that you would restore 
would be fairly small compared to the overall file server.  And the TSM client 
does this for you automagically so the restore isn't hard.
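Kelly's point that file-create time on the client can dominate the restore can be made concrete with a back-of-envelope calculation. A minimal sketch (all numbers are hypothetical, not from this thread):

```python
def restore_hours(total_files, creates_per_second):
    """Rough wall-clock estimate for a restore whose bottleneck is
    file creation on the client, not tape or network throughput."""
    return total_files / creates_per_second / 3600.0

# Hypothetical: 5 million files at ~200 creates/second keeps the client
# busy for roughly 7 hours no matter how fast the tape drives are.
estimate = restore_hours(5_000_000, 200)
```

If the estimate from realistic numbers is close to the observed restore time, speeding up the storage-pool side will not help; an image restore avoids the per-file creation cost entirely.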

And this also brings up the fact that a restore of this nature in a non-DR
situation probably isn't much better!

Thanks,

Kelly 


Kelly J. Lipp
VP Manufacturing & CTO
STORServer, Inc.
485-B Elkton Drive
Colorado Springs, CO 80907
719-266-8777
[EMAIL PROTECTED]

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of Kauffman, 
Tom
Sent: Monday, August 27, 2007 12:40 PM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] Looking for suggestions to speed up restore for a Windows 
server

We had our fall D/R hotsite test last week and all went well -- except for the 
recovery of our primary Windows 2003 file sharing system. It just takes WAY too 
long.

Part of the problem is the sheer number of files/directories per drive
-- I'm working with the Intel/Windows admin group to try some changes when we 
swap this system out in November.

Part of the problem is that the directory structure is scattered over a mass of 
other backups. I'm looking for suggestions on this.

The system is co-located by drive, but only for five of the nine logical drives 
on the system. I may have to bite the bullet and run all nine logical drives 
through co-location.

Is there any way to force the directory structure for a given drive to the same 
management class/storage pool as the data? I'm thinking I may have finally come 
up with a use for a second domain, with the default management class being the 
one that does co-location by drive. If I go this route -- how do I migrate all 
of the current data? Export/Import?
How do I clean up the off-site copies? Delete volume/backup storage pool?
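One client mechanism worth checking for the directory question is the DIRMC option, which binds directory entries to a named management class; if that class's backup copy group points at the same colocated pool as the drive's data, directories and data land together. A sketch only; the domain, policy set, class, and pool names below are all illustrative, not from this thread:

```text
* dsm.opt on the client
DIRMC DIRDATA_MC

* on the server, in the policy set that will be activated:
* point the class's backup copy group at the colocated pool
define mgmtclass  MYDOM  MYSET  DIRDATA_MC
define copygroup  MYDOM  MYSET  DIRDATA_MC  type=backup  destination=COLOC_POOL
```

This would avoid the second-domain workaround, though already-stored directory data would still sit wherever it was originally bound.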

I'm on TSM Server 5.3.2.0, with a 5.3 (not sure of exact level) client.

TIA

Tom Kauffman
NIBCO, Inc
CONFIDENTIALITY NOTICE:  This email and any attachments are for the exclusive 
and confidential use of the intended recipient.  If you are not the intended 
recipient, please do not read, distribute or take action in reliance upon this 
message. If you have received this in error, please notify us immediately by 
return email and promptly delete this message and its attachments from your 
computer system. We do not waive attorney-client or work product privilege by 
the transmission of this message.

This e-mail may contain confidential and/or privileged information. If you are 
not the intended recipient (or have received this e-mail in error) please 
notify the sender immediately and destroy this e-mail. Any unauthorised 
copying, disclosure or distribution of the material in this e-mail is strictly 
forbidden.


Re: Looking for suggestions to speed up restore for a Windows server

2007-08-28 Thread Salak Juraj
Great bird's-eye view of approaching a problem!

I *do* know that prior to solving a problem the problem per se has to be
examined, but I often forget to do that. You did not.

Juraj

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of Schaub,
Steve
Sent: Tuesday, 28 August 2007 13:20
To: ADSM-L@VM.MARIST.EDU
Subject: Re: Looking for suggestions to speed up restore for a Windows server

Tom,

Having just gone through a similar scenario 2 weeks ago, here was my very 
non-technical fix:

(me) Hello, end-user?  I'm not going to be able to get your 800GB of data 
restored in 2 hours like you want.  Care to narrow down the restore to what you 
really need?
(end-user)  Oh.  Well, we really only need the 10GB of data in the  and 
 directories to run the important stuff.
(me)  Ok, done.

Maybe this won't apply to you, in which case the monthly image backup seems like
a good suggestion.

Good luck,

Steve Schaub
Systems Engineer, WNI
BlueCross BlueShield of Tennessee
423-535-6574 (desk)
423-785-7347 (cell)
***public***


-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of Kauffman, 
Tom
Sent: Monday, August 27, 2007 2:40 PM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] Looking for suggestions to speed up restore for a Windows 
server


Please see the following link for the BlueCross BlueShield of Tennessee E-mail
disclaimer:  http://www.bcbst.com/email_disclaimer.shtm


TSM comparisons (was: Re: Move TSM Database)

2007-08-14 Thread Salak Juraj
Based on this easy-going example I just want to add a note
on the ever-recurring comparisons of TSM to other products.

This is for me one example of the strengths of TSM - performing many critical
tasks simply and with little to no risk.

Tasks like this do not appear in comparisons,
but I have repeatedly seen critical tasks performed much more easily in TSM
than in other products.

Just counting what I either did myself or heard about in the last few weeks:

 - move the database to another volume
 - split the database across more volumes
 - correct a corrupted database (dsmserv auditdb)
 - extend the log after TSM stopped due to log full; extend the DB after it filled up
 - restart a large broken restore from a checkpoint

From what I have learned from my colleagues,
problems of this kind can mean much work and downtime with other products.

best regards
Juraj



-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of Norita
binti Hassan
Sent: Tuesday, 14 August 2007 06:32
To: ADSM-L@VM.MARIST.EDU
Subject: Re: Move TSM Database

OK, thanks - I've successfully done it. All this while I've been thinking of how
to clear up my disk space.

Norita Hasan
- ICT Div -

-Original Message-
From: Andrew Carlson [mailto:[EMAIL PROTECTED]
Sent: Monday, August 13, 2007 12:26 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: Move TSM Database

The way to do it with no downtime is to add the new volumes to the database, 
then delete the old volumes (through TSM of course).  TSM will then move the 
data for each deleted volume to space in the new disks.
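The add-then-delete sequence Andrew describes might look like this on the admin command line. A sketch only; the volume paths and size are illustrative:

```text
define dbvolume /tsm/db/dbvol_new_01 formatsize=4096
delete dbvolume /tsm/db/dbvol_old_01
* DELETE DBVOLUME moves the data off the old volume onto the
* remaining volumes before removing it, so the server stays up.
```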

On 8/12/07, Norita binti Hassan [EMAIL PROTECTED] wrote:

 Hi all,



 How to move TSM database to another location as I want to free some 
 disk space on existing volume.





 Pos Malaysia Berhad is Malaysia's national postal company
 Visit us online at www.pos.com.my

 NOTICE
 This message may contain privileged and/or confidential information. 
 If  you are  not the addressee  or authorised to  receive this email, 
 you must not use, copy,  disclose or take any  action based  on this 
 email. If you  have received this  email in  error, please advise  the 
 sender immediately by  reply e-mail and delete  this message.
 Pos Malaysia  Berhad takes  no responsibility  for  the contents of 
 this email.


 Email scanned and protected by POS Malaysia




--
Andy Carlson
---
Gamecube:$150,PSO:$50,Broadband Adapter: $35, Hunters License: $8.95/month, The 
feeling of seeing the red box with the item you want in it:Priceless.




Re: Getting more info: objects excluded by size

2007-08-13 Thread Salak Juraj
I second Richard.
But this will take some time until it is solved by Tivoli.

In the meantime you can
either
        ask Johndoe to mail you a list of all files larger than X on his
        mac, where X is the max. file size limitation
        you have apparently set on your backup storage pools,
or
        add a PostSchedCmd to Johndoe's OPT file
        which will
        either
                mail / FTP you the current dsmsched and dsmerror logs
                (the logs grep-ped for the current day to reduce their size)
        or
                create a list of all files larger than X
                and mail / FTP it to you.
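The "list of all files larger than X" step could be scripted many ways; a minimal Python sketch (the starting directory and threshold are whatever the admin chooses):

```python
import os

def files_larger_than(root, limit_bytes):
    """Walk `root` and return paths of files larger than `limit_bytes`."""
    hits = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                if os.path.getsize(path) > limit_bytes:
                    hits.append(path)
            except OSError:
                pass  # file vanished or unreadable; skip it
    return hits
```

The resulting list could then be mailed or FTP'd from the same PostSchedCmd script.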

Best
Juraj




-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of Richard
Sims
Sent: Friday, 10 August 2007 13:09
To: ADSM-L@VM.MARIST.EDU
Subject: Re: Getting more info: objects excluded by size

On Aug 9, 2007, at 11:01 AM, Roger Deschner wrote:

 We occasionally see these messages in the server activity log:

 ANR0521W Transaction failed for session 911741 for node JOHNDOE
 (Mac) -
 object excluded by size in storage pool STGPOOL and all successor 
 pools.
 (SESSION: 911741)

 ...but that's all the information I get. ...

Roger -

I would recommend reporting this to TSM Support as a defect, in failing to 
provide substantive information about the situation.  At the very least, it 
should report the filespace, which the software surely knows, and the identity 
of the object.  Some of the programming in the product is very annoyingly 
deficient, needlessly so.

Richard Sims


Re: Newbie question about version numbers and amount of backups

2007-07-31 Thread Salak Juraj
As Richard said, you can use TSM archives.

But you can question the statement that TSM is only able to store 5 to 7 file
versions.
This is not a technical limitation of TSM; TSM can store an unlimited number of
backup versions.
5 to 7 versions is merely a policy which your TSM administrator set up with
those values.
She can easily create another policy (called a management class / backup copy
group),
and she can even apply it to your files only, leaving all other files with 5-7
backup versions.
Of course, for this purpose your files must be identifiable by a unique location
and/or name pattern.

Whether this policy change, with the slightly increased complexity of your TSM
configuration
and the higher demand on TSM hardware resources, pays off for your business
requirement,
only you and your TSM administrator can decide.

It seems you are talking about 75 SQL server backups;
in that case an SQL backup agent should be considered.
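The separate policy Juraj describes might be sketched with admin commands along these lines; the domain, policy set, class, pool, and include pattern below are all illustrative, not from this thread:

```text
define mgmtclass  MYDOM  MYSET  LONGKEEP
define copygroup  MYDOM  MYSET  LONGKEEP  type=backup  destination=BACKUPPOOL -
    verexists=30  verdeleted=5  retextra=90  retonly=120
validate policyset MYDOM MYSET
activate policyset MYDOM MYSET

* client side (dsm.opt / include-exclude list): bind only the
* dump files to the new class -- the path pattern is hypothetical
include D:\dumps\...\* LONGKEEP
```

With such a class bound to the dump files, the date-suffix workaround below becomes unnecessary.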


best
Juraj




-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of
miggiddymatt
Sent: Monday, 30 July 2007 21:49
To: ADSM-L@VM.MARIST.EDU
Subject: Newbie question about version numbers and amount of backups

I haven't actually used TSM very much (my organization has an employee that 
just works with TSM), but I was wondering what solutions there were for 
implementing something similar to a grandfather/father/son rotation scheme for 
keeping files that my organization backs up.

I've asked our TSM person for suggestions, but she's telling me that our only 
option is to save 5 or 7 copies of a file.  
We don't have agents or the like for SQL servers, so currently we do a dump 
every night that gets backed up by TSM.  
For many of my systems, this means we only have a backup of 5 or 7 days.

Currently my only option is to append the date onto the dump, which forces TSM
to keep it as a separate file each day.  Is there any way to just keep
different dates of the same file without me having to go back to my 75 SQL
servers and change the job so that the date is on the end of each filename?
Thanks in advance for the help.
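For reference, the date-suffix workaround the poster describes can be sketched as follows (a minimal example; in practice the rename would happen inside the dump job itself):

```python
import datetime
import os

def dated_dump_name(path, when=None):
    """Return `path` with a YYYYMMDD suffix before the extension,
    so each day's dump looks like a distinct file to the backup client."""
    when = when or datetime.date.today()
    root, ext = os.path.splitext(path)
    return f"{root}_{when:%Y%m%d}{ext}"
```

Because every dated dump is a new file, each one is kept as its own (single) backup version until it expires, sidestepping the 5-to-7-version limit at the cost of extra storage.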

+--
|This was sent by [EMAIL PROTECTED] via Backup Central.
|Forward SPAM to [EMAIL PROTECTED]
+--


Re: JR- CAD will not start due to pre/post jobs

2007-07-24 Thread Salak Juraj
Well, while I would have thought your syntax should work, you might try the following one:

preschedulecmd C:\WINDOWS\SYSTEM32\cmd.exe /C c:\oraclexe\backup\shut_db.bat

best
Juraj 

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of JR Trimark
Sent: Tuesday, 24 July 2007 14:59
To: ADSM-L@VM.MARIST.EDU
Subject: JR- CAD will not start due to pre/post jobs

I am using the TSM client 5.3 on a Windows 2003 server.  When I put in pre/post 
commands in the same format that I have used successfully before, the TSM 
Client Acceptor tries to start then stops.  If I comment out the pre/post 
commands the TSM Client Acceptor is able to start successfully.
No messages are written to any of the logs. The file path and batch file are 
both correct. Any ideas?
dsm.opt (partial)

preschedulecmd c:\oraclexe\backup\shut_db.bat
postchedulecmd c:\oraclexe\backup\start_db.bat

Thanks


Re: SATA disk?

2007-07-12 Thread Salak Juraj
To get good random access (and sustained read/write) performance
be sure to have command queuing (either TCQ or NCQ)
 - available,
 - configured and
 - working
on all components used: disk, controller and driver.
The last is particularly an issue with XP.

Still, a comparable 15k RPM SCSI RAID will easily outperform SATA,
so the real question is whether you need the high (SCSI, FC, ...) performance at
high cost or not.

We estimated which sustained write rate we need,
found a supplier who promised us (much better) throughput,
and made a contract with an acceptance check based, among other things,
on the sustained write rate as well.
The real write rate was much smaller than promised but still much higher than
required, so we got a rebate and kept the 3 TB RAID - and are happy with it.


Best 
Juraj

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of Richard
Rhodes
Sent: Wednesday, 11 July 2007 20:40
To: ADSM-L@VM.MARIST.EDU
Subject: Re: SATA disk?

ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU wrote on 07/11/2007
02:01:16 PM:
 Has anyone used SATA drives for primary storage.  Is it really a bad
idea?

I've been using some SATA disks for a primary staging pool for a few months.
It was the only storage we had at the time . . . so I used it.  This was SATA
in an EMC CLARiiON, configured as RAID 5.

In general, I was quite happy with its performance.

Now . . . the BUTs . . .

I grabbed about 1 TB across 30-40 lightly used spindles in multiple RAID 5
raidsets.  In other words, I had enough spindles to get the iops/throughput I
needed.
According to EMC's docs, these SATA disks are capable of around 60 iops.
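Rick's spindle-count point is simple arithmetic: aggregate random-I/O capacity is roughly spindles times per-spindle iops. A rough model that deliberately ignores RAID 5 write penalties and caching:

```python
def aggregate_iops(spindles, iops_per_spindle):
    """Back-of-envelope aggregate random-I/O capacity:
    it scales with the number of spindles sharing the load."""
    return spindles * iops_per_spindle

# e.g. 30-40 lightly used SATA spindles at ~60 iops each gives
# roughly 1800-2400 iops in aggregate -- which is why "slow" SATA
# can still perform well when spread across enough disks.
```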

Rick


-
The information contained in this message is intended only for the personal and 
confidential use of the recipient(s) named above. If the reader of this message 
is not the intended recipient or an agent responsible for delivering it to the 
intended recipient, you are hereby notified that you have received this 
document in error and that any review, dissemination, distribution, or copying 
of this message is strictly prohibited. If you have received this communication 
in error, please notify us immediately, and delete the original message.


Re: Moving one node's data from primary direct access disk

2007-06-20 Thread Salak Juraj
If you are not in a hurry and your retention periods are not too long,
simply update that node's backup copy groups to point to the new sequential
access pool
(which is to be done anyway)
and let your 6 TB in the old random pool expire over time.
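The "repoint and let it expire" approach might look like this with admin commands; the domain, policy set, and pool names are illustrative, not from this thread:

```text
update copygroup  MYDOM  MYSET  STANDARD  STANDARD  type=backup -
    destination=SEQPOOL
activate policyset MYDOM MYSET
* new backups now go to SEQPOOL; the 6 TB already in the random
* pool simply expires under the normal retention rules
```

This trades a one-time data move for a gradual drain, which only works if the retention period is acceptable.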

Best
Juraj

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of Andrew
Carlson
Sent: Tuesday, 19 June 2007 19:45
To: ADSM-L@VM.MARIST.EDU
Subject: Re: Moving one node's data from primary direct access disk

This is an idea I had not thought of.  Thanks for the tip.

On 6/19/07, Thorneycroft, Doug [EMAIL PROTECTED] wrote:

 Maybe you could export/import the node.

 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] Behalf 
 Of Andrew Carlson
 Sent: Tuesday, June 19, 2007 8:45 AM
 To: ADSM-L@VM.MARIST.EDU
  Subject: Re: Moving one node's data from primary direct access disk


  The problem with that is that it will migrate all data, not just this
  one node's data.

 On 6/19/07, Thorneycroft, Doug [EMAIL PROTECTED] wrote:
 
  Set your Sequential access pool as next pool on your random access 
  pool, then migrate the data.
 
  -Original Message-
  From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] Behalf 
  Of Andrew Carlson
  Sent: Tuesday, June 19, 2007 8:21 AM
  To: ADSM-L@VM.MARIST.EDU
   Subject: Moving one node's data from primary direct access disk
 
 
  I would like to move about 6TB of data for a node from Random Access
 disk
  to
  sequential accesss.  Since move nodedata doesn't work on random 
  access pools, I was wondering if there is an easy way to do it.  The 
  data is archive data, and the only plan I could come up with was 
  moving data
 from
  individual volumes to a staging area, a sequential access pool, then 
  moving nodedata from there to the final destination sequential 
  access pool.  Anyone have any better ideas?  Thanks for any input.
 
  TSM Server: 5.3.2.3
 
  --
  Andy Carlson
 
 
 --
 -
  Gamecube:$150,PSO:$50,Broadband Adapter: $35, Hunters License:
  $8.95/month,
  The feeling of seeing the red box with the item you want in
 it:Priceless.
 



 --
 Andy Carlson

 --
 - Gamecube:$150,PSO:$50,Broadband Adapter: $35, Hunters License:
 $8.95/month,
 The feeling of seeing the red box with the item you want in it:Priceless.




--
Andy Carlson
---
Gamecube:$150,PSO:$50,Broadband Adapter: $35, Hunters License: $8.95/month, The 
feeling of seeing the red box with the item you want in it:Priceless.


Re: Point in time restore problem

2007-05-30 Thread Salak Juraj
Dear Paul,

Being unhappy with the product, you may want to re-think either the use of TSM
at your site at all, or the necessity of the MODE=ABSOLUTE backup.

I do not know anybody who generally does regular absolute backups in TSM for
the reasons you mentioned.
For me, I had a performance problem when restoring desktop and \documents and
settings\ data from PC clients.
In contrast to your mail it was not caused by traversing the recovery log, but
by seek times on LTO tapes.
I solved my problem by placing backups of the data concerned on a TSM DISK-based
storage pool.
This was a quite simple and perfect solution for my scenario/problem.

As far as I understood your explanation, you merely want to use absolute
"because you have been told that".. etc. etc.

I strongly believe that a general and unevaluated "being told that"
is neither a sound reason to express positive or negative verdicts of
anything, including but not restricted to TSM,
nor a wise motivation to do something in a particular way.

best regards

Juraj Salak, Austria



 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On
 Behalf Of Paul Dudley
 Sent: Tuesday, 29 May 2007 01:49
 To: ADSM-L@VM.MARIST.EDU
 Subject: Re: Point in time restore problem
 
 Well, I will give it a go, but this just confirms my belief 
 that TSM is the most user-unfriendly, frustrating, annoying, 
 unwieldy IT system I have encountered in 22 years of IT work.
 
 Regards
 Paul
 
 
  -Original Message-
  From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] 
 On Behalf 
  Of William Boyer
  Sent: Sunday, 27 May 2007 1:08 PM
  To: ADSM-L@VM.MARIST.EDU
  Subject: Re: [ADSM-L] Point in time restore problem
 
  Instead of doing a SELECTIVE backup on a periodic basis, which won't
 update
  the last backup date/time of the filespace, use the 
 MODE=ABSOLUTE of 
  the backup copygroup. In your domain, make a copy of the 
 active policy 
  set and change all the management class backup copygroups to 
  MODE=ABSOLUTE instead of the default of MODIFIED.
  Then on your occasional timeframe, run an admin schedule 
 to activate 
  this policy set, do your backups which are incremental and
 then the
  next day run another admin schedule to activate your MODE=MODIFIED 
  policyset. This way your schedules don't change and as far as the client
  is concerned you just ran an unqualified INCREMENTAL backup and the
  filespaces are updated. Since the active policyset will have ABSOLUTE,
  you'll get a copy of every file whether it's changed or not.
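Bill's policy-set flip could be driven by two administrative schedules along these lines; the schedule, domain, and policy set names are illustrative, not from this thread:

```text
define schedule GO_ABSOLUTE type=administrative active=yes -
    cmd="activate policyset MYDOM ABSOLUTE_SET" -
    starttime=18:00 period=1 perunits=months
define schedule GO_MODIFIED type=administrative active=yes -
    cmd="activate policyset MYDOM MODIFIED_SET" -
    starttime=18:00 period=1 perunits=months
* in practice the second schedule would be offset to fire the day
* after the first, once the absolute backups have completed
```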
 
  I've been doing TSM now for over 8 years and this is the first time I've
  ever thought of a way to use multiple policyset definitions in a domain.
 
  Bill Boyer
  Backup my harddrive? How do I put it in reverse? - ??
 
 
  -Original Message-
  From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] 
 On Behalf 
  Of Paul Dudley
  Sent: Saturday, May 26, 2007 8:59 PM
  To: ADSM-L@VM.MARIST.EDU
  Subject: Re: Point in time restore problem
 
  From what I read the standard incremental backup is 
 restricted in that
 it only
  backs up new or changed files since the last incremental backup.
 
  However I have been told that we need to run absolute incremental
 backups
  on a periodic basis - these incremental backups backup all files 
  whether they have changed or not, so that the Last Incr
 Date is
  updated, so that Point in time restores don't have to traverse 
  through a huge transaction log and spend long periods of time 
  restoring files that were later deleted.
 
  I quote from the dsmc help option for incremental backups:
 
  Mode:
  Permits you to back up only files that changed since the last backup
 (modified).
  Also permits you to back up the files whether they changed or not 
  (absolute).
 
  What I want to know is if you can run an absolute backup from the
 command
  line on the client server.
 
  The end result I want to achieve from all of this, is to run full
 backups on a
  periodic basis so that when I have to perform a Point in time 
  restore it does it quickly and does not have to
 traverse a huge
  transaction log and restore files that were later deleted.
 
  Regards
  Paul
 
 
 
 
 
 ANL - CELEBRATING 50 YEARS
 
 ANL DISCLAIMER
 This e-mail and any file attached is confidential, and 
 intended solely to the named addressees. Any unauthorised 
 dissemination or use is strictly prohibited. If you received 
 this e-mail in error, please immediately notify the sender by 
 return e-mail from your system. Please do not copy, use or 
 make reference to it for any purpose, or disclose its 
 contents to any person.
 


Re: TSM only reads from COPY1 during DB backup

2007-04-26 Thread Salak Juraj
I don't think one can generally state that.

If it were generally true, then RAID-1 controllers would generally have
to be configured to read from one disk only
in order, to paraphrase your words,
not to cancel all gains obtained with disk-embedded read-ahead algorithms.

I believe such test results strongly depend on many different configuration
issues, such as RAID implemented in one or two controllers, the channel
architecture used, channel count, read block size, RAID buffer size, disk
buffer size, read-ahead algorithms both on the controller and on the disk,
parameter settings, etc.

Undoubtedly,
the theoretical upper performance limit when reading from one disk is the disk's
streaming speed,
and the theoretical upper limit when reading from two disks is twice as high.

As always, in reality you can only converge toward those limits but never reach
them fully, and the more complicated the solution is (e.g. 2 disks against 1 disk)
the larger the gap between real performance and the theoretical limit will be.
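Juraj's theoretical-limit argument as a quick calculation (the numbers are purely illustrative):

```python
def mirror_read_efficiency(observed_mb_s, single_disk_stream_mb_s, disks=2):
    """Fraction of the theoretical limit (disks x single-disk streaming
    rate) actually achieved by a read spread across a mirror."""
    return observed_mb_s / (disks * single_disk_stream_mb_s)

# A mirrored read that only matches one disk's streaming rate sits at
# 50% of the two-disk theoretical limit -- the gap Juraj points out.
```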

The test you mention apparently falls 50% under the theoretical performance
limit, which is rather bad, so I believe there must either have been a
configuration issue or one of the components used was of insufficient quality
(maybe the RAID firmware).

best

Juraj








 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On
 Behalf Of Paul van Dongen
 Sent: Wednesday, 25 April 2007 21:55
 To: ADSM-L@VM.MARIST.EDU
 Subject: Re: TSM only reads from COPY1 during DB backup
 
If it wasn't a dream, I remember reading something about
 this issue. It stated that this behaviour was by design,
 because reading blocks from both copies of a mirrored volume
 cancelled all the gains obtained with the controller and disk
 subsystem read-ahead algorithms.
I even remember reading it was actually tested.

 Paul
 
 -Original Message-
 From: ADSM: Dist Stor Manager on behalf of Orville Lantto
 Sent: Wed 4/25/07 16:22
 To:   ADSM-L@VM.MARIST.EDU
 Cc:   
 Subject:  Re: TSM only reads from COPY1 during DB backup
 
 Performance is the issue.  As tapes get faster and faster, 
 trying to get a db backup without shoe-shining the tape 
 drive gets harder.  Using storage sub-system mirroring is an 
 option, but not the recommended one for TSM.   Perhaps there 
 is a sound technical reason that db reads cannot be made from 
 both sides of the mirror, but it could be that it was just 
 programmer convenience.  Either way, I will have to design 
 around this feature to squeeze a bit more performance out 
 of my storage.
  
 Orville L. Lantto
 Glasshouse Technologies, Inc.
  
 
 
 
 From: ADSM: Dist Stor Manager on behalf of Richard Sims
 Sent: Wed 4/25/2007 10:49
 To: ADSM-L@VM.MARIST.EDU
 Subject: Re: [ADSM-L] TSM only reads from COPY1 during DB backup
 
 
 
 On Apr 25, 2007, at 10:55 AM, Orville Lantto wrote:
 
  Reads should be safe from mirrored volumes and are commonly done in 
  operating systems to load  balance.  Not taking advantage of the 
  available IO resource is wasteful and puts an unnecessarily 
 unbalanced 
  load on an already IO stressed system.  It slows down db 
 backups too.
 
 Then your issue is performance, rather than database veracity.
 This is addressed by the disk architecture chosen for the
 TSM database, where raw logical volumes and RAID on top of
 high-performance disks accomplish that.  Complementary
 volume striping largely addresses TSM's symmetrical mirror
 writing and singular reading.  Whereas TSM's mirroring is an
 integrity measure rather than a performance measure, you won't
 get full equivalence from that.
 Another approach, as seen in various customer postings, is to 
 employ disk subsystem mirroring rather than TSM's application 
 mirroring.  In that way you get full duality, but sacrifice 
 the protections and recoverability which TSM offers.
 
 Richard Sims
 

This e-mail may contain confidential and/or privileged information. If you are 
not the intended recipient (or have received this e-mail in error) please 
notify the sender immediately and destroy this e-mail. Any unauthorised 
copying, disclosure or distribution of the material in this e-mail is strictly 
forbidden.


AW: Here is some very negative press about TSM

2007-02-21 Thread Salak Juraj
Folks,

Someone already posted the answer to searchstorage.
I wanted to do so as well but was prevented by our internet access restrictions.

One trivial point still not mentioned in the posting is:

The reporter writes that TSM is slow in restores because of its incremental
backup method,
and argues that full backups do not have this problem.

Well then... TSM does have full backup options as well,
at least 3 of them: archive, selective, and image.
So IF this is THE solution for the unnamed expert, he/she can simply use it WITH
TSM.

best regards

Juraj





 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of Kelly Lipp
 Sent: Thursday, 8 February 2007 20:15
 To: ADSM-L@VM.MARIST.EDU
 Subject: Here is some very negative press about TSM
 
 Folks,
  
 http://searchstorage.techtarget.com/originalContent/0,289142,sid5_gci1241036,00.html?track=sy60
  
 Interesting.  I would suggest a flood of email to her 
 suggesting she is full of baloney.
  
 We questioned our folks at IBM about this and did get a 
 response.  They were very disappointed by the tone of this 
 article but not surprised.
 Apparently this reporter is not a good friend.  Duh.  For IBM 
 to respond, however, would have provided more traction to the 
 story and nobody needed that.
  
 Kelly J. Lipp
 VP Manufacturing  CTO
 STORServer, Inc.
 485-B Elkton Drive
 Colorado Springs, CO 80907
 719-266-8777
 [EMAIL PROTECTED]
  
 



AW: Performance with move data and LTO3

2006-12-22 Thread Salak Juraj
Hi! 
I saw some related information (source: Quantum) at
http://www.datastor.co.nz/Datastor/Promotions.nsf/4a91ca5e06d20e15cc256ebe0002290e/d954d1c5e5e6df09cc25723b00740956/$FILE/When%20to%20Choose%20LTO3%20Tape%20Drives.pdf

best
Juraj 

 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of Henrik Wahlstedt
 Sent: Thursday, 21 December 2006 15:43
 To: ADSM-L@VM.MARIST.EDU
 Subject: Re: Performance with move data and LTO3
 
 Nice one! I get back on this after the Holidays.
 
 Thanks
 Henrik 
 
 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] 
 On Behalf Of Prather, Wanda
 Sent: den 19 december 2006 18:15
 To: ADSM-L@VM.MARIST.EDU
 Subject: Re: Performance with move data and LTO3
 
 Interesting.
 
 Differences in IBM vs. HP LTO3 drives:
 
I have been told that the IBM drives do smart 
 compression using a bypass buffer.  If a block of data is 
 going to expand during compression, the IBM drives will stop 
 compression and write the uncompressed block, which should 
 make them a bit faster.
 
 Re tape to tape operations:
 
I have observed the same behavior; tape to tape operations 
 are inexplicably slower than you would expect when the TSM 
 server is on WINDOWS.  I have observed this with fibre 
 drives, and SCSI drives, 3590 and LTO.  I suspect it has 
 something to do with buffer use, but since Windows provides 
 no tools whatever to measure performance of tape devices or 
 buses with tapes on them, I've never been able to make any 
 other determination.  
 
 I don't think it is a READ issue with the drives.  Try 
 testing using an AUDIT; that just reads the tape and doesn't 
 write anything.  I suspect you'll get faster READ times.  I 
 would be interested in seeing your results!
 
 
 
 Wanda Prather
 I/O, I/O, It's all about I/O  -(me)
 
 
  
 
 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] 
 On Behalf Of Henrik Wahlstedt
 Sent: Thursday, December 14, 2006 10:37 AM
 To: ADSM-L@VM.MARIST.EDU
 Subject: Performance with move data and LTO3
 
 Hi,
 
 I wonder what transfer rates (move data from drive to drive) 
 I am supposed to see with LTO3.
 
 I have two TSM servers, one 32-bit Win2k3 and one 64-bit 
 2.6.9-11.Elsmp, with a SL500 and FC LTO3 drives. 
 Similar HW (HP DL585) except for one server have HP- and the 
 other have IBM drives. Drives are on separate PCI busses.
 
 I used a dataset of 50 GB with large files, same file type on
 both systems. Only scratch tapes and no expiration on the datasets.
 No other tape activity on the systems during the tests.
 
 I tested disk-mt0-mt1-mt2-mt3-mt1-mt0-disk.
 From disk to tape I get a throughput of 74-76 MB/s with IBM
 drives (migration).
 From tape to tape (move data) with HP drives I get a
 throughput of 30-46 MB/s, and with IBM drives I get 39-59 MB/s.
 From disk to tape (move data) with IBM drives I get a
 throughput of 44 MB/s.

 Apparently write speed seems OK, but read speed is an issue?!
 Or is this normal?
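For reference, rates like those quoted above are just dataset size over elapsed time; a small sketch (the 50 GB figure comes from the test described above, the elapsed time is a made-up example):

```python
# Sketch: average transfer rate for a TSM move data / migration run.
# The 50 GB dataset size matches the test above; the 20-minute
# elapsed time is an invented example, not a measurement.

def throughput_mb_s(dataset_gb: float, elapsed_s: float) -> float:
    """Average transfer rate in MB/s for moving dataset_gb in elapsed_s."""
    return dataset_gb * 1024 / elapsed_s

# A 50 GB move data finishing in ~20 minutes averages ~43 MB/s,
# in the same range as the tape-to-tape numbers reported above.
rate = throughput_mb_s(50, 20 * 60)
print(f"{rate:.0f} MB/s")  # 43 MB/s
```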
 
 
 
 Thanks
 Henrik
 ---
 The information contained in this message may be CONFIDENTIAL 
 and is intended for the addressee only. Any unauthorised use, 
 dissemination of the information or copying of this message 
 is prohibited. If you are not the addressee, please notify 
 the sender immediately by return e-mail and delete this message.
 Thank you.
 
 
 



AW: duplicate backup

2006-12-04 Thread Salak Juraj
I would strongly recommend opting for a second tape drive
before even thinking about workarounds.

Your customer should understand that working with a single drive
makes his backup and (TSM) management processes stop in case of a single HW
failure.

Even normal TSM management is a pain with only one tape drive.

Each TSM installation should use at least 2 tape drives;
AFAIK this is even documented in official TSM documents.

Maybe not what you wanted to hear,
but I am sure you will not make your customer happy otherwise.

Praise your customer: he is FULLY right to want his backups duplicated,
and make it clear to him that there is no free lunch.


best regards


Juraj




 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of Sasho Sterjovski
 Sent: Friday, 24 November 2006 11:07
 To: ADSM-L@VM.MARIST.EDU
 Subject: duplicate backup
 
 I have TSM 5.3 on AIX and an LTO 3573 TS3100.

 My customer wants to duplicate backup data. I don't have DISK
 storage and only 1 tape drive, so I can have only a primary
 storage pool.
 Can I back up data from one node to two different tapes? Can I
 tell TSM to back up data to a tape that I specify?
 


AW: TSM 5.3 - A part of Domain

2006-11-08 Thread Salak Juraj
AFAIK you can optionally use AD to help your TSM clients resolve the name of the TSM
server.
As you ask the question below, I must assume you do not use this feature,
so you have to take care that your OPT files point to the correct TSM server
name or IP address.

Btw. I do NOT join the TSM server into AD,
because I believe it is simpler
if the TSM server is as stand-alone as possible
in case of a disaster recovery of the whole computing center.

best regards

Juraj


 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of Pranav Parikh
 Sent: Tuesday, 7 November 2006 10:05
 To: ADSM-L@VM.MARIST.EDU
 Subject: TSM 5.3 - A part of Domain
 
 Hi,
 
 Please respond to the query below.

 We execute backups through Tivoli Storage Manager V5.3
 (Windows based). We are planning to re-join the existing TSM
 server to a new domain. We would like to know:

 What impact will it have if the existing TSM server rejoins
 another domain (in a separate forest)? Will it in any way affect
 the backup process (integration with previous
 backups)?
 
 
 
 Regards
 Pranav
 


AW: Migrate Inactive Files Feature Proposal

2006-09-19 Thread Salak Juraj
Having fairly large disk storage pools now (which were too expensive a couple of
years ago), I am one of those who will start deploying this feature as soon as it is available.

best
Juraj


 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of Thorneycroft, Doug
 Sent: Monday, 18 September 2006 21:34
 To: ADSM-L@VM.MARIST.EDU
 Subject: Re: Migrate Inactive Files Feature Proposal
 
 I put out this idea a couple of years ago; it didn't generate
 a whole lot of interest then.
 
 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] 
 Behalf Of Orville Lantto
 Sent: Monday, September 18, 2006 12:14 PM
 To: ADSM-L@VM.MARIST.EDU
 Subject: Migrate Inactive Files Feature Proposal
 
 
 I had an idea over the weekend and would like some comment on 
 its feasibility.  With the growing need for 
 disk-to-disk-to-tape backup strategies, it would be very nice 
 to be able to migrate ONLY inactive copies of files out of a 
 storage pool.  I can come up with no reason that this should 
 be difficult or impossible to be added as a TSM feature.
  
 Any comments?
  
 Orville L. Lantto
 Glasshouse Technologies, Inc.
  
 


AW: Incremental change and Journaling

2006-07-17 Thread Salak Juraj
 to scan its filesystem, but it won't reduce the amount of 
 data you have to send over that leased line 

.. except for (not) transferring metadata at the beginning of the backup, which
can be a lot - or not - depending on the file count of the filesystem in question.

regards
Juraj

 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of Prather, Wanda
 Sent: Wednesday, 5 July 2006 17:38
 To: ADSM-L@VM.MARIST.EDU
 Subject: Re: Incremental change and Journaling
 
 Journaling will cut down the time it takes your remote client 
 to scan its filesystem, but it won't reduce the amount of 
 data you have to send over that leased line - which is really 
 causing your problem?
 
 Sounds like subfile backup might be more useful to you than 
 journaling.
 It will actually reduce the amount of data sent per day by
 sending only the changed blocks instead of changed files
 (unless the files themselves are > 2 GB in size).
 
 


AW: TSM diskpool on SATA

2006-06-21 Thread Salak Juraj
Just being curious:

does your SATA controller and disks support NCQ/TCQ, and is either of them
enabled?
Is the write cache on the disks enabled?

I am only extrapolating from SCSI experience: missing command queuing and disk write
cache can slow down the RAID (if both are inactive).

Juraj

 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of John Monahan
 Sent: Tuesday, 20 June 2006 21:35
 To: ADSM-L@VM.MARIST.EDU
 Subject: Re: TSM diskpool on SATA
 
  Did you do any FILE devclass work?  It sounds as though 
 your miserable 
  performance was head thrashing, which we'd get in multiple LV DISK 
  devclasses, and also in FILE devclasses.  Darn. :)
 
 No I was using a DISK devclass, but I'm almost positive it 
 wouldn't have made any difference with FILE devclasses.  Some 
 of the LUNs were faster than others (first ones in the RAID 
 group created) and as soon as I was writing to more than one 
 at a time my performance would tank even further, which 
 shouldn't be any different had I used FILE devclasses.  I 
 would have tried file devclasses with more time for 
 performance reasons, but I had spent too much time on it 
 already and the 150 MB/sec writes for my diskpool was 
 adequate for my needs and outpaced the capabilities of my 
 single Gigabit network connection to the TSM server, so I moved on.
 
 Hundreds of LVs with FILE devclasses on fibre disks would 
 probably do exactly what you are looking for and provide 
 excellent performance.  SATA just isn't good enough at 
 multiple, simultaneous I/Os to the same set of disks yet.
 


AW: AW: TDP for Exchange - Management Class

2006-05-25 Thread Salak Juraj
Hi!

 ... So what is the purpose of the many-management-classes-per-node facility if it can't ...
The purpose is to assign different backup/retention attributes to the respective
objects,
but not to change those attributes erratically over time.

What you actually designed in the table below means your instructions for TSM
are as follows:

January 1st: Dear TSM, keep all *\...\full backups for this long:
%Sched_Exch_daily%
January 2nd: Dear TSM, keep all *\...\full backups for this long:
%Sched_Exch_daily%
...
January 31st: Dear TSM, keep all *\...\full backups for this long:
%Sched_Exch_monthly%
February 1st: Dear TSM, keep all *\...\full backups for this long:
%Sched_Exch_daily%
...
December 31st: Dear TSM, keep all *\...\full backups for this long:
%Sched_Exch_yearly%
January 1st: Dear TSM, keep all *\...\full backups for this long:
%Sched_Exch_daily%
...
What does the TSM server think about this? I guess if TSM could speak, he'd say:
"Well, that guy is changing his own rules pretty often.
I do not understand what he wants to achieve, but he is the boss; I'll simply
do what he tells me to do."

:-)))
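The effect described above — each new backup re-states the retention rule for ALL existing full backups of the same object — can be illustrated with a toy model (hypothetical names; TSM's real policy engine is of course far more involved than this sketch):

```python
# Toy model of TSM rebinding: every stored version of one backup object
# shares a single management-class binding, so the newest backup's class
# governs all earlier versions too. Names are invented for illustration.

class Server:
    def __init__(self):
        self.versions = []        # stored backup versions of one object
        self.bound_class = None   # ONE management class for ALL versions

    def backup(self, label: str, mgmt_class: str):
        self.versions.append(label)
        self.bound_class = mgmt_class  # rebinds every earlier version too

srv = Server()
srv.backup("jan 31 full", "EXCH_monthly")  # meant to be kept for a year
srv.backup("feb 01 full", "EXCH_daily")    # the next daily backup...
# ...and now the January "monthly" version is governed by EXCH_daily:
print(srv.bound_class)  # EXCH_daily
```

This is exactly why the daily/monthly/yearly scheme in the table fails: the class in effect at the latest backup wins, regardless of what the older versions were bound to.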


So again, what are the MGMT Classes good for:

INCLUDE *\...\fullEXCH_mine 
INCLUDE *\...\copy EXCH_another 
INCLUDE *\...\incremental   EXCH_another_one

and what are they NOT good for:

   INCLUDE *\...\full   EXCH_mine 
   INCLUDE *\...\full   EXCH_another 
   INCLUDE *\...\full   EXCH_another_one



 of many management class per node utility
It's wrong:
it is not per node, it is per domain. And it is not a "utility" but a setting.
Using it you can manage the backups of X Exchange servers while defining the rules
(management classes) once only.

-
hope it helps

best regards

Juraj Salak



Von: Robert Ouzen Ouzen
Gesendet: Sa 20.05.2006 10:56
An: ADSM-L@VM.MARIST.EDU
Betreff: Re: AW: TDP for Exchange - Management Class


Hi ,

This became a little bit confusing. So what is the purpose of the
many-management-classes-per-node facility if it can't be used for the purpose described?

Domain            Nodename   MgmtClass     Include                           Schedule            Opt File
Domain_Exchange   Exchange   EXCH_daily    INCLUDE *\...\full EXCH_daily     Sched_Exch_daily    dsm.opt (default)
Domain_Exchange   Exchange   EXCH_monthly  INCLUDE *\...\full EXCH_monthly   Sched_Exch_monthly  dsm_monthly.opt
Domain_Exchange   Exchange   EXCH_yearly   INCLUDE *\...\full EXCH_yearly    Sched_Exch_yearly   dsm_yearly.opt

If I understand correctly, with this configuration the files will be rebound
to the management class in effect on every backup!!

And the only way to achieve it is to also create 3 different nodenames.
Correct?

So wasteful ...

Regards Robert Ouzen

  
-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of Volker 
Maibaum
Sent: Friday, May 19, 2006 11:57 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: AW: TDP for Exchange - Management Class

Hi, 

thanks to all for the very helpful feedback!

I didn't think of using the copy backup for monthly and yearly backups. That 
will make it a lot easier

I guess that I will use the monthly policy for copy backups:
INCLUDE *\...\copy MONTHLY

And use a separate dsm.opt file (dsm.yearly.opt) to bind the yearly copy
backups to the proper management class.
C:\Programme\Tivoli\TSM\TDPExchange\start_full_yearly_backup.cmd
pointing to dsm.yearly.opt

regards, 

Volker


 
On Friday, 19.05.2006, at 11:34 +0200, Salak Juraj wrote:
 Hi Del!
 
 I might be wrong because I do not use TDP 4 Mails by myself, I am only 
 RTFM, but I´d think about simplified solution 2 by Del:
 
 Background: 
 I think the only reason for having different requirements for monthly 
 an yearly backups is TSM storage space, if this were not a problem keeping 
 monthly backups for as long as yearly backups should be kept would be 
 preferrable.
 
 a) create only 1 NODENAME
 b) define 
   INCLUDE *\...\full  EXCH_STANDARD and maybe
   INCLUDE *\...\incr EXCH_STANDARD and maybe
   INCLUDE *\...\diff  EXCH_STANDARD
 appropriately to your regular (daily) backup requirements
 
 c) define
   INCLUDE *\...\copy EXCH_MONTHLY_AND_YEARLY appropriate to maximal 
 combined  requirements of your monthly AND yearly requirements AND 
 have EXCH_MONTHLY point to separate TSM storage pool (EXCH_VERYOLD)
 
 d) on regular basis (maybe yearly) check out all full tapes from EXCH_VERYOLD 
 storage pool from library.
 Disadvantage: reclamation of backup storage pool issues because of 
 offsite tapes in primary storage pool, but this can be solved as well.
 
 You will end with a bit less automated restore (only) for very old 
 data but with very clear and simple

AW: TDP for Exchange - Management Class

2006-05-19 Thread Salak Juraj
Hi Del!

I might be wrong because I do not use TDP for mail myself, I am only RTFM-ing,
but I'd think about a simplified version of Del's solution 2:

Background:
I think the only reason for having different requirements for monthly and yearly
backups is TSM storage space;
if this were not a problem, keeping monthly backups for as long as yearly
backups should be kept would be preferable.

a) create only 1 NODENAME
b) define 
INCLUDE *\...\full  EXCH_STANDARD and maybe
INCLUDE *\...\incr EXCH_STANDARD and maybe
INCLUDE *\...\diff  EXCH_STANDARD
appropriately to your regular (daily) backup requirements

c) define
INCLUDE *\...\copy EXCH_MONTHLY_AND_YEARLY
appropriate to maximal combined  requirements of your monthly AND yearly 
requirements
AND have EXCH_MONTHLY point to separate TSM storage pool (EXCH_VERYOLD)

d) on a regular basis (maybe yearly), check out all full tapes from the EXCH_VERYOLD
storage pool from the library.
Disadvantage: reclamation issues for the backup storage pool because of offsite
tapes in the primary storage pool,
but this can be solved as well.

You will end up with a slightly less automated restore (only) for very old data,
but with a very clear and simple concept for everyday/every-month backup operations,
and with more granularity (monthly) even for data older than a year.

I am interested in your thoughts and doubts about this configuration!

regards
Juraj




 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of Del Hoobler
 Sent: Friday, 12 May 2006 15:14
 To: ADSM-L@VM.MARIST.EDU
 Subject: Re: TDP for Exchange - Management Class
 
 Hi Volker,
 
 Are you using separate NODENAMEs for each of the different 
 DSM.OPT files? If not, your solution won't do what you think.
 
 Data Protection for Exchange stores objects in the backup 
 pool, not the archive pool. That means, each full backup gets 
 the same TSM Server name (similar to backing the same file 
 name up with the BA Client.) It follows normal TSM Server 
 policy rules.
 That means, if you are performing FULL backups using the same 
 NODENAME, each time you back up with a different management 
 class, all previous backups will get rebound to that new 
 management class... just like BA Client file backups.
 Remember, this is standard behavior for BACKUP. You are 
 trying to get ARCHIVE type function, which won't work.
 
 Good news... there is a way to do exactly what you want...
 ... I have two ways to do it.
 
 Solution 1:
   Create a separate NODENAME for your 3 types of backups.
   For example:  EXCHSRV1, EXCHSRV1_MONTHLY, EXCHSRV1_YEARLY
   Have a separate DSM.OPT for each NODENAME, with the proper
   management class bindings. Set up your three schedules for
   your three separate nodenames.
 
 Solution 2:
   Create 2 separate NODENAMEs. Use one for the STANDARD and
   MONTHLY backups (perform COPY type backups for your MONTHLY 
 backups).
   Use the other nodename for the YEARLY backups.
   For example:  EXCHSRV1, EXCHSRV1_YEARLY
   Have one DSM.OPT for your STANDARD and MONTHLY backups and
   a different DSM.OPT for your YEARLY backups.
   In the DSM.OPT file for your STANDARD and MONTHLY backups,
   set up different policy bindings for FULL backups vs. COPY
   backups (since FULL and COPY get named differently on the
   TSM Server, they will also get their own policy.)
 
   Example DSM.OPT INCLUDE statements are like this:
   *---* The following example binds all FULL objects
   *---* to management class EXCH_STANDARD:
 INCLUDE *\...\full EXCH_STANDARD
 
   *---* The following example binds all COPY objects
   *---* to management class EXCH_MONTHLY:
INCLUDE *\...\copy EXCH_MONTHLY
 
 
 As far as your original question... you can check the 
 management class bindings by bringing up the Data Protection 
 for Exchange GUI... go to the restore tab, click on the 
 storage group you want to look at. It will show the 
 management class bindings. (Make sure to view active and 
 inactive, to see the previous backup bindings as well.) You 
 can also use the SHOW VERSION TSM Server command:
SHOW VERSION EXCHSRV1 *
SHOW VERSION EXCHSRV1_MONTHLY *
SHOW VERSION EXCHSRV1_YEARLY *
 This will show you the management class bindings.
 
 I hope this helps. Let me know if any of this isn't clear.
 
 Thanks,
 
 Del
 
 
 
 
 ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU wrote on 05/12/2006
 03:03:17 AM:
 
  Hi,
 
  I want to do daily, monthly and yearly backups of our 
 Exchange Server.
  Therefore I defined three management classes:
  1) standard (for daily backups - 14 days retention)
  2) monthly (365 days retentions, backup once a month)
  3) yearly (5 years retention, backup once a year)
 
  I also defined three schedules on the server side, starting three 
  different command files on our exchange server which are using 
  different dsm.opt files.
 
  I now want to check if the backups are bound to the correct 
 

AW: AW: TDP for Exchange - Management Class

2006-05-19 Thread Salak Juraj
Hi!

I suspect this will not work the way you would like!

Running C:\Programme\Tivoli\TSM\TDPExchange\start_full_yearly_backup.cmd
will, to my knowledge, rebind ALL existing TDP COPY backups to the
management class according to dsm.yearly.opt. Correct me if I am wrong.
This is not a problem so far.

But... running the regular monthly backup afterwards, using the regular dsm.opt,
will - again - rebind ALL existing TDP COPY backups
according to dsm.opt, thus effectively shortening the life of your yearly
backups
to whatever you defined in dsm.opt, probably one year only.

This is why I suggested keeping all monthly backups for as long as you want to
keep yearly backups,
thus effectively replacing yearly backups with long-lived monthly backups
(you will NOT need separate yearly backups).
Advantage: you will be able to restore e.g. the April 1st backup from a few years back.
Disadvantage: more space on TSM tapes, hence the suggestion of checking old tapes
out.

best
Juraj Salak

 

 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of Volker Maibaum
 Sent: Friday, 19 May 2006 11:57
 To: ADSM-L@VM.MARIST.EDU
 Subject: Re: AW: TDP for Exchange - Management Class
 
 Hi, 
 
 thanks to all for the very helpful feedback!
 
 I didn't think of using the copy backup for monthly and 
 yearly backups. That will make it a lot easier
 
 I guess that I will use the monthly policy for copy backups   
   INCLUDE *\...\copy MONTHLY
 
 And use a seperate dsm.opt file (dsm.yearly.opt) to bind the 
 yearly copy backups to the proper management class.
 C:\Programme\Tivoli\TSM\TDPExchange\start_full_yearly_backup.cmd
 pointing to dsm.yearly.opt
 
 regards, 
 
 Volker
 
 
  
  On Friday, 19.05.2006, at 11:34 +0200, Salak Juraj wrote:
  Hi Del!
  
  I might be wrong because I do not use TDP 4 Mails by 
 myself, I am only 
  RTFM, but I´d think about simplified solution 2 by Del:
  
  Background: 
  I think the only reason for having different requirements 
 for monthly 
  an yearly backups is TSM storage space, if this were not a 
 problem keeping monthly backups for as long as yearly backups 
 should be kept would be preferrable.
  
  a) create only 1 NODENAME
  b) define 
  INCLUDE *\...\full  EXCH_STANDARD and maybe
  INCLUDE *\...\incr EXCH_STANDARD and maybe
  INCLUDE *\...\diff  EXCH_STANDARD
  appropriately to your regular (daily) backup requirements
  
  c) define
  INCLUDE *\...\copy EXCH_MONTHLY_AND_YEARLY 
 appropriate to maximal 
  combined  requirements of your monthly AND yearly requirements AND 
  have EXCH_MONTHLY point to separate TSM storage pool (EXCH_VERYOLD)
  
  d) on regular basis (maybe yearly) check out all full tapes 
 from EXCH_VERYOLD storage pool from library.
  Disadvantage: reclamation of backup storage pool issues because of 
  offsite tapes in primary storage pool, but this can be 
 solved as well.
  
  You will end with a bit less automated restore (only) for very old 
  data but with very clear and simple concept for everyda/everymonth 
  backup operations and with more granularity (monthly) even 
 for data older than a year.
  
  I am interested in your thoughts and doubts about this 
 configuration!
  
  regards
  Juraj
  
  
  
  
   -Original Message-
   From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of Del Hoobler
   Sent: Friday, 12 May 2006 15:14
   To: ADSM-L@VM.MARIST.EDU
   Subject: Re: TDP for Exchange - Management Class
   
   Hi Volker,
   
   Are you using separate NODENAMEs for each of the 
 different DSM.OPT 
   files? If not, your solution won't do what you think.
   
   Data Protection for Exchange stores objects in the backup 
 pool, not 
   the archive pool. That means, each full backup gets the same TSM 
   Server name (similar to backing the same file name up with the BA 
   Client.) It follows normal TSM Server policy rules.
   That means, if you are performing FULL backups using the same 
   NODENAME, each time you back up with a different 
 management class, 
   all previous backups will get rebound to that new management 
   class... just like BA Client file backups.
   Remember, this is standard behavior for BACKUP. You are trying to 
   get ARCHIVE type function, which won't work.
   
   Good news... there is a way to do exactly what you want...
   ... I have two ways to do it.
   
   Solution 1:
 Create a separate NODENAME for your 3 types of backups.
 For example:  EXCHSRV1, EXCHSRV1_MONTHLY, EXCHSRV1_YEARLY
 Have a separate DSM.OPT for each NODENAME, with the proper
 management class bindings. Set up your three schedules for
 your three separate nodenames.
   
   Solution 2:
 Create 2 separate NODENAMEs. Use one for the STANDARD and
 MONTHLY backups (perform COPY type backups for your MONTHLY 
   backups).
 Use the other nodename for the YEARLY backups.
 For example:  EXCHSRV1, EXCHSRV1_YEARLY
 Have one DSM.OPT

AW: Library full

2006-05-02 Thread Salak Juraj
Just to be sure: you are aware of the MOVE MEDIA command, aren't you?
regards
Juraj
 

 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of Meadows, Andrew
 Sent: Tuesday, 2 May 2006 18:33
 To: ADSM-L@VM.MARIST.EDU
 Subject: Re: Library full
 
 Thank you everyone for their response to this. 
 The q med looks to be exactly what I was looking for. 
 
 Thanks,
 Andrew
 
 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] 
 On Behalf Of fred johanson
 Sent: Tuesday, May 02, 2006 10:38 AM
 To: ADSM-L@VM.MARIST.EDU
 Subject: Re: Library full
 
 What you want is Q MED.  Look in the help for that command.
 The basic parameter you need is DAYS=, but there are other
 limiters you can add.
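A hedged sketch of what the Q MED suggestion looks like in practice (administrative-client syntax quoted from memory; the pool name TAPEPOOL is made up, so verify the exact parameters with HELP QUERY MEDIA and HELP MOVE MEDIA on your own server before scripting anything):

```
query media * stgpool=TAPEPOOL wherestate=mountableinlib days=60
move media * stgpool=TAPEPOOL wherestate=mountableinlib days=60
```

If memory serves, the first command lists volumes in TAPEPOOL not referenced for 60 days, and the second checks them out toward the overflow location defined on the storage pool (its OVFLOCATION attribute), which is what the original poster wanted to automate.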
 
 
 At 09:31 AM 5/2/2006 -0500, you wrote:
 I have a 3584 library that us now full to the brim. I have 
 been trying 
 to create a select statement that will let me move tapes 
 that haven't 
 been accessed in X days out of the library. Anyone have a 
 script like 
 this already? I would be happy with just identifying the tapes and 
 manually moving them
 Thanks in advance,
 Andrew.
 
 This message is intended only for the use of the Addressee and may 
 contain information that is PRIVILEGED and CONFIDENTIAL.
 
 If you are not the intended recipient, you are hereby 
 notified that any 
 dissemination of this communication is strictly prohibited.
 
 If you have received this communication in error, please erase all 
 copies of the message and its attachments and notify us immediately.
 
 Thank you.
 
 
 Fred Johanson
 ITSM Administrator
 University of Chicago
 773-702-8464
 
 
 


AW: AW: Throttling back TSM client

2006-05-02 Thread Salak Juraj
thnx!! 

 -Ursprüngliche Nachricht-
 Von: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] Im 
 Auftrag von Richard Sims
 Gesendet: Freitag, 28. April 2006 14:45
 An: ADSM-L@VM.MARIST.EDU
 Betreff: Re: AW: Throttling back TSM client
 
 On Apr 28, 2006, at 6:46 AM, Salak Juraj wrote:
 
  :-)
 
   Is a full list of Raibeck's Rules available?
   Sounds like a new Murphy!
 
 Some of them:
 http://tsm-symposium.oucs.ox.ac.uk/2001/papers/Raibeck.Diagnostics.PDF
 http://tsm-symposium.oucs.ox.ac.uk/papers/TSM%20Client%20Diagnostics%
 20(Andy%20Raibeck).pdf
 


AW: Throttling back TSM client

2006-04-28 Thread Salak Juraj
:-)

Is a full list of Raibeck's Rules available?
Sounds like a new Murphy!
best
Juraj


 -Ursprüngliche Nachricht-
 Von: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] Im 
 Auftrag von Andrew Raibeck
 Gesendet: Dienstag, 25. April 2006 22:57
 An: ADSM-L@VM.MARIST.EDU
 Betreff: Re: Throttling back TSM client
 
  Seems that every action has some re-action.
 
 Sounds like Raibeck's Rule #37: There is no free lunch   :-)
 
 Regards,
 
 Andy
 
 Andy Raibeck
 IBM Software Group
 Tivoli Storage Manager Client Development Internal Notes 
 e-mail: Andrew Raibeck/Tucson/[EMAIL PROTECTED] Internet e-mail: 
 [EMAIL PROTECTED]
 
 IBM Tivoli Storage Manager support web page:
 http://www-306.ibm.com/software/sysmgmt/products/support/IBMTi
 voliStorageManager.html
 
 The only dumb question is the one that goes unasked.
 The command line is your friend.
 Good enough is the enemy of excellence.
 
 ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU wrote on 2006-04-25
 13:40:27:
 
  Thanks Richard, et al,
 
  I thought the default RESOURCEUTIL was also the minimal value, so I 
  don't think we could lower that any more than it already is.  Using 
  MEMORYEFFICIENTBACKUP is a good idea, as is using 'nice' to
 deprioritize.
 
  We have actually been working hard to improve TSM 
 performance so that 
  we can restore data more quickly.  Seems that every action has some 
  re-action.  Reducing the TSM Server as a bottleneck serves 
 to move the 
  bottleneck to the client, where it can interfere with other
 applications.
 
  ..Paul
 
  At 02:37 PM 4/25/2006, Richard Sims wrote:
  Certainly, de-tuning the TSM backups will reduce the 
 impact, where 
  the most obvious tactic is to minimize 
 RESOURceutilization. And you 
  can get more drastic via MEMORYEFficientbackup Yes. Depending upon 
  the file population, the influx of the Active files list at the 
  beginning of an incremental will always have a fixed 
 impact. Beyond 
  that, you can deprioritize the TSM client process at the OS level.
 
 
  --
  Paul ZarnowskiPh: 607-255-4757
  Manager, Storage Systems  Fx: 607-255-8521
  719 Rhodes Hall, Ithaca, NY 14853-3801Em: [EMAIL PROTECTED]
 


AW: Throttling back TSM client

2006-04-28 Thread Salak Juraj
.. in addition to the prior advice: 
lowering the runtime priority of the TSM service 
might (or might not) help, depending on other factors.

On my laptop, manually lowering the TSM Scheduler priority does help a lot, 
but there the hard disk is the bottleneck, not the network.
Anyway, if less CPU causes TSM to generate disk accesses less often, 
the same could be valid for network access.
.. unless TSM is the only application significantly competing for CPU resources 
at the given time.

Unfortunately, to my knowledge there is no Windows-native method for 
permanently changing a task/service priority,
so you'd have to play around with third-party tools, like 
http://www.prnwatch.com/prio.html.
I have NOT tried this tool in conjunction with the TSM Scheduler, but it should work.

regards
Juraj
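
One ad-hoc alternative on Windows XP/2003, not persistent across scheduler
restarts, is WMIC; the process name below assumes the backup runs as
dsmc.exe, and on some Windows versions the priority must be given as the
numeric value 16384 instead of the name:

```
rem lower the priority of a running TSM client process (illustrative)
wmic process where name="dsmc.exe" call setpriority "below normal"
```

A wrapper script started by the scheduler could issue this after launching
the backup.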


 -Ursprüngliche Nachricht-
 Von: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] Im 
 Auftrag von Paul Zarnowski
 Gesendet: Dienstag, 25. April 2006 20:14
 An: ADSM-L@VM.MARIST.EDU
 Betreff: Throttling back TSM client
 
 We have a small number of users here who are complaining that 
 when a TSM backup runs on their client system, it monopolizes 
 use of their network card.  They are looking for a way to 
 throttle back TSM's use of the network.  Has anyone else run 
 into this?  Any ideas?  The only thing I've come up with is 
 to configure a secondary NIC at 10MB/s and point these users 
 at that card.  This seems crude to me, so other ideas would 
 be welcome.
 
 ..Paul
 
 
 --
 Paul ZarnowskiPh: 607-255-4757
 Manager, Storage Systems  Fx: 607-255-8521
 719 Rhodes Hall, Ithaca, NY 14853-3801Em: [EMAIL PROTECTED]
 


AW: Reclamation

2006-04-28 Thread Salak Juraj
Hi,
does it work for copy pools as well?

I tried and got an error:
 
tsm: AOHBACKUP01> define stgpool ReclMyCopyPool reclaim1 desc="reclamation pool 
for MyCopyPool" nextstg=MyCopyPool maxscratch=2
ANR2387E DEFINE STGPOOL: Storage pool MYCOPYPOOL is not a primary pool.
ANS8001I Return code 3. 

regards
juraj


 -Ursprüngliche Nachricht-
 Von: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] Im 
 Auftrag von Michal Mertl
 Gesendet: Dienstag, 25. April 2006 22:02
 An: ADSM-L@VM.MARIST.EDU
 Betreff: Re: Reclamation
 
 Marc Crombeen wrote:
  Anyone have an idea of how to use a disk pool to do reclamation for 
  tape storage instead of using 2 tape drives?
 
 I would have guessed this is a very common configuration. You 
 set up a reclaim storage pool (e.g. 'define stgpool diskreclaim 
 filedev maxscratch=10 nextstgpool=tapepool') and set it as the 
 reclaim pool on your tape storage pool ('update stgpool 
 tapepool reclaimstgpool=diskreclaim').
 
 The data to be reclaimed doesn't necessarily have to fit into 
 the reclaim storage pool. TSM will copy from the tape being 
 reclaimed until the reclaim pool is full. When you free up 
 some space in it (by migrating the reclaim pool's data to tape) 
 TSM will continue.
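 
 Michal's outline, written out end to end; the device class, directory,
 and pool names are illustrative:
 
 ```
 define devclass filedev devtype=file maxcapacity=2g directory=/tsm/reclaim
 define stgpool diskreclaim filedev maxscratch=10 nextstgpool=tapepool
 update stgpool tapepool reclaimstgpool=diskreclaim
 ```
 
 Reclamation on TAPEPOOL then stages reclaimed data through the FILE
 volumes and migrates it back to tape, needing only one tape drive at a time.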
 
 
  Thanks
  Marc
 
  DISCLAIMER:
 
  If you are not the intended recipient of this transmission, you are 
  hereby notified that any disclosure or other action taken 
 in reliance on its contents is strictly prohibited.
  Please delete the information from your system and notify 
 the sender 
  immediately. If you receive this email in error contact the 
 County of 
  Lambton at 519 845 0801 extension 405 or email 
 [EMAIL PROTECTED]
 


AW: Disaster recovery of Windows 2003 Server

2006-03-19 Thread Salak Juraj
Redpiece:
IBM Tivoli Storage Manager: Bare Machine Recovery for Microsoft Windows 2003 
and XP
 
http://www.redbooks.ibm.com/abstracts/redp3703.html?Open
 
regards
Juraj
 



Von: ADSM: Dist Stor Manager im Auftrag von Doyle, Patrick
Gesendet: Mo 13.03.2006 15:17
An: ADSM-L@VM.MARIST.EDU
Betreff: Disaster recovery of Windows 2003 Server



Has there been any updates to the disaster recovery documents for 2003
server?

The following refer to Windows 2000 only,

Disaster Recovery Strategies with Tivoli Storage Management
http://www.redbooks.ibm.com/redbooks/pdfs/sg246844.pdf

Summary BMR Procedures for Windows NT and Windows 2000 with ITSM
http://www.redbooks.ibm.com/abstracts/tips0102.html?Open


In particular, references to dsmc restore systemobject seem to be
obsolete. TSM Client 5.3.2.0 now sees system services and system
state as replacements for systemobject.

Is anyone aware of an update?

Regards,
Pat.




AW: restore file permissions

2006-01-27 Thread Salak Juraj
For now, I am afraid you are out of luck.

For the future, should this situation repeat, 
you may want to check Intensive Care Utilities at
http://www.liebsoft.com/index.cfm/server_products

Among other things, 
this tool can export all access rights from your server or from your domain;
you can edit these rights or keep them unchanged, 
and restore them (or some of them).
I used this tool at a previous site and found it to be a high-quality and 
flexible tool.

best regards

Juraj

 -Ursprüngliche Nachricht-
 Von: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] Im 
 Auftrag von Volker Maibaum
 Gesendet: Donnerstag, 26. Jänner 2006 16:34
 An: ADSM-L@VM.MARIST.EDU
 Betreff: restore file permissions
 
 Hello,
 
 some weeks an administrator nearly messed up all file 
 permissions on one of our unix servers by issuing a recursive 
 chmod in a wrong directory.
 The problem was resolved by manually fixing the file permissions.
 
 Is there also a way to use tsm to restore the file 
 permissions without doing a restore of all files? Or is there 
 a way to read the file permissions from the tsm database with sql?
 
 regards,
 
 Volker Maibaum
 


AW: performance scaling question

2006-01-06 Thread Salak Juraj
Adding a second one.
TSM is good at using 2 CPUs, so, very roughly speaking, 
you will have 2 x 2.8 = 5.6 compared to 3.2.
The common limitations of parallel processing apply.
 
regards
juraj
 
 



Von: ADSM: Dist Stor Manager im Auftrag von Troy Frank
Gesendet: Do 05.01.2006 22:20
An: ADSM-L@VM.MARIST.EDU
Betreff: performance scaling question



My server is an IBM x235 with a single Xeon 2.8ghz (533mhz fsb), running
windows2003.  I've noticed that the cpu gets pretty consistently pegged
every night during backups, and I was wondering what would do more
good replacing the single 2.8 with a 3.2, or leaving the 2.8 and
adding a second one?





AW: Splitting a MS-Windows node filesystem?

2006-01-06 Thread Salak Juraj
Hi,
 
you can define a new management class with longer time periods and/or version 
count values for deleted data. This class is meant solely for managing directories to be 
moved from X: to Y:.
Bind this class to the directories in question while they are still on X:
and do a usual incremental backup.
 
Do not forget to check the TSM client settings (e.g. DOMAIN) for Y:.
Move the data to Y: and voilà - the job is done.
 
Advantage: no export node necessary.
Disadvantage: during restores you must search for files in 2 locations (Y: 
and X:).
Your problem of doubled space usage is not solved.
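 
Under this first approach, the class binding for the files under the moved
directories would be done with client include options; the path and the
class name KEEPLONG here are purely hypothetical:

```
* in dsm.opt / the include-exclude file (illustrative)
INCLUDE x:\data\tobemoved\...\*  KEEPLONG
```

After one incremental backup under this binding, the files later moved to Y:
keep their old versions on X: for as long as KEEPLONG retains deleted data.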
 

In your scenario, you can shorten the period of double space requirements:
after having done the export you can do 
RESTORE -LATEST -replace=newer (check for the correct syntax, please)
to the original location (only for the directories to be moved to Y:),
bind those directories to a management class with very small time/version-count 
values,
do an INCRemental, 
delete the data on X:, 
do a new INCRemental, 
and run expiration on the server.
 
regards
juraj
 
 

 


Von: ADSM: Dist Stor Manager im Auftrag von Kauffman, Tom
Gesendet: Do 05.01.2006 15:43
An: ADSM-L@VM.MARIST.EDU
Betreff: Splitting a MS-Windows node filesystem?



It's a new year, and the new questions are crawling out of the wood-work
here.

We have an NT fileserver with four 'filesystems' (drives), co-located by
filesystem. It's about to be replaced, and in the process the 'X:' drive
will be split into the 'X:' and'Y:' drives. How do I handle this at the
server level so that I don't loose backup generations?

It looks like I need to do an export node with the filespace specified,
rename the filespace on the node, and then do an import -- this will, of
course, double the data until my expire inventory reaches the
retainextra/retainonly limits after the first backup.

Is there a better way?

TIA

Tom Kauffman
NIBCO, Inc


AW: TSM Reporting tools

2005-12-29 Thread Salak Juraj
This text will be saved in my computer museum 
among other basic literature pieces, like "Real Programmers write in Fortran". 
Really good reading - thx!
Juraj


 -Ursprüngliche Nachricht-
 Von: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] Im 
 Auftrag von Allen S. Rout
 Gesendet: Donnerstag, 29. Dezember 2005 04:57
 An: ADSM-L@VM.MARIST.EDU
 Betreff: Re: TSM Reporting tools
 
  On Thu, 29 Dec 2005 10:53:49 +1000, Steven Harris 
 [EMAIL PROTECTED] said:
 
 
  I like your graph, would it be too much to ask for you to 
 explain your 
  charging scheme?  That is always a vexed issue particularly 
 with those 
  managers who make unreasonable volume or retention demands, 
 and giving 
  them what they want, but charging them what it really costs for the 
  service is a nice stick to be able to beat them with.
 
 
 Not at all; I can hold forth on this at some length. 
 (shocked, I know you are)
 
 
 Billing is always an exercise in shared fantasy.  Pretending 
 that this is not so will only frustrate you.  You have to 
 start with a set of principles which are politically 
 acceptable, and then do rigorous math from those principles, 
 ignoring that they may be insane from an engineering standpoint.
 
 My organization is (was) a so-called Auxiliary of the State 
 of Florida.  We were mostly a State agency, but we had 
 special accounting rules that permitted us to have bank 
 accounts and retain money across years.  This was because, as 
 a 'data center', we were expected to regularly make purchases 
 which were far in excess of a given year's budget.  We were 
 also permitted to bill other state agencies for our services, 
 in real dollars, which we put into banks, and periodically 
 bought new mainframes.
 
 My politically-acceptable fantasy principles were as follows:
 
 + Recover costs
 + Recover enough that the service can be prepared for growth Don't 
 + recover more than that
 
 What are the costs?  We had a director-level employee whose 
 entire writ was answering that question; the cost evaluation 
 experience was fascinating and educational, and not a little 
 painful.  Your organization will have its own standards for 
 things like amortization, measuring benefits overhead for 
 staff time, redistribution of front-office costs, cubage in 
 the machine room, power and air handling, etc. etc.  But we 
 have ours, and at the end of the day
 (koff-months-koff) it came down to a bottom line of our 
 annualized costs.
 
 These principles are insane.  Most of our computer equipment 
 is amortized at 4 years.  For the library chassis, we're at 8 
 and counting.  For the CPUs, we're lucky if we make it to 3.  
 Disk is all over the map, and how do you note that my TSM 
 disk is often cast-offs from another project?  Insane.  But 
 it matches the principles, so soldier on.
 
 
 That was the number I was supposed to recover.  Now, how 
 should I split it up?
 I started out taking every measurement I could get TSM to 
 cough up (and we all know that we can measure it MANY ways) 
 and trying to assign numbers to them all.  This was hideously 
 complex, and incomprehensible even to the architect.
 
 I went through several iterations of simplification, and 
 finally realized that I had the wrong end of it: The metrics 
 for charging must be easy to measure, but that's only 
 necessary, not sufficient.  The important measures are ones 
 over which the clients, the end-users, have direct control.
 
 They don't care about tapes, they don't need to know when I 
 move from 3590-J to K, or B to E, or to 3592s.  And if I bill 
 them for fractional tapes, then they will be aware when I 
 change underlying system structure.  Ew.
 
 I ended up with basically two numbers: Transfer, and storage. 
  Transfer is upload (backup, archive) and download (restore, 
 retrieve), as measured by the acctlogs.  Storage is whatever 
 Q occ comes up with (or actually a select, but you get the idea).
 
 
 Pleasantly, I'm a pack rat, and retain logs and accounting 
 logs. Every time I came up with a new set of rates, I could 
 re-apply them to the last [period] of data.  This let me 
 start twiddling rates, seeing how I could weight billing 
 towards one or the other.
 
 I decided that I would start off aiming for total storage 
 charges to be about equal to total transfer charges; that is, 
 when I add up a year's bills, about half of it should be in 
 each category.  As time has passed and costs have gone down, 
 I've nudged one or the other rate, mostly focusing on how I'd 
 be changing user behavior.  If people are keeping too much 
 stuff around (I had one customer seriously claim they wanted 
 40 copies) then the transfer cost goes down instead of the storage.
 
 
 I presented the completed charge algorithm to management, and 
 it passed muster.  I presented it to customers, and they 
 screamed.  We changed the inputs.  Less fantasy FTE, 
 different fantasy purchasing behavior, less theoretical total 
 cost.  This is where I 

AW: Automated restore

2005-12-29 Thread Salak Juraj
Hi,

staying on this NT tool wave - I use www.superflexible.com, which is cheap, 
small, powerful, reliable, optionally works as an NT service, and has very 
good support.
best
Juraj



 -Ursprüngliche Nachricht-
 Von: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] Im 
 Auftrag von David W Litten
 Gesendet: Mittwoch, 28. Dezember 2005 19:48
 An: ADSM-L@VM.MARIST.EDU
 Betreff: Re: Automated restore
 
 Joe,
 
 If this is an NT server you may have better luck with using a 
 data sync tool like Robocopy, instead of doing incremental restores.
 
 david
 
 
 
 
  Joe Crnjanski
  [EMAIL PROTECTED]
  ITYNETWORK.COM  
   To
  Sent by: ADSM:   ADSM-L@vm.marist.edu
  Dist Stor
   cc
  Manager
  [EMAIL PROTECTED]
  Subject
  .edu [ADSM-L] Automated restore
 
 
  12/28/2005 01:16
  PM
 
 
  Please respond to
  ADSM: Dist Stor
  Manager
  [EMAIL PROTECTED]
.edu
 
 
 
 
 
 
 Hello,
 
 I have a server at customer locations that's running backup 
 every night.
 I want to automate restore process at backup site. This is a 
 server that they will use in case of disaster in office. I'm 
 doing backup every night at office and I want to do 
 'incremental' restore at backup site, so this server will be 
 'synchronized ' with office server.
 
 I wanted to install scheduler with restore as action in 
 define schedule command. Then I realized that server in 
 office will see this schedule and run it. I'm using same node 
 name on both servers. It is kind of logical problem (chicken 
 and egg.) If I don't use same nodename, I won't be able to restore.
 
 Regards,
 
 Joe Crnjanski
 Infinity Network Solutions Inc.
 Phone: 416-235-0931 x26
 Fax: 416-235-0265
 Web: www.infinitynetwork.com
 


AW: Normal # of failures on tape libraries

2005-12-22 Thread Salak Juraj
I agree with the previous answers, and I second the environmental note:

Is your environment fine?
Stable temperature & humidity within allowed limits, no vibrations, dust-free, 
clean sine-wave stable mains voltage?
Ever tried another tape manufacturer?

regards
J.S.


 -Ursprüngliche Nachricht-
 Von: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] Im 
 Auftrag von Dennis Melburn W IT743
 Gesendet: Dienstag, 13. Dezember 2005 17:31
 An: ADSM-L@VM.MARIST.EDU
 Betreff: Normal # of failures on tape libraries
 
 Our sites use ADIC Scalar 1Ks as well as one ADIC 10K.  The 
 Scalar 1Ks have  4 LTO1 drives in each and the 10K has 34 
 LTO2 drives.  We experience occasional failures on these 
 drives and have to replace them.
 My question is, is it normal for a site that has alot of 
 drives to experience drive failures about every 1-1.5 months? 
  My manager is rather annoyed at the fact that it seems that 
 we are constantly replacing drives even though it doesn't 
 cause any downtime for our TSM servers while they are being 
 replaced.  If this is a normal part of having tape libraries 
 then that is fine, but I don't have enough experience in this 
 field to say either way, so that is why I am asking all of you.
  
  
 Mel Dennis
 
 


AW: manual tape drive

2005-12-22 Thread Salak Juraj
this one is trivial, but..: when labeling tapes, check the activity log for 
success/failure messages. Maybe overwrite=yes is missing, or the 
write-protect tab is set.

regards
Juraj


 -Ursprüngliche Nachricht-
 Von: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] Im 
 Auftrag von Gill, Geoffrey L.
 Gesendet: Montag, 19. Dezember 2005 16:54
 An: ADSM-L@VM.MARIST.EDU
 Betreff: Re: manual tape drive
 
 I guess I should have passed on that I pre labeled one tape 
 and when TSM kicked off a db backup it asked for a tape, 
 which I inserted, but TSM never recognized it. A migrate did 
 the same thing. Since there is only one drive at least I know 
 I didn't screw that up...
 I also have read via messages and info on the admin console 
 screen in 5.3 that scratch tapes need to be checked in. It 
 sort of tells you how to do this but locating it hasn't been 
 possible yet.
 
 Well I'll give it another try and see what happens.
 
 Thanks,
 
 Geoff Gill
 TSM Administrator
 SAIC M/S-G1b
 (858)826-4062
 Email: [EMAIL PROTECTED]
 
 
 
 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] 
 On Behalf Of Bob Booth - UIUC
 Sent: Monday, December 19, 2005 7:30 AM
 To: ADSM-L@VM.MARIST.EDU
 Subject: Re: manual tape drive
 
 On Mon, Dec 19, 2005 at 07:19:19AM -0800, Gill, Geoffrey L. wrote:
  I've set up TSM 5.3 in a test environment with an external 
 dat drive.
  Unfortunately since I've only worked with automated 
 libraries I don't 
  know the process for checking in a scratch tape to use with TSM. I 
  also haven't been able to find the solution in the manuals so I was 
  wondering if
 someone
  could help me out please? I guess I'm not sure why the 
 commands seem 
  to be readily available for automated libraries but not this case. 
  I've been
 able
  to label some tapes just fine.
 
 You don't check in tapes in a manual mode.  Pre-label your 
 tapes with dsmlabel or label libv.  TSM will ask for a 
 scratch tape to be loaded when it needs one, and you just 
 plop it in to the requested drive.  Once the label is read, 
 it is registered to the storage pool.  When the tape is 
 emptied it is deleted.
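 
 For reference, setting up and labeling for a manual library might look
 like this; the library, drive, device, and volume names are illustrative:
 
 ```
 define library manlib libtype=manual
 define drive manlib drive1
 define path server1 drive1 srctype=server desttype=drive library=manlib device=/dev/rmt0
 label libvolume manlib DAT001 overwrite=yes
 ```
 
 As Bob says, there is no checkin step: once labeled, the volume is simply
 mounted when TSM prompts for it.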
 
 hth,
 
 bob
 


AW: To all TSM users: questions about DIRMC usage

2005-11-24 Thread Salak Juraj
 
nice activity from you, Andy!

 
 1) I use DIRMC in my environment ...
 
a) to ensure that GUI restores will accurately show files
   available for restore (NO)
 
b) to ensure that directory backups do not go directly to
   tape (NO )
 
c) to ensure fast restores from disk (YES)
The experience was very slow restores and 
some tapes being read twice during the restore
(well, I am not quite sure about the TSM version from back then; it might 
have been 2001)
 
d) for other reasons (NO)
 
 
 3) I have other comments about DIRMC

a) it is important and good that you have designed and implemented 
DIRMC at all!

b) For my purposes I'd find the following DIRMC settings useful:
either a system-wide setting:

DoYourBestTryToAlwaysKeepDirectoriesInRandomAccessStoragePoolsForever=YES/NO
or an attribute of a primary random access storage pool:

DoYourBestTryToAlwaysKeepDirectoriesInThisStoragePoolForever=YES/NO

which would usually prevent directories from being migrated from 
random access storage pools to tape storage pools.

I would not expect it to work 100%; e.g. 
MOVE DATA 
could remain unchanged, ignoring this new setting - no problem for me.

But it's a nice-to-have only - you all in Tucson are doing a damned good 
job anyway!

Juraj






 
 4) TSM development can contact me if they have additional
questions about my answers (YES / NO)
 
 
 
 
 Thank you in advance,
 
 Andy
 
 Andy Raibeck
 IBM Software Group
 Tivoli Storage Manager Client Development Internal Notes 
 e-mail: Andrew Raibeck/Tucson/[EMAIL PROTECTED] Internet e-mail: 
 [EMAIL PROTECTED]
 
 IBM Tivoli Storage Manager support web page:
 http://www-306.ibm.com/software/sysmgmt/products/support/IBMTi
 voliStorageManager.html
 
 The only dumb question is the one that goes unasked.
 The command line is your friend.
 Good enough is the enemy of excellence.
 


AW: Backup question - Full and Incremental

2005-11-18 Thread Salak Juraj
Hi Eric!

There is no reason whatsoever not to move data i_n_t_o FROMSTGPOOL (which is 
YourTAPEPOOL).

For efficiency reasons, 
depending on your HW configuration and usage of your tape drives, 
following configuration may, (conditional!) , be of advantage:

- define a device class RECLAIM as SEQuential, FILE,
 with the parameters ESTCAPacity and MOUNTLimit set so that the resulting size 
 (ESTCAPacity times MOUNTLimit) is at least 10%, better 30%-100%, of the size of a single 
tape.

- define disk storage pool e.g. DISKMGR2TAPE based on RECLAIM.

- for DISKMGR2TAPE 
  set next storage pool to YourTAPEPOOL.
  set LOWmig to zero and HIGHmig to some low value.

- use MOVE DATA or MOVE NODEDATA
FROMSTG=YourTAPEPOOL TOSTG=DISKMGR2TAPE 

 This will save you both mount points in the tape library and tape mount 
 requests. *
On the negative side, this will cost you some temporary disk space, disk access 
time, and disk bus usage.

btw, you can use the very same DISKMGR2TAPE pool for reclamation purposes of 
YourTAPEPOOL,
simply by setting:
 UPDate STG YourTAPEPOOL RECLAIMSTGpool=DISKMGR2TAPE 
with the very same effect (*)
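
Collected into one sketch; the sizes, directories, and node/pool names are
illustrative:

```
define devclass reclaim devtype=file maxcapacity=4g mountlimit=20 directory=/tsm/filevols
define stgpool diskmgr2tape reclaim maxscratch=20 nextstgpool=yourtapepool highmig=5 lowmig=0
move nodedata mynode fromstgpool=yourtapepool tostgpool=diskmgr2tape
* or, for reclamation through the same pool:
update stgpool yourtapepool reclaimstgpool=diskmgr2tape
```

Migration then drains DISKMGR2TAPE back to YOURTAPEPOOL using a single drive
at a time.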

hope it helps
regards
Juraj




 -Ursprüngliche Nachricht-
 Von: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] Im 
 Auftrag von Jones, Eric J
 Gesendet: Dienstag, 15. November 2005 02:05
 An: ADSM-L@VM.MARIST.EDU
 Betreff: Re: Backup question - Full and Incremental
 
 Thanks for the answers on the full/incremental backups.
 I did some research on the move node but it looks like you 
 move from one pool to different pool(could be very confused). 
  I would like to move within the same pool but combine all 
 the backups from a node(20+ tapes)
 to 1 or 2 tapes within the same pool to help with restores.   Some of
 the machines have been backed up for years and the data is 
 spread across so many tapes.  Can the pools be the same 
 (FROMSTGPOOL and TOSTGPOOL)?
 Never used this command before and a little nervous.
 
 Thanks for all the help,
 Eric
 
 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] 
 On Behalf Of John Black
 Sent: Monday, November 14, 2005 4:28 PM
 To: ADSM-L@VM.MARIST.EDU
 Subject: Re: [ADSM-L] Backup question - Full and Incremental
 
 Eric,
  To organize your data, the following command should do the 
 trick: *move nodedata*; when you have a chance, please 
 research it. Additionally, yes, full 
 (selective) backups are seen as files. And as such, the 
 normal copy group retention periods apply. I hope this 
 information was beneficial  --MG
 
 
  On 11/14/05, Jones, Eric J [EMAIL PROTECTED] wrote:
 
  Good Afternoon.
 
  I have a question on backups and how if you mix full backups and 
  incremental backups the data is stored.
 
  We are running TSM 5.2.2 on all our TSM clients(AIX, SUN, Windows
 2000,
  windows 2003), and the server is AIX 5.2 with TSM 5.2.2.
 
  We normally just run incremental for all our machines and over the
 years
  the data gets scattered every where(number of tapes and in 
 some cases
  30+) so restores can be very slow. We keep all our data for 
 90 days so
  with the incremental backups the files expire after 90 days if the
 file
  has been backed up(90 day policy/keep up to 90 backups). My question
 is
  if I want to do a full backup every 120 days just to better organize
 my
  data on less tapes(current data) does it effect the 
 incremental data 
  that is on tape in any way? Does TSM see the backups 
 including a full 
  backup as files and they expire after a given amount of time?
 
  From what I could find that is the case but I do not want 
 to mess up 
  years of data and the users were asking lots of questions on how it 
  might affect them. They would be happy with the possibility 
 of faster 
  restores.
 
  Another question came up, is there any way to have TSM 
 organize data 
  already on tape for a specific server so it's on a few tapes instead
 of
  spread across many tapes? Just seeing if we could better manage our 
  data.
 
 
  Thanks
 
  Eric Jones
 
 


AW: Backup question - Full and Incremental

2005-11-14 Thread Salak Juraj
Hi!

You can sometimes perform a full (selective) backup to achieve better restore speed.
Caution: this will always add a new file version to TSM, 
even for unchanged files, 
so check it against your backup management class definitions. 
Example: assuming you know your file A changes once a week and you want 
to keep it for 70 days,
it is OK to allow for 10 backup versions.
But if you are going to perform a selective backup of A daily, then 
you have to allow for 70 versions.

This will consume some additional space in your TSM DB, 
it will cost you backup time/bandwidth/space on TSM tapes, 
and it will slow down your TSM server in general.

I AM doing selective backups for the very same reason, 
but only for Windows user profiles, once a month. It is helpful for me.
Doing it for whole servers/PCs would probably saturate my TSM.


 Another question came up, is there any way to have TSM 
 organize data already on tape for a specific server so it's 
 on a few tapes instead of spread across many tapes?  
Look at collocation, which is an attribute of tape storage pools.
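
Collocation is set per storage pool; the pool name here is illustrative:

```
update stgpool tapepool collocate=node
```

Note that this only affects data written after the change; existing backups
stay scattered until MOVE NODEDATA (or reclamation) rewrites them.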

regards
juraj




 -Ursprüngliche Nachricht-
 Von: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] Im 
 Auftrag von Jones, Eric J
 Gesendet: Montag, 14. November 2005 21:20
 An: ADSM-L@VM.MARIST.EDU
 Betreff: Backup question - Full and Incremental
 
 Good Afternoon.
 
 I have a question on backups and how if you mix full backups 
 and incremental backups the data is stored.
 
 We are running TSM 5.2.2 on all our TSM clients(AIX, SUN, 
 Windows 2000, windows 2003), and the server is AIX 5.2 with TSM 5.2.2.
 
 We normally just run incremental for all our machines and 
 over the years the data gets scattered every where(number of 
 tapes and in some cases
 30+) so restores can be very slow.  We keep all our data for 
 90 days so
 with the incremental backups the files expire after  90 days 
 if the file has been backed up(90 day policy/keep up to 90 
 backups).  My question is if I want to do a full backup every 
 120 days just to better organize my data on less 
 tapes(current data) does it effect the incremental data
 that is on tape in any way?Does TSM see the backups 
 including a full
 backup as files and they expire after a given amount of time?
 
 From what I could find that is the case but I do not want to 
 mess up years of data and the users were asking lots of 
 questions on how it might affect them.  They would be happy 
 with the possibility of faster restores.
 
 Another question came up, is there any way to have TSM 
 organize data already on tape for a specific server so it's 
 on a few tapes instead of spread across many tapes?  Just 
 seeing if we could better manage our data.
 
 
 Thanks
 
 Eric Jones
 


AW: Can TSM use LDAP for admin authentication?

2005-11-09 Thread Salak Juraj
Hi!

Assuming you will NOT back up your LDAP servers with TSM: wait for this support, 
it is not available yet.

Assuming you WILL back up your LDAP servers with TSM, this is a bad idea: 
backups are for restore, and how can you restore a malfunctioning LDAP server 
when you cannot log in because of that malfunctioning LDAP?

regards
Juraj

 -Ursprüngliche Nachricht-
 Von: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] Im 
 Auftrag von Loren Cain
 Gesendet: Mittwoch, 09. November 2005 15:03
 An: ADSM-L@VM.MARIST.EDU
 Betreff: Can TSM use LDAP for admin authentication?
 
 We are building a new TSM installation for a client and I have been
 
 asked if TSM can use LDAP to authenticate the admin userids. They
 
 don't want to have to maintain a separate userid/password mechanism
 
 just for the TSM servers if they can avoid it.
 
  
 
 I have never seen anything that leads me to believe this is possible,
 
 but I've also never seen anything that says it isn't. Unfortunately,
 
 searches for keywords like ldap in the list archives and 
 support site
 
 results in many, many hits on how to back up ldap, but not on how
 
 or whether to use it.
 
  
 
 Does anyone know if this can be done?  The only alternative I have so 
 
 far is some sort of scripted mechanism to regularly pull data 
 from ldap
 
 and update TSM.
 
  
 
 This is on TSM 5.2.3, on Solaris9.
 
  
 
 Loren Cain
 
 Digicon
 
  
 
 


AW: AW: Preschedulecmd not executed

2005-11-09 Thread Salak Juraj
Thanks Andy, I thought I knew, but I learned I knew only a bit.
Juraj 

 -Ursprüngliche Nachricht-
 Von: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] Im 
 Auftrag von Andrew Raibeck
 Gesendet: Dienstag, 08. November 2005 16:05
 An: ADSM-L@VM.MARIST.EDU
 Betreff: Re: AW: Preschedulecmd not executed
 
 The *SCHEDULECMD commands are launched via the command 
 interpreter specified by the ComSpec environment variable, 
 which points to the cmd.exe file on Windows NT systems. 
 cmd.exe supports the launch of scripts and binary executables 
 (programs). Therefore, *SCHEDULECMD can be used to launch 
 scripts as well as executable modules.
 
 APAR IC40249 is suspect, as it is fixed in 5.2.4. In 
 particular, even if the specified command were invalid, I 
 would expect to see a message in the dsmsched.log file 
 indicating that it is attempting to launch the command.
 
 If that isn't the issue, then a thorough examination of the 
 TSM environment needs to be made to understand what is happening:
 
 - Server-side client option sets that might impact PRESCHEDULECMD
 
 - Definitions of schedules associated with this node, in 
 particular the OPTIONS setting
 
 - Firm verification of the client options file that is being 
 used by the scheduler (confirm that the scheduler is using 
 the options file you think it is using)
 
 - Verification of the client options file to ensure that 
 there are not any duplicate options that might be causing a conflict.
 
 - Try running dsmc query options *schedulecmd using the 
 same options file and node name that the scheduler is using, 
 to verify that TSM is seeing the option correctly.
 
 - Examine dsmerror.log for any related error messages.
 
 Regards,
 
 Andy
 
 Andy Raibeck
 IBM Software Group
 Tivoli Storage Manager Client Development Internal Notes 
 e-mail: Andrew Raibeck/Tucson/[EMAIL PROTECTED] Internet e-mail: 
 [EMAIL PROTECTED]
 
 IBM Tivoli Storage Manager support web page: 
 http://www-306.ibm.com/software/sysmgmt/products/support/IBMTi
 voliStorageManager.html
 
 The only dumb question is the one that goes unasked.
 The command line is your friend.
 Good enough is the enemy of excellence.
 
 ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU wrote on 2005-11-07
 11:13:07:
 
  Hi,
  
  PreschedCMD starts a program, not a plain file.
  
  The last line in OPT file should be:
  
  Preschedulecmd C:\WINNT\SYSTEM32\cmd.exe /C D:
  \Hyperion\Export\Scripts\EssExport_Schedule.bat
  
  Note, the TSM must be told which program is to be started, in my 
  environment the line would be:
  Preschedulecmd C:\prog\JPSoft\4nt.exe /C D:
  \Hyperion\Export\Scripts\EssExport_Schedule.bat
  
  and TSM cannot know what interpreter should be used 
 unless you 
  tell it.
  
  
  Well, I'd still expect an error message in the log files anyway...
  
  regards
  Juraj
  
  
  
   -Ursprüngliche Nachricht-
   Von: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] Im 
   Auftrag von Paul Van De Vijver
   Gesendet: Montag, 07. November 2005 14:31
   An: ADSM-L@VM.MARIST.EDU
   Betreff: Preschedulecmd not executed
   
   Hi,
   
   Can anybody tell me why the next preschedulecmd is not executed ?
   I have no idea what can be wrong.
   
   Backup finishes ok, but preschedulecmd has not been executed and 
   there is no trace at all from the preschedulecmd in the 
 dsmsched.log 
   or dsmerror.log
   
   The 'scheduler' service was restarted before the backup was 
   initiated
   
   Thanks for any info,
   
   Paul
   
   TSM client V5.2.3.0 (on Windows 2003) TSM Server V5;2.6.0 
 (on Z/OS 
   mainframe)
   
   Last line in DSM.OPT
   
   Preschedulecmd D:\Hyperion\Export\Scripts\EssExport_Schedule.bat
   
   Extraction dsmsched.log :
   
   06.11.2005 03:25:32 Next operation scheduled:
   06.11.2005 03:25:32
   
   06.11.2005 03:25:32 Schedule Name: @137273
   06.11.2005 03:25:32 Action:Incremental
   06.11.2005 03:25:32 Objects:
   06.11.2005 03:25:32 Options:
   06.11.2005 03:25:32 Server Window Start:   03:21:20 on 06.11.2005
   06.11.2005 03:25:32
   
   06.11.2005 03:25:32
   Executing scheduled command now.
   06.11.2005 03:25:32 --- SCHEDULEREC OBJECT BEGIN @137273
   06.11.2005 03:21:20
   06.11.2005 03:25:32 Incremental backup of volume 
 '\\xyzintraxyz\c$'
   06.11.2005 03:25:32 Incremental backup of volume 
 '\\xyzintraxyz\d$'
   06.11.2005 03:25:32 Incremental backup of volume 'SYSTEMSTATE'
   06.11.2005 03:25:32 Backup System State using shadow copy...
   06.11.2005 03:25:37 Backup System State: 'System Files'.
   
   etc
   
   
   
   
   The information contained in this communication is 
 confidential and 
   may be legally privileged. It is intended solely for the 
 use of the 
   individual or the entity to whom it is addressed and others 
   authorised to receive it. If you have received it by 
 mistake, please 
   let 

AW: Preschedulecmd not executed

2005-11-07 Thread Salak Juraj
Hi,

PreschedCMD starts a program, not a plain file.

The last line in OPT file should be:

Preschedulecmd C:\WINNT\SYSTEM32\cmd.exe /C 
D:\Hyperion\Export\Scripts\EssExport_Schedule.bat

Note, TSM must be told which program is to be started,
in my environment the line would be:
Preschedulecmd C:\prog\JPSoft\4nt.exe /C 
D:\Hyperion\Export\Scripts\EssExport_Schedule.bat

and TSM cannot know what interpreter should be used unless you tell it.


Well, I'd still expect an error message in the log files anyway...

regards
Juraj
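Putting Juraj's point into a dsm.opt fragment (the interpreter and script paths are the ones from his example; the surrounding quotes are added here in case of embedded spaces, so verify against your client level):

```text
* Last lines of dsm.opt: name the command interpreter explicitly
PRESCHEDULECMD "C:\WINNT\SYSTEM32\cmd.exe /C D:\Hyperion\Export\Scripts\EssExport_Schedule.bat"
* Restart the TSM scheduler service after changing dsm.opt
```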

  

 -Ursprüngliche Nachricht-
 Von: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] Im 
 Auftrag von Paul Van De Vijver
 Gesendet: Montag, 07. November 2005 14:31
 An: ADSM-L@VM.MARIST.EDU
 Betreff: Preschedulecmd not executed
 
 Hi,
 
 Can anybody tell me why the next preschedulecmd is not executed ?
 I have no idea what can be wrong.
 
 Backup finishes ok, but preschedulecmd has not been executed 
 and there is no trace at all from the preschedulecmd in the 
 dsmsched.log or dsmerror.log
 
 The 'scheduler' service was restarted before the backup was initiated
 
 Thanks for any info,
 
 Paul
 
 TSM client V5.2.3.0 (on Windows 2003)
 TSM Server V5;2.6.0 (on Z/OS mainframe)
 
 Last line in DSM.OPT
 
 Preschedulecmd D:\Hyperion\Export\Scripts\EssExport_Schedule.bat
 
 Extraction dsmsched.log :
 
 06.11.2005 03:25:32 Next operation scheduled:
 06.11.2005 03:25:32
 
 06.11.2005 03:25:32 Schedule Name: @137273
 06.11.2005 03:25:32 Action:Incremental
 06.11.2005 03:25:32 Objects:
 06.11.2005 03:25:32 Options:
 06.11.2005 03:25:32 Server Window Start:   03:21:20 on 06.11.2005
 06.11.2005 03:25:32
 
 06.11.2005 03:25:32
 Executing scheduled command now.
 06.11.2005 03:25:32 --- SCHEDULEREC OBJECT BEGIN @137273 
 06.11.2005 03:21:20
 06.11.2005 03:25:32 Incremental backup of volume '\\xyzintraxyz\c$'
 06.11.2005 03:25:32 Incremental backup of volume '\\xyzintraxyz\d$'
 06.11.2005 03:25:32 Incremental backup of volume 'SYSTEMSTATE'
 06.11.2005 03:25:32 Backup System State using shadow copy...
 06.11.2005 03:25:37 Backup System State: 'System Files'.
 
 etc
 
 
 
 
 The information contained in this communication is 
 confidential and may be legally privileged. It is intended 
 solely for the use of the individual or the entity to whom it 
 is addressed and others authorised to receive it. If you have 
 received it by mistake, please let the sender know by e-mail 
 reply and delete it from your system.
 If you are not the intended recipient you are hereby notified 
 that any disclosure, copying, distribution or taking any 
 action in reliance of the contents of this information is 
 strictly prohibited and may be unlawful.
 Honda Europe NV is neither liable for the proper and complete 
 transmission of the information contained in this 
 communication nor for any delay in its receipt.
 
 


Disk-TapeLibrary-Tape strategy seemingly simple Collocation question

2005-10-28 Thread Salak Juraj
Hi all!

I run the following storage pool strategy (simplified):

A) Backup to a disk storage pool large enough to keep 2-3 days' backups.

B) The next storage pool is a tape library with 60 slots, large enough to keep 
all backup data.
The storage pool has collocation set to filespace.
There are more filesystems than tape slots, so several filesystems share the same 
tape(s).

This worked like a breeze. Recently, I added:

C) A next storage pool chained after (B). 
This is meant as an overflow pool, built upon a manual (SCSI) library, 
cheap and slow, but large enough to keep anything I could potentially need.
It should only be used in exceptional situations, like a (B) library overflow or 
a (B) malfunction.

The idea:
In fact, this is an absolutely zero-cost replacement for a high-quality tape 
library support contract,
as it shares the very same tape drives with the backup storage pools.
It makes it possible for TSM to remain almost fully functional for some days 
in case of a tape library problem, only 
at the cost of occasional tape interventions and some spare tapes, 
plus it is an overflow area, just in case.


Problem:

From now on, backups of file systems tend to occupy new tapes on (C) as well, 
although (B) is filled well below its migration levels. 


Can I prevent TSM from doing this, other than by turning collocation off on (B)?
The collocation setting on (C) cannot solve the problem.
The TSM server is Windows, 5.2.2.5.

best Regards
Juraj Salak, Austria, Ohlsdorf






 

 -Ursprüngliche Nachricht-
 Von: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] Im 
 Auftrag von Peter Hitchman
 Gesendet: Freitag, 28. Oktober 2005 13:29
 An: ADSM-L@VM.MARIST.EDU
 Betreff: query session command - for a specific session
 
 Hi,
 Running Version 5, Release 2, Level 3.4.
 Something that has been bugging me for a while. The query 
 session command shows all of the tsm sessions and according 
 to the help you can narrow this down to a specific session 
 using query session 'sesnum'. But I can never get this to 
 work. The output I see looks like the example below and the 
 session number column format is not what it looks like in the docs:-
 
   Sess Comm.  Sess Wait   Bytes   
 Bytes Sess  Platform Client Name
  Number Method StateTimeSent   
 Recvd Type   
 --- -- -- -- --- 
 --- -  ---
 129,541 Tcp/Ip MediaW 4.1 M1.4 K   
 1.0 K Node  TDP Ora- x
 
 How can I run a detailed query command for just this session?
 
 Regards
 
 Pete
 
 


AW: weird request

2005-07-20 Thread Salak Juraj
Hi Joe,

Should you agree to this request and not back up those older files, 
you would be held responsible once you were unable to recover them.

I find it is your duty to tell the customer this requirement is 
dangerous; if he really does not want to back those files up, then he does not 
need them and should delete them (hi, Andy! :=) )

I'd do the following: 
1st: explain it in a very kind way to the customer
2nd: offer him a huge rebate for the amount of data currently occupied by 
those old files.
(If this amount is quite big, you can move the data off the expensive tape library 
to an inexpensive tape to keep your costs very low.)

This way you are on the safe side, and the customer is convinced you are on his 
side, determined to solve his problems.

regards
juraj



 -Ursprüngliche Nachricht-
 Von: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] Im 
 Auftrag von Joe Crnjanski
 Gesendet: Freitag, 20. Mai 2005 19:31
 An: ADSM-L@VM.MARIST.EDU
 Betreff: weird request
 
 Hi All,
 
 I received a weird request from potential customer. Because 
 we will charge him per space occupied on our server, he wants 
 to backup only files less than a year old. He doesn't want to 
 go into each folder and try to find files that meet the 
 criteria. I know that Arcserve has that filter option.
 
 I don't think that we can do it with TSM, but before give him 
 an answer I thought I would check opinion of The List.
 
 Thanks in advance,
 
 
 Joe Crnjanski
 Infinity Network Solutions Inc.
 Phone: 416-235-0931 x26
 Fax: 416-235-0265
 Web: www.infinitynetwork.com
  
 


AW: Help me convincing IBM please!!!!

2005-04-08 Thread Salak Juraj
Here you go!

it is a 2-CPU IBM 345, 2 years old,
with the database split into 4 database volumes over two RAID-1 IBM SCSI arrays, 
10k RPM.
Both disks also share half of the 2 GB log file (in normal mode),
disk one also holds the W2K operating system, and disk two holds a seldom-used 
storage pool.
This might explain the big difference among the single results.

Regards
juraj


PCT_UTILIZED AVAIL_SPACE_MB
------------ --------------
        83.7          15000


ACTIVITY DateObjects Examined Up/Hr
-- -- -
EXPIRATION 2003-10-13   1875600
EXPIRATION 2003-10-16   4474800
EXPIRATION 2003-10-20860400
EXPIRATION 2003-10-24   1105200
EXPIRATION 2003-10-27730800
EXPIRATION 2003-10-28   1612800
EXPIRATION 2003-10-28   4813200
EXPIRATION 2003-10-29   1681200
EXPIRATION 2003-10-30   1692000
EXPIRATION 2003-10-30   5090400
EXPIRATION 2003-10-31   1695600
EXPIRATION 2003-10-31   3906000
EXPIRATION 2003-11-03   1299600
EXPIRATION 2003-11-04   2239200
EXPIRATION 2003-11-05   1670400
EXPIRATION 2003-11-06   1638000
EXPIRATION 2003-11-07   1584000
EXPIRATION 2003-11-10   1274400
EXPIRATION 2003-11-10   3862800
EXPIRATION 2003-11-11   1706400
EXPIRATION 2003-11-12   1411200
EXPIRATION 2003-11-12   3672000
EXPIRATION 2003-11-13   1522800
EXPIRATION 2003-11-14   1580400
EXPIRATION 2003-11-17   1123200
EXPIRATION 2003-11-17   252
EXPIRATION 2003-11-18   1602000
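For anyone comparing their own numbers, the objects-per-hour figure that the SQL query computes is plain arithmetic; a minimal Python sketch (the sample count and times are invented, chosen only to reproduce a rate like the first table row above):

```python
from datetime import datetime

# Expiration rate as computed by the summary-table query:
# objects examined divided by elapsed seconds, scaled to one hour
def objects_per_hour(examined, start, end):
    seconds = (end - start).total_seconds()
    return examined / seconds * 3600

# Invented sample: 937,800 objects expired in a 30-minute window
start = datetime(2003, 10, 13, 2, 0, 0)
end = datetime(2003, 10, 13, 2, 30, 0)
print(objects_per_hour(937800, start, end))  # -> 1875600.0
```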




 -Ursprüngliche Nachricht-
 Von: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] Im 
 Auftrag von Loon, E.J. van - SPLXM
 Gesendet: Donnerstag, 07. April 2005 11:57
 An: ADSM-L@VM.MARIST.EDU
 Betreff: Help me convincing IBM please
 
 Hi *SM-ers!
 Please help me with this and take the time to read this request
 
 We're having serious performance problems with our TSM 
 database. Part of the discussion with IBM is how expiration 
 should perform.
 A technical document on the TSM website stated that one 
 should be able to expire 3.8 million objects per hour. As 
 soon as I referred to this document (How to determine when 
 disk tuning is needed) IBM removed it from their site
 Expiration runs at a speed of 140,000 objects per hour in my 
 shop, and I know for sure that some of you are running at 3.8 
 million or more, and I would like to prove this to IBM.
 
 I'm begging for your help with this. I would like to ask you 
 to issue the following SQL statements on your TSM server(s) 
 and send the output directly to me: [EMAIL PROTECTED]:
 
 select activity, cast((end_time) as date) as "Date", (examined/cast
 ((end_time-start_time) seconds as decimal(18,13)) *3600) 
 "Objects Examined Up/Hr" from summary where 
 activity='EXPIRATION' and days(end_time)-days(start_time)=0
 
 select pct_utilized, avail_space_mb from db
 
 The first statement calculates your expiration performance 
 (objects/hour) and the second one lists your database size 
 and utilization.
 Thank you VERY much for sending me your statistics in 
 advance, I REALLY appreciate it!!!
 Kindest regards,
 Eric van Loon
 KLM Royal Dutch Airlines
 
 
 **
 For information, services and offers, please visit our web 
 site: http://www.klm.com. This e-mail and any attachment may 
 contain confidential and privileged material intended for the 
 addressee only. If you are not the addressee, you are 
 notified that no part of the e-mail or any attachment may be 
 disclosed, copied or distributed, and that any other action 
 related to this e-mail or attachment is strictly prohibited, 
 and may be unlawful. If you have received this e-mail by 
 error, please notify the sender immediately by return e-mail, 
 and delete this message. Koninklijke Luchtvaart Maatschappij 
 NV (KLM), its subsidiaries and/or its employees shall not be 
 liable for the incorrect or incomplete 

Re: file retention

2005-02-05 Thread Salak Juraj
another 2 cents:
using common backup methods, you will not be able to keep files created and 
deleted within a short time (less than one day).
 
If the concern is serious, you might think of not saving the original files (like 
Word documents), but of saving printouts (in electronic form).
One approach could be an application traversing your machines and printing all 
new/changed files to PDF; 
TSM would then only ARCHIVE -DELETE those PDF files.
More sophisticated, assuming only documents ever printed or e-mailed would have 
to be kept:
you could intercept your printing, mailing and faxing subsystems and save the 
print/mail/fax jobs in a common (PDF, RTF, ...) format forever.
 
Whatever you'll do, it will be accompanied by high costs and other problems your 
management never thought about before asking for this functionality. 
(e.g. access rights to all the saved documents, forever)
 
regards
Juraj
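The ARCHIVE -DELETE step mentioned above could be sketched as a client command line (the spool path and management-class name are invented; the options exist on the 5.x backup-archive client, but verify against your level):

```text
REM Archive the collected PDF printouts and delete the local copies
dsmc archive "D:\pdfspool\*" -subdir=yes -deletefiles -archmc=KEEP_FOREVER
```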
 



From: ADSM: Dist Stor Manager on behalf of Tim Piqueur
Sent: Thu 03/02/2005 16:44
To: ADSM-L@VM.MARIST.EDU
Subject: file retention



Hi,

I am relatively new to TSM so this might seem a stupid question...
Our management requires us (for IP reasons) to have a copy of every file
that has ever been created in our company. Of course, only user files
(e.g. excel, word, ppt, etc) ; we do not have to include database
backups etc.
Now I could set RETONLY to NOLIMIT but this would mean a dramatic
increase of storage requirements.
To limit the size of the storage pools I was wondering if there would be
any method to copy the files that have been deleted to tapes that we can
keep offsite. After which of course, those deleted files should
expire...

Any ideas?

Thanks!

Tim


AW: DB2 offline backups

2005-02-02 Thread Salak Juraj
while not being a DB2 expert in any way, here are my 2 cents:

assuming both scenarios perform a full backup,
I suspect the online backup backs up only the used database space (whose size you 
did not mention),
while the offline backup, obviously, backs up the whole reserved space,
and maybe you do not use compression on the TSM client, and maybe not even on 
the tapes.
Just a guess; go give it a check.

Juraj


 -Ursprüngliche Nachricht-
 Von: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] Im 
 Auftrag von Muthyam Reddy
 Gesendet: Mittwoch, 02. Februar 2005 19:28
 An: ADSM-L@VM.MARIST.EDU
 Betreff: DB2 offline backups
 Wichtigkeit: Hoch
 
 ** High Priority **
 
 Hi,
 We have DB2 database of size 530GB and it takes 3hrs time for 
 online backup with 4 tsm drives. When tried with same TSM/DB2 
 parameters and drives for offline backup it taking 7hrs. I 
 know it should take far less time for offline backups compare 
 to online.  I appreciate if any one can share their ideas to 
 improve offline backup window time.
 Meanwhile I will search docs for anything I have to change.
 
 thanks and regards
 muthym
 
 ++
 ++
 This electronic mail transmission contains information from 
 Joy Mining Machinery which is confidential, and is intended 
 only for the use of the proper addressee.
 If you are not the intended recipient, please notify us 
 immediately at the return address on this transmission, or by 
 telephone at (724) 779-4500, and delete this message and any 
 attachments from your system.  Unauthorized use, copying, 
 disclosing, distributing, or taking any action in reliance on 
 the contents of this transmission is strictly prohibited and 
 may be unlawful.
 ++
 ++
 privacy
 


AW: Using Windows Preinstallation Environment (Windows PE)

2005-01-21 Thread Salak Juraj
AFAIK, it's only for corporate Microsoft customers with some sort of volume 
license agreement, like SELECT.
But try searching for 
BartPE Builder; it is supposed to create a bootable Windows CD from a common 
original installation CD.

regards
Juraj Salak

 -Ursprüngliche Nachricht-
 Von: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] Im 
 Auftrag von Luc Beaudoin
 Gesendet: Montag, 17. Jänner 2005 21:31
 An: ADSM-L@VM.MARIST.EDU
 Betreff: Using Windows Preinstallation Environment (Windows PE)
 
 Hi all
 
 Is there anyone out there that got the LINK to download the 
 Windows PE ...
 
 I want to use it for my Windows servers ...
 
 thanks again
 
 Luc
 


AW: Using Windows Preinstallation Environment (Windows PE)

2005-01-21 Thread Salak Juraj
I've just got it, it's here: http://nu2.nu/pebuilder/ 

 -Ursprüngliche Nachricht-
 Von: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] Im 
 Auftrag von Luc Beaudoin
 Gesendet: Montag, 17. Jänner 2005 21:31
 An: ADSM-L@VM.MARIST.EDU
 Betreff: Using Windows Preinstallation Environment (Windows PE)
 
 Hi all
 
 Is there anyone out there that got the LINK to download the 
 Windows PE ...
 
 I want to use it for my Windows servers ...
 
 thanks again
 
 Luc
 


AW: Creating a copy storage pool with a different retention time

2005-01-05 Thread Salak Juraj
hallo,

Retention is NOT bound to a storage pool; it is bound to backup/archive copy 
groups.
Also, there is no copy functionality for data from one storage pool to another 
storage pool.
As you see, your question is worded with less-than-perfect precision,
so no precise answer is possible.

best regards
Juraj




 -Ursprüngliche Nachricht-
 Von: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] Im 
 Auftrag von Weinstein, Stephen
 Gesendet: Dienstag, 04. Jänner 2005 17:16
 An: ADSM-L@VM.MARIST.EDU
 Betreff: Creating a copy storage pool with a different retention time
 
 Currently I have my policy set to retain file backups for 1 
 year.  First I do my backups to my disk storage pools, I then 
 have 2 copy pools,  I copy the data to a non-collocated copy 
 pool I send off site for 1 year, and then make a second copy 
 which I keep onsite  as a spare copy of my onsite-pool that 
 gets migrated from the disk pool.  What I would like to do is 
 create another copy storage pool with a different retention, just
 14 days.  How would I do this, or the better question is is 
 possible to do??
 
 
 **
 This email and any files transmitted with it are confidential 
 and intended solely for the use of the individual or entity 
 to whom they are addressed. If you have received this email 
 in error please notify the system manager at postmaster at 
 dor.state.ma.us.
 **
 


AW: Sub-File Backup - 2 GB Limit

2004-12-29 Thread Salak Juraj
Hi,

maybe a year ago I asked the same question, and one of the TSM programmers responded 
that this was not a complicated technical issue, but the developers would need 
an official requirement for it in order to do it.
I was too lazy and/or busy to open an official requirement.

best regards
Juraj Salak
 

 -Ursprüngliche Nachricht-
 Von: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] Im 
 Auftrag von Rushforth, Tim
 Gesendet: Montag, 20. Dezember 2004 18:24
 An: ADSM-L@VM.MARIST.EDU
 Betreff: Sub-File Backup - 2 GB Limit
 
 Hi:
 
  
 
 Just curious if anyone knows if this limit has changed or 
 will change in the future?
 
 We have some users asking about it.
 
  
 
 Thanks,
 
  
 
 Tim Rushforth
 
 City of Winnipeg
 
  
 
  
 
 Forum:   ADSM.ORG - ADSM / TSM Mailing List Archive 
  Date:  2003, Jan 06 
  From:  Jim Smith nobody at nowhere.com
 mailto:nobody%20at%20nowhere.com  
 
 Tim,
  
 Actually, two different behaviors based on two different problems.
 Files
 that start less than 2 GB but grow > 2 GB will continue to 
 use subfile backup as long as the other requirements for the 
 base file (i.e., the file on the client cache) is still 
 valid.  The limiting factor here is that there is only 32-bit 
 support in the differencing subsystem that we are using.  We 
 chose 2 GB on the onset (instead of 4 GB) as the limit to 
 avoid any possible boundary problems near the 32-bit 
 addressing limit and also because this technology was aimed 
 at the mobile market (read: who is going to have files on 
 their laptops > 2 GB).  I understand that there are several 
 shops that use this technology beyond the laptop environment.
 Ultimately, the solution is to have a 64-bit subsystem in 
 place so that we can go beyond 4 GB.  I suggest a requirement 
 to Tivoli if this is important to your shop.
  
 The low-end limit (1024 bytes) was due to some strange 
 behavior with really small files, e.g., if a file started out 
 at 5 k and then was truncated to 8 bytes.  The solution was 
 to just send the entire file if the file fell below the 1k 
 threshold.  We can get away with resending these small files 
 because ... they are small files!  It is probably a wash to 
 resend or to try to correctly send a delta file in this case.
  
 Hope this helps.
  
 - Jim
  
 J.P. (Jim) Smith
 TSM Client Development
 smithjp AT us.ibm DOT com mailto:smithjp%20AT%20us.ibm%20DOT%20com 
 
  
 


Re: use of preallocated files in disk stgpool using devtype of fi le

2004-12-11 Thread Salak Juraj
Tim wrote: (If you don't pre-allocate the volumes they continually grow
resulting in a lot of file system fragments). 

Certainly true.

If one could change the default fragment size for file expansion, 
which seems to be set to only a few kilobytes on NTFS, 
to maybe hundreds of megabytes, this could help much.

I could do it 25 years ago, as a file-system setting 
on 2 of the 3 available loadable file systems on the 16-bit PDP-11, a 
machine running RSX (a multi-user multi-tasking OS requiring almost 1/2 MB of RAM).

Is there a way to change this behaviour under current, 
newer-than-NewTechnology operating systems?

AFAIK, programmers may change this value on an opened NTFS file basis;
maybe somebody from IBM/Tivoli is listening. 
I played with those values on the PDP and it did have 
a huge impact on operational speed.

best regards

Juraj Salak, Austria

I did a test of predefining volumes with dsmfmt (TSM 5.2.2.4 on an NTFS file
system on Windows 2003).  If the volume is not completely full, the size of
the file on disk changes from 20GB (say) to the amount of data used.  As
more data is migrated later the volume was appended to.

I wanted to test pre-allocating the volumes with dsmfmt (as opposed to just
defining them to the storage pool) to try to eliminate file system
fragmentation.  (If you don't pre-allocate the volumes they continually grow
resulting in a lot of file system fragments).

But the result I saw above made me wonder if it was worthwhile, i.e. after a
while all of these pre-allocated volumes would end up in fragments.

But I may be missing something here   Has anybody else looked into this?

Thanks,

Tim Rushforth
City of Winnipeg
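For context, the pre-allocation being tested looks roughly like this (the volume path, size and pool name are invented, and the dsmfmt flags are per TSM 5.x, so check them on your level before relying on this):

```text
REM Pre-allocate a 20 GB storage-pool volume file (size in MB with -m)
dsmfmt -m -data d:\tsmdata\vol01.dsm 20480

/* Then define it to the storage pool from the admin CLI */
DEFINE VOLUME SATAPOOL d:\tsmdata\vol01.dsm
```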

-Original Message-
From: Steve Bennett [mailto:[EMAIL PROTECTED]
Sent: Thursday, December 09, 2004 5:34 PM
To: [EMAIL PROTECTED]
Subject: use of preallocated files in disk stgpool using devtype of file

We are supplementing our existing ATL with a 6TB SATA. Clients will
continue to backup directly to the TSM server's local SCSI disk which
will get migrated to the SATA stgpool which will migrate to the ATL.

Since our W2K TSM server is limited to 2TB file systems we will be
allocating 3 filesystems for the 6TB of space. Because of the single
path issue when using dynamically allocated scratch volumes in the SATA
pool I intend to define the pool with maxscratch of 0 and preallocate
all the the vols with the dsmfmt command and then define all the vols to
the SATA pool. So far so good.

In the case of dynamically allocated vols TSM allocates and then
increments the size of the vol as needed up to the max size specified.
When no longer needed the vol is then deleted so the space can be reused.

When using predefined 20GB vols will TSM append to the end of the vol if
it is not completely full just as it does for tape vols or does it go to
the next available volume in scratch status? I suspect and hope the
answer is the latter but what's the real answer?

TIA

--

Steve Bennett, (907) 465-5783
State of Alaska, Enterprise Technology Services, Technical Services Section


AW: Remote Backups....

2004-11-24 Thread Salak Juraj
perfect!

in addition to these points, do some planning and tests for restore.
Basically, your problem is the full restore.
Either you can afford to wait long enough to restore over the remote line 
(learning about restartable restores may be important), 
or produce backup sets and send them by post to the remote location.

best regards
Juraj
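The backup-set route can be sketched like this (node name, set name and device class are invented; TSM 5.x admin-CLI syntax, so verify locally):

```text
/* On the server: write the node's active files to portable media */
GENERATE BACKUPSET REMOTE01 REMOTE01_SET * DEVCLASS=LTO1 SCRATCH=YES

/* At the remote site the client restores locally from the shipped
   media, e.g.: dsmc restore backupset REMOTE01_SET -location=tape */
```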

 -Ursprüngliche Nachricht-
 Von: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] Im 
 Auftrag von Joe Crnjanski
 Gesendet: Mittwoch, 24. November 2004 17:25
 An: [EMAIL PROTECTED]
 Betreff: Re: Remote Backups
 
 -Use compression
 
 -Use sub-file backup. (doesn't work on files larger than 2GB; 
 otherwise enormous improvements)
 
 -Encryption doesn't hurt if you are moving the data over 
 public network
 (Internet)- will be improved in TSM 5.3
 
 -Don't backup system object every day (around 200MB-300MB). 
 Make additional schedule for system object and C drive (maybe 
 on weekends)
 
 -Choose carefully what you need to backup (include/exclude)
 
 Regards,
 Joe C.
 
 
 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] 
 On Behalf Of Michael, Monte
 Sent: Tuesday, November 23, 2004 3:32 PM
 To: [EMAIL PROTECTED]
 Subject: Remote Backups
 
 Fellow TSM administrators:
 
 My company is currently looking at backing up approximately 
 40 NT servers at our remote locations, back to our local data 
 center.  Each location has around 10gb -  40gb of storage, 
 and very minimal daily change activity on the files.  Some of 
 the locations are 256k data lines, and some are t1 lines.
 
 Does anyone have a list of best practices?  What are some of 
 the options that you have found to improve the process of 
 remote backups via TSM to a central location.  Any help and 
 input that you can provide is much appreciated.
 
  Thank You,
  
  Monte Michael
 
 
 This communication is for use by the intended recipient and 
 contains information that may be privileged, confidential or 
 copyrighted under applicable law.  If you are not the 
 intended recipient, you are hereby formally notified that any 
 use, copying or distribution of this e-mail, in whole or in 
 part, is strictly prohibited.  Please notify the sender by 
 return e-mail and delete this e-mail from your system.  
 Unless explicitly and conspicuously designated as E-Contract 
 Intended, this e-mail does not constitute a contract offer, 
 a contract amendment, or an acceptance of a contract offer.  
 This e-mail does not constitute a consent to the use of 
 sender's contact information for direct marketing purposes or 
 for transfers of data to third parties.
 
  Francais Deutsch Italiano  Espanol  Portugues  Japanese  
 Chinese Korean
 
 http://www.DuPont.com/corp/email_disclaimer.html
 


AW: /usr/tivoli contents disappear ?

2004-09-30 Thread Salak Juraj
Hi,

even though I am on Windows 2000,
the "feature" I can see here
might be caused by similar tsm code: (W2K Server with tsm client 5.2.2:)

"tsm" directory: ALL user access rights for ALL users get occasionally lost.
(ACLs remain there, but all access rights are set to <empty>)

If it happens at all then it happens
shortly(?) or immediately(?)  after the end of either a scheduled backup or
after a check for schedule.
And no, there is no postschedule.cmd defined.

I only saw it on 2 servers and only occasionally; it remains dubious.
I have neither an explanation nor a solution, only workarounds.

While there are differences to your problem, the common thing is
that the tsm directory has dubious problems while the rest of the file system
has none.

???

regards
Juraj salak



-Ursprüngliche Nachricht-
Von: Richard Mochnaczewski [mailto:RichardM@INVERA.COM]
Gesendet: Montag, 27. September 2004 15:15
An: ADSM-L@VM.MARIST.EDU
Betreff: /usr/tivoli contents disappear ?


Hi,

Has anyone encountered an issue where an AIX server is rebooted and the
contents of /usr/tivoli go missing ? I've had it happen twice on two
separate servers within the span of a month.  Is this a known issue or does
someone just not like me very much ?

Rich


****************************************************************
This e-mail may be privileged and/or confidential, and the sender does not
waive any related rights and obligations. Any distribution, use or copying
of this e-mail or the information it contains by other than an intended
recipient is unauthorized. If you received this e-mail in error, please
advise me (by return e-mail or otherwise) immediately.



AW: how to speed up restores?

2004-09-24 Thread Salak Juraj
hi,
besides other helpful answers,
you may want to check how quickly your client can create small files.
Write a script which creates 974008 files in a directory tree in a temp
directory and watch.
This can be a bottleneck as well.
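A throwaway benchmark of the kind described above might look like this (a sketch only; the count and flat layout are simplifications of the real 974008-file tree, and a deep directory tree will usually be slower still):

```shell
#!/bin/sh
# Measure the small-file creation rate. N is deliberately small here --
# scale it toward your real object count (e.g. 974008) once it runs.
N=5000
DIR=$(mktemp -d)
start=$(date +%s)
i=0
while [ "$i" -lt "$N" ]; do
    echo x > "$DIR/f$i"
    i=$((i + 1))
done
end=$(date +%s)
secs=$(( end - start ))
[ "$secs" -eq 0 ] && secs=1     # avoid divide-by-zero on fast disks
echo "$N files in ${secs}s (~$(( N / secs )) files/sec)"
rm -rf "$DIR"
```

Dividing your file count by the measured files/sec gives a floor on restore time that no amount of server or tape tuning can beat.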

If neither this nor the network is the bottleneck,
then tape seeks probably will be.
In that case you could plan $$ for a disk-based primary backup storage pool
;-)
at least for small files (you can have small files landing on disk STG
while large ones bypass it and go to tape)

regards
Juraj


-Ursprüngliche Nachricht-
Von: Troy Frank [mailto:[EMAIL PROTECTED]
Gesendet: Freitag, 24. September 2004 20:30
An: [EMAIL PROTECTED]
Betreff: Re: how to speed up restores?


I have had to restore ~52GB of data several times now, and it generally
takes around 6 hours over Gigabit ethernet.  It's around 800,000 files,
most of them very small.  Besides the small files, it also has the
problem of being a high-turnover group of files.  So the restore is
spread out over a lot of tapes.  I would think (hope) that this
represents pretty close to a worst-case scenario.


Troy Frank
Network Services
University of Wisconsin Medical Foundation
608.829.5384

 [EMAIL PROTECTED] 9/24/2004 11:45:15 AM 
Something came up the other day talking about one of our large intel
fileservers (windows 2003) on how long it might take to restore its
shared drive should the entire drive be lost. Today I used the client
TSM gui and requested an estimate of file sizes and times to restore
this intel drive. The gui came back with the estimate of 125 hours
to restore 260GB of data contained in 974008 objects.

How good are these estimates and how can I get this time down from
0.0346GB/minute to 1.0GB/minute (260 minutes to restore 260GB of
data)? The network is 100Mb/sec, so that's 450MB/hour. That seems
to be 591.8MB/hour. The reports do not show that compression is
turned on for any (not this) clients.

How does this work out?

Mike
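For what it's worth, the quoted figures can be cross-checked with a little integer arithmetic (my calculation, not from the original post):

```shell
# Cross-check the quoted figures: 260 GB estimated to take 125 hours,
# over a 100 Mb/s link (ideal-case ceiling, ignoring protocol overhead).
gb=260; hours=125
est_mb_per_hour=$(( gb * 1024 / hours ))     # what the GUI estimate implies
net_mb_per_hour=$(( 100 * 3600 / 8 ))        # what the wire could carry
echo "estimate: ${est_mb_per_hour} MB/hour; ceiling: ${net_mb_per_hour} MB/hour"
```

The 125-hour estimate works out to roughly 2.1GB/hour against a wire that could in theory carry ~45GB/hour, so the network is not the limiting factor; that points back at per-file overhead (file creation time, tape mounts and positioning).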

Confidentiality Notice follows:

The information in this message (and the documents attached to it, if any)
is confidential and may be legally privileged. It is intended solely for
the addressee. Access to this message by anyone else is unauthorized. If
you are not the intended recipient, any disclosure, copying, distribution
or any action taken, or omitted to be taken in reliance on it is
prohibited and may be unlawful. If you have received this message in
error, please delete all electronic copies of this message (and the
documents attached to it, if any), destroy any hard copies you may have
created and notify me immediately by replying to this email. Thank you.


AW: How to prune all those other Win log files?

2004-08-13 Thread Salak Juraj
go for a tail under windows.
For a zero-cost solution you may search for cygwin.
For a professional / expensive one you may search for MKS.
There are a few others as well.

Both include a csh shell as well.

I prefer the windows-compatible shell 4NT from JPSoft, which has tail as an
internal command.
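With any of those ports installed, the pruning itself is a one-liner along these lines (a sketch: the temporary stand-in file here takes the place of the real TDP log, whose name you would substitute):

```shell
# Keep only the last 100 lines of a log file. In real use, point LOG at
# the actual TDP log instead of this temporary stand-in.
LOG=$(mktemp)
seq 1 250 > "$LOG"                              # simulate a grown log
tail -n 100 "$LOG" > "$LOG.tmp" && mv "$LOG.tmp" "$LOG"
wc -l < "$LOG"
```

Writing to a temp file and renaming it back avoids truncating the log while the TDP software may still be appending to it.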

hope this helps
regards
juraj
 

-Ursprüngliche Nachricht-
Von: T. Lists [mailto:[EMAIL PROTECTED]
Gesendet: Donnerstag, 12. August 2004 23:11
An: [EMAIL PROTECTED]
Betreff: OT: How to prune all those other Win log files?


This is a Windows question, not a TSM question ...

How can you prune a log file (not dsmsched.log or
dsmerror.log) to keep it to a manageable size?  We've
got a lot of other log files generated by the TDP
software on the clients, and they are growing quite
large.

Were this unix, I'd just tail out the last 100 or so
lines of the file and save that.  How can you
accomplish something similar with Windows/MSDos?



__
Do you Yahoo!?
Yahoo! Mail - 50x more storage than other providers!
http://promotions.yahoo.com/new_mail


AW: TSM and Active Directory

2004-07-16 Thread Salak Juraj
the only point with AD I see is 
once you change your TSM server (name and IP)
you do not have to edit your dsm.opt files to point to the new tsm server

regards
Juraj salak


-Ursprüngliche Nachricht-
Von: Coats, Jack [mailto:[EMAIL PROTECTED]
Gesendet: Donnerstag, 15. Juli 2004 18:34
An: [EMAIL PROTECTED]
Betreff: TSM and Active Directory


Ok, I haven't researched it well, but does someone have a thumbnail of
advantages/disadvantages of having TSM advertise itself via Active Directory?
Currently we are a totally Winders (almost - a few AIX machines too)
environment.

Even a nice RTFM reference is appreciated.

TIA ... Jack


AW: slow running windows 2000 client backup

2004-07-13 Thread Salak Juraj
Does the TSM Journal service run flawlessly on the slow machine on all
filesystems backed up?
regards
juraj

-Ursprüngliche Nachricht-
Von: Levi, Ralph [mailto:[EMAIL PROTECTED]
Gesendet: Montag, 12. Juli 2004 19:17
An: [EMAIL PROTECTED]
Betreff: slow running windows 2000 client backup


I have a windows 2000 client (v5 sp4 build 2195) running tsm 5.1.7
whose incremental backup never ends.  Using the timestamps next to
message:  ANS1898I , it is taking between 5 minutes and 1 hour to parse
through 500 files.  I have 100+ other servers that take about 2/100 of a
second to pass through 500 files.

Tried rebooting the client and ran performance monitor.  Neither helped
or showed any problems.

Here is the bad server:

07/11/2004 14:23:08 ANS1898I * Processed   194,500 files *
07/11/2004 14:25:17 ANS1898I * Processed   195,000 files *
07/11/2004 14:34:18 ANS1898I * Processed   195,500 files *
07/11/2004 15:16:52 ANS1898I * Processed   196,000 files *

Here is a typical server:

07/04/2004 21:45:02 ANS1898I * Processed   649,000 files *
07/04/2004 21:45:05 ANS1898I * Processed   649,500 files *
07/04/2004 21:45:08 ANS1898I * Processed   650,000 files *
07/04/2004 21:45:10 ANS1898I * Processed   650,500 files *

Does anyone have any ideas?  The usual questions have been asked (what
changed?) but we still come up blank.

Thanks,
Ralph


AW: Canceling processes

2004-06-04 Thread Salak Juraj
I agree

Juraj

-Ursprüngliche Nachricht-
Von: Tom Kauffman [mailto:[EMAIL PROTECTED]
Gesendet: Freitag, 04. Juni 2004 20:41
An: [EMAIL PROTECTED]
Betreff: Re: Canceling processes


Yup. Why is it that TSM can clean up/roll back the database after shutting
the server down for this, but doesn't seem to be able to support the same
level of cleanup to allow a 'force' on the cancel command?

I've had to bounce the server quite a bit back when a DLT drive would start
throwing write errors part way into a 9.6 GB file.

Tom Kauffman
NIBCO, Inc

-Original Message-
From: Roger Deschner [mailto:[EMAIL PROTECTED]
Sent: Friday, June 04, 2004 12:32 PM
To: [EMAIL PROTECTED]
Subject: Re: Canceling processes

You've missed the point. He's in a Deadly Embrace situation because the
work unit or whatever can never complete due to a hardware issue. I've
had this happen too. The only way to end the process is to restart the
whole TSM server. What if the CEO is halfway through restoring his hard
drive at the time? We need a force parameter on the cancel command.

Roger Deschner  University of Illinois at Chicago [EMAIL PROTECTED]


On Thu, 3 Jun 2004, Coats, Jack wrote:

My experience tells me that when you cancel either a client session or a
reclamation or migration process, TSM flags the process for cancel.
Then whenever the process completes a work unit (transferring a file or
glob of files, or whatever) the process itself checks to see if it is being
canceled.  If it has its cancel-pending flag set, then the process finishes
cleaning up and terminates; otherwise it continues on its merry way.

I have found that if a large file is being transferred, it can take over an
hour for a process to terminate.

... Just my experiences, and my thought process of how things work in 'my
world' :) ... JC

 -Original Message-
 From: Rob Hefty [SMTP:[EMAIL PROTECTED]
 Sent: Thursday, June 03, 2004 9:58 AM
 To:   [EMAIL PROTECTED]
 Subject:  Re: Canceling processes

 First end reclamation with the update stgpool  reclamation=100,
 then cancel any active reclamation processes with the cancel process
 XXX command and dismount any idle volumes with the dismount vol
 XX.  Then I would verify that the tapes reclamation needs are all
 in the library and are available (q vol XX f=d -OR- q vol
 acc=unavail).

 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of
 Moses Show
 Sent: Thursday, June 03, 2004 9:50 AM
 To: [EMAIL PROTECTED]
 Subject: Canceling processes

 Hi everybody,
 Seem to have a bit of a problem in that I have a couple of space
 reclamation jobs running on a TSM server at present. However there seems
 to be an issue with our 3583 tape library as although media requested
 for mounting is physically in the library they won't seem to mount.
 Tried cancelling these processes with the cancel command but after
 nearly 80 minutes the processes still haven't been cancelled. Was going
 to run a command to turn off space reclamation, but am unsure if this
 stops active jobs or just prevents new reclamation jobs from starting.
 Could someone help me out here please?
 
 ==
 This communication, together with any attachments hereto or links
 contained herein, is for the sole use of the intended recipient(s) and
 may contain information that is confidential or legally protected. If
 you are not the intended recipient, you are hereby notified that any
 review, disclosure, copying, dissemination, distribution or use of this
 communication is STRICTLY PROHIBITED.  If you have received this
 communication in error, please notify the sender immediately by return
 e-mail message and delete the original and all copies of the
 communication, along with any attachments hereto or links herein, from
 your system.

 
 ==
 The St. Paul Travelers e-mail system tdmmsws2 made this annotation on
 06/03/2004, 10:46:25 AM.

CONFIDENTIALITY NOTICE:  This e-mail and any attachments are for the
exclusive and confidential use of the intended recipient.  If you are not
the intended recipient, please do not read, distribute or take action in
reliance upon this message. If you have received this in error, please
notify us immediately by return e-mail and promptly delete this message and
its attachments from your computer system. We do not waive attorney-client
or work product privilege by the transmission of this message.


AW: Cristie BMR solution and TSM

2004-06-03 Thread Salak Juraj
QUESTION:
While it is integrated with TSM, how would we go about with the
interface,
the GUI in TSM. Would there be some server side component added for
interfacing or is it inherent?

Cristie's solution is client-side only; you will not install anything
additional on the TSM server.

Juraj salak


AW: Small Sites

2004-05-19 Thread Salak Juraj
Hi,

as far as I understand you,
you only want to back up this site into an existing large TSM server
elsewhere.
If not, GOTO (2)

A small TSM server per site, if necessary at all,
is not practicable without both local administration and a tape library.

I have a few sites like this (with the exception of Domino)
and the answer is:
first: it works, assuming that the amount of files changed daily
is significantly less than the WAN capacity in the backup window.
Second, read first again.
Third, compression and subfile backup can help much, but it depends
on the data and your configuration.

Probably backup of the system object will not be practicable,
but backup of data files probably yes.

You will have to tune include/exclude options down,
you will have to tune network parameters,
compression/compressalways parameters (per directory, file type..),
subfilebackup,
and you will want to use the journal service.

FORGET the weekly full backup, no one needs it. Read TSM concepts,
browse this user forum.
What you/your users need is RESTORE and ONLY RESTORE;
no one in the world ever needed a backup, not to say a full backup.

Consider your restore requirements:
if a full restore is an issue, it will take a long time through the WAN.
Maybe backup set generation and physical transfer of the backup set, along
with
a technician and an on-site restore, is required and/or practicable.

For backup of Domino, think about replication. This might cause less
bandwidth consumption
compared to TSM backup.

(2) if you want a backup server (tsm or not) in each of these sites,
you have apparently got a problem with both budget and administration,
in fact.

these are only a couple of points from more to be considered

regards
Juraj


-Ursprüngliche Nachricht-
Von: Douglas Currell [mailto:[EMAIL PROTECTED]
Gesendet: Mittwoch, 19. Mai 2004 09:19
An: [EMAIL PROTECTED]
Betreff: Small Sites


Has anyone deployed TSM in a very small site? When I say a small site I mean
something like this:


   4 or 5 workstations & 2 or 3 servers
   very limited bandwidth available from WAN - assume 56k (yes)
   NO local administration
   NO budget for tape library
   TSM sharing a physical server with Domino, platform is Windows
   150 MB per user per week storage growth
   1 X 73 GB hd for use as storage pool/file device
   100Mb LAN
   access to large remote TSM servers
   12 hour backup window, assumption that WAN link is at 56k (yes!)
   backup of local workstations preferred but not required
   TSM clients would mostly be Windows XP/2003 but could include Linux, in
particular, and just about anything else.
   weekly full backup required to remote TSM server


Would this be a candidate for remote NAS backup? I am looking at TSM because
it allows for policy implementation, can accommodate clients on just about all
platforms and also because it allows backup to disk. This is not to say that
other backup/restore products could be used - Retrospect might be usable
too..

To me, this is an interesting exercise. Please share your thoughts..Thank
you





-
Post your free ad now! Yahoo! Canada Personals


Andy, the forum becomes blended

2004-05-11 Thread Salak Juraj
no major troubles,
no arguing about functionality issues

either TSM is vanishing from the earth
or the software performs very well,

which one is true?

regards
Juraj


AW: permanent delete of an backuped-up object?

2004-04-21 Thread Salak Juraj
thnx,

and next, a quick and dirty method is the unsupported delete object (or so)
command,
described for example on Richard Sims' splendid page.

Yesterday I must have deleted parts of my personal memory,
apparently as a result of discontinued coffee absorption,
luckily not a permanent issue :-)))

regards
juraj

-Ursprüngliche Nachricht-
Von: Thorneycroft, Doug [mailto:[EMAIL PROTECTED]
Gesendet: Dienstag, 20. April 2004 22:43
An: [EMAIL PROTECTED]
Betreff: Re: permanent delete of an backuped-up object?


One method is to define a mgmt class with zero versions deleted,
use include/exclude to backup the object and rebind it to the new 
class, then either delete the object or exclude it from backup, and
it will be deleted on the next backup.




-Original Message-
From: Salak Juraj [mailto:[EMAIL PROTECTED]
Sent: Tuesday, April 20, 2004 9:40 AM
To: [EMAIL PROTECTED]
Subject: permanent delete of an backuped-up object?


hi all,

there has been a method how to permanently delete an already backed-up
object; can someone give me a quick hint?

Juraj


permanent delete of an backuped-up object?

2004-04-20 Thread Salak Juraj
hi all,

there has been a method how to permanently delete an already backed-up
object; can someone give me a quick hint?

Juraj


AW: Restoring data after deleting a volume

2004-04-05 Thread Salak Juraj
well, DELETE VOLUME is a secure way.. to lose your data.

dsmadmc HELP DELETE VOLUME
...
  If the volume being deleted is a primary storage pool volume, the
  server checks whether any copy storage pool has copies of files
  that are being deleted. When files stored in a primary storage
  pool volume are deleted, any copies of these files in copy storage
  pools are also deleted.
...
...


1) If you are NOW to restore files from this volume,
simply tell TSM the volume is destroyed:
 UPDATE VOLUME BlahBlah access=destroyed
and a subsequent DSMC RESTORE will
restore files from the backup storage pool (assuming the volumes required are
not OFFSITE).
After restore use RESTORE VOLUME or RESTORE STG
to recreate your primary storage pool volumes.
Check the HELP command for more details.


2) Alternatively, if you are afraid that not all files from the volume concerned
have a redundant copy in a backup storage pool,
use AUDIT VOLUME first followed with RESTORE VOLUME or RESTORE STG,
a n d  with BACKUP STG,
then goto (1).


regards
juraj





-Ursprüngliche Nachricht-
Von: Mike Bantz [mailto:[EMAIL PROTECTED]
Gesendet: Montag, 05. April 2004 16:57
An: [EMAIL PROTECTED]
Betreff: Restoring data after deleting a volume


So I've got a volume that got all sorts of messed up somehow, one that
resides in my tapepool. I've tried moving the data from it with no luck -
too many errors. Since I should have a copy of this in the copy pool, I
should be all set. So I'm going to delete the volume with discarddata=yes
and make sure I've got the data from the copypool mirrored back onto the
tapepool (onto a new tape somewhere).

Question is, how do I do that? As in syntax? I can't just be restore vol
00035-L1 or something that simple, right?

Thanks in advance...

Mike Bantz
Systems Administrator
Research Systems, Inc


AW: Drive in DOMAIN, but not backed up

2004-04-02 Thread Salak Juraj
I had such an issue once:

my Q: drive had access rights set for administrator
but no rights at all for the user SYSTEM.

Because the tsm scheduler runs under SYSTEM,
it could not see the drive at all, so it neither
backed it up nor reported an error.

regards
Juraj


-Ursprüngliche Nachricht-
Von: Bill Boyer [mailto:[EMAIL PROTECTED]
Gesendet: Freitag, 02. April 2004 20:44
An: [EMAIL PROTECTED]
Betreff: Drive in DOMAIN, but not backed up


TSM Server 5.2.2.1 on Windows 2000
TSM Client 5.2.2.5 on Windows 2000

This client has a preschedule command to mount a filesystem. It's actually a
SNAPDRIVE on a Netapp appliance. The script runs CC=0 and is supposed to
mount the drive as Q:. The DSM.OPT file specifies DOMAIN Q:, but the last
incremental backup of that filesystem is listed as 3/8/04.

I thought that if you specified a domain and it wasn't available, you would
get an error on the backup?

There are no messages in the dsmerror.log file, and it's running with -QUIET
so there's not much in the DSMSCHED.LOG. I have changed it to run -VERBOSE
to see exactly what is being backed up.

Bill Boyer
Experience is a comb that nature gives us after we go bald. - ??


AW: Rebinding backups of deleted files.

2004-03-30 Thread Salak Juraj
1) restore,
2) backup (thus rebind)
3) delete

See it positive - as a recovery test :-)

regards
juraj

-Ursprüngliche Nachricht-
Von: Alan Davenport [mailto:[EMAIL PROTECTED]
Gesendet: Dienstag, 30. März 2004 16:07
An: [EMAIL PROTECTED]
Betreff: Rebinding backups of deleted files.


Hello Group,

 Just when you think you know how something works...

 I recently discovered that a user backup directory was not following
standard naming conventions and for this reason I was keeping 1 year's worth
of backups rather than 45 days. Their application creates unique names for
each backup then deletes old backups. I added and include option for their
backup directory for the 45 day management class and ran a backup. The
existing files in the directory rebound correctly to the 45 day class
however the TSM backups of the deleted files are still bound to the default
management class and will not go away for a year. This is causing me to
retain many many gigabytes more data than necessary. Is there any way to get
those files rebound to the 45 day class?


 Thanks,
 Al

Alan Davenport
Senior Storage Administrator
Selective Insurance Co. of America
[EMAIL PROTECTED]
(973) 948-1306


AW: sanity check, incremental probelms?

2004-03-30 Thread Salak Juraj
try dsmc incr * -subdir=yes

-Ursprüngliche Nachricht-
Von: Warren, Matthew (Retail) [mailto:[EMAIL PROTECTED]
Gesendet: Dienstag, 30. März 2004 16:45
An: [EMAIL PROTECTED]
Betreff: sanity check, incremental probelms?


Hallo TSM'ers,

Sun solaris, baclient V522

A mountpoint exists /app/sma/http

There are files and directories under this mountpoint.

When I;

Cd /app/sma
Dsmc
Dsmc inc ./* -subdir=y

I get a report that 2 objects were inspected and 0 backed up.

But if I

Cd /
Dsmc inc /app/sma/* -subdir=y

It backs up everything under http and onward...


I feel like I am missing something elementary...


Thanks for any help,

Matt.


___ Disclaimer Notice __
This message and any attachments are confidential and should only be read
by those to whom they are addressed. If you are not the intended recipient,
please contact us, delete the message from your computer and destroy any
copies. Any distribution or copying without our prior permission is
prohibited.

Internet communications are not always secure and therefore the Powergen 
Group does not accept legal responsibility for this message. The recipient
is responsible for verifying its authenticity before acting on the 
contents. Any views or opinions presented are solely those of the author 
and do not necessarily represent those of the Powergen Group.

Registered addresses:

Powergen UK plc, Westwood Way, Westwood Business Park,
Coventry CV4 8LG.
Registered in England & Wales No. 2366970

Powergen Retail Limited,  Westwood Way, Westwood Business Park,
Coventry CV4 8LG.
Registered in England and Wales No: 3407430

Telephone +44 (0) 2476 42 4000
Fax+44 (0) 2476 42 5432


AW: PERL question for dsmaccnt.log queries ?

2004-03-30 Thread Salak Juraj
I use the following:

dsmadmc "select sum(cast(bytes/1024/1024 as decimal(10,3))) as \"Backuped
MB\" from summary where start_time>current_timestamp - 1 day and
activity='BACKUP'"

The above is 1 long line.

If running from within a script, run dsmadmc with the -ID, -PASSW and
-DATAONLY=YES parameters.


regards
juraj


-Ursprüngliche Nachricht-
Von: Justin Case [mailto:[EMAIL PROTECTED]
Gesendet: Dienstag, 30. März 2004 17:37
An: [EMAIL PROTECTED]
Betreff: PERL question for dsmaccnt.log queries ?


Hi all I am not a PERL programmer so I could use some help with a script
that would query the
TSM accounting log (dsmaccnt.log) to pull out the amount of data backed up
in a 24 hour period (last 24 hours).
If someone could share it with me that would be great. Or if someone knows
of a better method
to compute the daily amount of data backed up by TSM server too..

Thanks

Justin


Device mount deadlock?

2004-03-30 Thread Salak Juraj
Hello all,

I know TSM does recognise and solve deadlocks.
However, this seems like a deadlock to me???

regards
Juraj

tsm: AOHBACKUP01q mount
ANR8333I FILE volume V:\TSM\RECLAIMPOOL1\1DF4.BFS is mounted R/W,
status: IN USE.
ANR8376I Mount point reserved in device class RECLAIM1, status: RESERVED.
ANR8376I Mount point reserved in device class LTOCLASS, status: RESERVED.
ANR8330I LTO volume AFW502L1 is mounted R/W in drive LTO1 (mt1.0.1.5),
status: IN USE.
ANR8329I DLT volume AOHB035 is mounted R/W in drive DLT3 (mt4.0.0.2),
status: IDLE.
ANR8329I DLT volume AOHB015 is mounted R/W in drive DLT2 (mt6.0.0.2),
status: IDLE.
ANR8329I DLT volume AOHB064 is mounted R/W in drive DLT1 (mt5.0.0.2),
status: IDLE.
ANR8334I 7 matches found.



tsm: AOHBACKUP01q proc

 Process Process Description  Status
  Number
 
-
 158 Space ReclamationVolume AFW508L1 (storage pool TPDB),
Moved Files:
   11806, Moved Bytes: 39,568,744,690,
Unreadable
   Files: 0, Unreadable Bytes: 0.
Current Physical
   File (bytes): 322,412,071
Waiting for access to
   input volume AFW502L1 (17096
seconds).
Current
   output volume:
V:\TSM\RECLAIMPOOL1\1DF4.BFS-
   .

 179 MigrationDisk Storage Pool DB, Moved Files: 0,
Moved
   Bytes: 0, Unreadable Files: 0,
Unreadable Bytes:
   0. Current Physical File (bytes):
   466,866,176
Waiting for mount point in device
   class LTOCLASS (16581 seconds).

 180 MigrationDisk Storage Pool DB, Moved Files: 0,
Moved
   Bytes: 0, Unreadable Files: 0,
Unreadable Bytes:
   0. Current Physical File (bytes):
57,344
Waiting
   for mount point in device class
LTOCLASS (16581
   seconds).

 181 MigrationVolume
V:\TSM\RECLAIMPOOL1\1DF4.BFS (storage
   pool RECLAIMTPDB), Moved Files: 0,
Moved Bytes:
   0, Unreadable Files: 0, Unreadable
Bytes: 0.
   Current Physical File (bytes):
   444,760,197
Waiting for access to input volume
   V:\TSM\RECLAIMPOOL1\1DF4.BFS
(16426
   seconds).
Current output volume: AFW502L1.

 193 Space ReclamationOffsite Volume(s) (storage pool
TPBACSAFE), Moved
   Files: 2804, Moved Bytes: 3,446,341,
Unreadable
   Files: 0, Unreadable Bytes: 0.
Current Physical
   File (bytes): 451,052,421
Waiting for mount
   points in device classes LTOCLASS and
DLTCLASS
   (8408 seconds).


tsm: AOHBACKUP01q devc

DeviceDevice Storage DeviceFormat  Est/Max
Mount
Class AccessPool Type Capacity
Limit
Name  Strategy Count  (MB)
- -- --- - -- 
--
DISK  Random   8
DISKDBBA- Sequential   0 FILE  DRIVE  20,000.0
1
 CKUP
DISKTMP   Sequential   0 FILE  DRIVE   2,000.0
4
DLTCLASS  Sequential   7 DLT   DLT1C  40,960.0
DRIVES (=3)
LTOCLASS  Sequential   5 LTO   DRIVE  102,400.
DRIVES (=2)
 0
RECLAIM1  Sequential  10 FILE  DRIVE   5,120.0
6

tsm: AOHBACKUP01


AW: Retaining Deleted files

2004-03-26 Thread Salak Juraj
yup, it will do.
It will cause the tsm server to grow NOLIMIT as well :-)

Alternatively,
you can offer incrementals that keep files for 6 or 7 months only
and do archives with the deletefiles option for older files.

this differs from the upper solution in that:

- you would be responsible for deleting files, not the customer
 (responsibility move, more service from you)

- there would be 100% security that only files backed up are deleted

- TSM storage usage and media maintenance, redundancy etc. could be easily
 differentiated for young and old files

- at a later time, you could delete portions of your customer's archived
files (in contrast to incremental,
where you can only delete the backup of a whole file system easily)

- on the other side, only the very last version of files would be kept
forever, not the last two

- assumption: there is an easy way for you/TSM to determine which
 files are to be considered over 6 months old

regards


Juraj

-Ursprüngliche Nachricht-
Von: Yiannakis Vakis [mailto:[EMAIL PROTECTED]
Gesendet: Freitag, 26. März 2004 10:15
An: [EMAIL PROTECTED]
Betreff: Retaining Deleted files


Hi,
I have the following situation and I need a confirmation here. System
information is : TSM 5.2. on Windows 2000, TSM BA client  5.2. on Windows
2000.

I'm backing up a folder with the incremental option, that contains a
complex subfolder tree. The contents of the subfolders are mainly
transaction files from various applications. The user wants to keep the
transaction files for 6 months on disk and then deletes the older ones (the
6th month) to make space for the new month. This is a user task and here I
don't have any involvement.
The user doesn't want to lose the deleted transaction files, because he
might want to restore something for investigation reasons.

I have set up a special policy for this. The backup copygroup definition is
as follows:

Policy Domain Name  TSM_DOMAIN
Policy Set Name ACTIVE
Mgmt Class Name TSM_MGMT_CLASS
Copy Group Name STANDARD
Versions Data Exists2
Versions Data Deleted   2
Retain Extra Versions   NOLIMIT
Retain Only Version NOLIMIT
Copy Mode   MODIFIED
Copy Serialization  SHRSTATIC
Copy Frequency  0
Copy Destination3590STGPOOL
Table of Contents (TOC) Destination -
Last Update Date/Time   2004-03-12 12:37:03.00
Last Update by (administrator)  YIANNAKIS
Managing profile-

I have specified NOLIMIT on retain extra versions and retain only version so
that I don't lose the deleted files. Is that correct ?
Thanks
Yiannakis


AW: define script problem

2004-03-26 Thread Salak Juraj
Hi,
you are on the right path; it has to do with parameter parsing,
an issue common to all command line interpreters, including the tsm command
line.
Your DEFINE SCRIPT command is interpreted as follows:

parameter1=VAULTING

parameter2=select volume_name
Why does it end here? It is a string delimited by a pair of quotes.

parameter3=Volumes
parameter4=to
etc. etc.

Solution: Tell the (dsmadmc) interpreter not to interpret some quotes, e.g.
those two:
"Volumes to go offsite"
but to accept them as characters.

How to do it?
While many common shells accept a special character for it, e.g. the backslash:

"select volume_name \"Volumes to go offsite\", voltype
as \"Volume Type\" from drmedia where state=\"MOUNTABLE \" order by
volume_name"

would be in many shells interpreted like that (numbers mean character number
#):

1   "   opening quote, defines the beginning of a string,
    thus any space characters found within will NOT be interpreted as
    delimiters
2   s   common character, pass it further as (a part of a) string
3   e   common character, pass it further as (a part of a) string
4   l   common character, pass it further as (a part of a) string
5   e   common character, pass it further as (a part of a) string
6   c   common character, pass it further as (a part of a) string
7   t   common character, pass it further as (a part of a) string
8   [space] normally a parameter delimiter, but because quoted
    it is a common character, pass it further as (a part of a) string
9   v   common character, pass it further as (a part of a) string
10  o   common character, pass it further as (a part of a) string
... etc. etc.
21  \   backslash means: do not interpret the following character but accept
    it as a common character
22  "   because of the above, not interpreted as a quote, only passed
    further as (a part of a) string
23  V   common character, pass it further as (a part of a) string
24  o   common character, pass it further as (a part of a) string
25  l   common character, pass it further as (a part of a) string
etc. etc.
115     common character, pass it further as (a part of a) string
116 "   closing quote, end of quoted string
117 [space] parameter delimiter, thus the above string is understood as the
    end of one parameter

BUT I am afraid backslash 
does not work this way in dsmadmc interpreter, so you are to use quoting of
quotes instead:

select volume_name Volumes to go offsite, voltype
as Volume Type from drmedia where state=MOUNTABLE  order by
volume_name

Wait - I always have doubts about proper count of  quotes,
While I guess I made a proper suggestion for this case,
play around, maybe  is to be used here instead :-)))
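The same quoting behaviour can be checked with any ordinary shell. A minimal sketch (count_args is a throwaway demo helper for this illustration, not a TSM or dsmadmc command):

```shell
# Count how many arguments a command actually receives.
# count_args is a hypothetical demo function, not part of TSM.
count_args() { echo $#; }

# Unquoted: every space acts as a parameter delimiter, so the
# SELECT statement falls apart into 6 separate arguments.
count_args select volume_name Volumes to go offsite

# Quoted: the whole SELECT travels as a single argument (1).
count_args "select volume_name 'Volumes to go offsite'"
```

The same principle applies inside dsmadmc: whatever mechanism the interpreter offers for protecting inner quotes keeps the column-heading strings attached to the SELECT instead of becoming extraneous parameters.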

hope I was not annoying
Juraj







-Original Message-
From: Smith, Rachel [mailto:[EMAIL PROTECTED]
Sent: Friday, March 26, 2004 16:03
To: [EMAIL PROTECTED]
Subject: define script problem


Hi,

I am trying to create a script that will produce a list of media to send
offsite.
I defined a script:
DEFINE SCRIPT VAULTING "select volume_name "Volumes to go offsite", voltype
as "Volume Type" from drmedia where state="MOUNTABLE" order by volume_name"
desc="Create vault lists"
And it fails with:  ANR2023E DEFINE SCRIPT: Extraneous parameter

I know it has something to do with the double quotes, but I also tried single
quotes and it failed with the same error.
Could someone tell me what I'm missing?

Thanks again.


AW: TSM server on a Domain controller w2k3.

2004-03-17 Thread Salak Juraj
Hello,
it creates some obstacles to restoring the DC from TSM
in case the W2K3 system - either OS or HW - has a problem
regards
Juraj



-Original Message-
From: Henrik Wahlstedt [mailto:[EMAIL PROTECTED]
Sent: Tuesday, March 16, 2004 20:13
To: [EMAIL PROTECTED]
Subject: TSM server on a Domain controller w2k3.


Hello,

I can't think up or find any 'No's' in the archive, but are there any
problems/issues with having a DC (w2k3) and a TSM server on the same machine?
Except load etc.


//Henrik



---
The information contained in this message may be CONFIDENTIAL and is
intended for the addressee only. Any unauthorised use, dissemination of the
information or copying of this message is prohibited. If you are not the
addressee, please notify the sender immediately by return e-mail and delete
this message.
Thank you.


AW: Simple Backup Copy Group Question

2004-03-17 Thread Salak Juraj
yes it is.
I'd only add "" around the directory/file specification.

Because you keep 3 file versions for 180 days,
there is one extra thing to be considered:
be aware of renaming files or directories, or moving files to
different directories.
Once this happens, the next tsm backup will expire
all extra versions, thus eliminating your effort
to keep extra backup versions for a long time.

regards
Juraj



-Original Message-
From: Farren Minns [mailto:[EMAIL PROTECTED]
Sent: Wednesday, March 17, 2004 09:16
To: [EMAIL PROTECTED]
Subject: Simple Backup Copy Group Question


Hi TSMers

Ok, I think I know what I'm doing, but I just want to check something to
make sure (call me paranoid if you like).

Running TSM 5.1.6.2 on Solaris. Backing up a Solaris 5.1.6.0 client.

I want to back up one directory and all its sub dirs with a different
Management Class from the rest of the server. Now, I have set up a new
management class and made sure it is active. The class in question is
called RETDEL750 and the Backup Copy Group under this looks as follows :-

Versions Data Exists        3
Versions Data Deleted        1
Retain Extra Versions        180
Retain Only Version        750

This means that when a file gets deleted from the server, we'll still have
it in backup for 750 days. This much is fine.

Now, under the client option set for this server I have added an include
statement as follows :-

include /app/production/prodlive/jasper/data/content/loading/.../*
retdel750

Is this the correct way to associate the backup copy group and make sure
that it is applied to all files and sub dirs?

Thanks in advance

Farren Minns - John Wiley & Sons Ltd


AW: AW: Simple Backup Copy Group Question

2004-03-17 Thread Salak Juraj
"Regarding the "" around the dir/filespec, is this necessary to make it
work?"
Surely not with W2K clients, as long as there are no spaces in dir/file
names.

I am not sure about unix clients, because file name expansion
behaves differently on the command line (actually
because of the difference in command line shells,
not in the tsm clients): the asterisk
is expanded by the shell before the (tsm)
program starts and is passed to
the tsm client as a series of individual dir/file names,
e.g.
dsmc incr /dir/*txt
would actually start
dsmc incr /dir/a.txt /dir/b.txt /dir/c.txt
(assuming there are exactly these 3 txt files)
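That shell-side expansion is easy to demonstrate without TSM at all; a minimal sketch using a scratch directory (the directory and file names are made up for the demo):

```shell
# The shell, not the invoked program, expands an unquoted glob.
demo=$(mktemp -d)                     # scratch directory for the demo
touch "$demo/a.txt" "$demo/b.txt" "$demo/c.txt"
cd "$demo"

# Unquoted: printf receives three separate arguments, one per matching file.
printf '%s\n' *.txt

# Quoted: the pattern is passed through literally for the program itself
# to interpret (this is what happens with patterns inside dsm.opt).
printf '%s\n' '*.txt'
```

A program that receives the literal pattern can re-evaluate it at any later time, which is why a pattern kept in an .opt file would also match files created after the option was written.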

This behaviour is very probably not the case with
asterisks in .opt files,
because the .opt file is fully under the control
of the tsm client, as opposed to being interpreted by a command line shell.

But maybe, though not very likely,
the TSM designers decided to make the .opt file
behave similarly to the command line and expand
the asterisks in
include *.txt mgmtclass
at the very moment of program start.
This WOULD make a difference - e.g.
a file d.txt created later would not be seen as part of
this include.

As I am more interested in this thing working
and less in knowing this detail,
I simply always put "" around.

Second:
I usually have a few dirs containing spaces;
there I must use quotes anyway.
I prefer things to be nice, meaning in this case
having the very same syntax, so I use quotes
on all includes.

:)
Juraj






-Original Message-
From: Farren Minns [mailto:[EMAIL PROTECTED]
Sent: Wednesday, March 17, 2004 09:55
To: [EMAIL PROTECTED]
Subject: Re: AW: Simple Backup Copy Group Question


Thanks for that

Regarding the extra versions, I don't think it matters here as there's
probably only ever one version anyway. Regarding the "" around the dir/file
spec, is this necessary to make it work?

Thanks

Farren


AW: AW: AW: Simple Backup Copy Group Question

2004-03-17 Thread Salak Juraj
 I actually set up all include / exclude statements within the TSM server
itself

I see - the cloptset editing interface is rather horrific;
it is quite tricky to use "" in cloptsets, isn't it?

Juraj


AW: AW: AW: AW: Simple Backup Copy Group Question

2004-03-17 Thread Salak Juraj
well, 
I am not satisfied with the cloptset editor at all.
Try using "" (having blanks in a path forces you to do so),
try to edit a path in an existing inclexcl statement,
try to gain an overview of particularly complicated cloptsets and/or inclexcl
statements in combination with local opt files and YES/NO parameters.

It works flawlessly but it is back-breaking work,
like editing text files with EDLIN.

Juraj


-Original Message-
From: Farren Minns [mailto:[EMAIL PROTECTED]
Sent: Wednesday, March 17, 2004 11:06
To: [EMAIL PROTECTED]
Subject: Re: AW: AW: AW: Simple Backup Copy Group Question


I think it's quite simple. I only ever really do exclude statements, and I
never have to use "" for those, so I assume it's the same for include
statements.

Just don't want to find out in two years time that we have files missing.

Farren




annoying: \TSM\ program directory access rights lost

2004-03-15 Thread Salak Juraj
Hello folks,
has anyone else experienced this:

W2K/German  TSM Client v5.2.2.0 installed in the default path
C:\Programme\tivoli\tsm\baclient\
Default access rights from W2K are set on these directories,
inherited from C:\Programme\

At some point, the inheritance is broken at the \TSM\ directory
and the access rights for all subdirectories
are set to read-only for everybody, including administrators.

On the positive side, it conserves disk space by preventing the growth of
log files :-)
on the negative side, it prevents dsm from being started.

No other program directories have been affected until now.
It happens at times when no real users are logged in.
It happens on 2 different servers (both members of the same AD domain).
We have played around with the event log but did not succeed in finding the
originator yet.

??
Juraj Salak


AW: TSM and VMWare

2004-03-12 Thread Salak Juraj
Ilja,
did you try a restore or only a backup?
Juraj

-Original Message-
From: Coolen, IG (Ilja) [mailto:[EMAIL PROTECTED]
Sent: Friday, March 12, 2004 09:14
To: [EMAIL PROTECTED]
Subject: Re: TSM and VMWare


Yup,

I've done that, and it works fine,
although you have to specify the /vmfs/filesystemname/* directory
explicitly, because TSM doesn't recognize it as a filesystem during its
scan of all local filesystems.


Ilja G. Coolen


  _  

ABP / USZO  
CIS / BS / TB / Storage Management  
Telefoon : +31(0)45  579 7938   
Fax  : +31(0)45  579 3990   
Email: [EMAIL PROTECTED]
Centrale Mailbox : Centrale Mailbox - BS Storage (eumbx05)

  _  

- Everybody has a photographic memory, some just don't have film. - 



-Original Message-
From: Rogelio Bazán Reyes [mailto:[EMAIL PROTECTED] 
Sent: vrijdag 12 maart 2004 2:13
To: [EMAIL PROTECTED]
Subject: TSM and VMWare

Anyone has tried to backup or restore a vmware file system on a VMware
server?

regards

--
-
Rogelio Bazán Reyes
Grupo Financiero Santander Serfín
Soporte Técnico
Tlalpan 3016. Col Espartaco
C.P. 04870
D.F., México
Tel. +52 +55 51741100 ext.19321 
+52 +55 51741953
-


=DISCLAIMER=


De informatie in dit e-mailbericht is vertrouwelijk en uitsluitend bestemd
voor de geadresseerde. Wanneer u dit bericht per abuis ontvangt, verzoeken
wij u contact op te nemen met de afzender per kerende e-mail. Verder
verzoeken wij u in dat geval dit e-mailbericht te vernietigen en de inhoud
ervan aan niemand openbaar te maken. Wij aanvaarden geen aansprakelijkheid
voor onjuiste, onvolledige dan wel ontijdige overbrenging van de inhoud van
een verzonden e-mailbericht, noch voor daarbij overgebrachte virussen.

The information contained in this e-mail is confidential and may be
privileged. It may be read, copied and used only by the intended recipient.
If you have received it in error, please contact the sender immediately by
return e-mail; please delete in this case the e-mail and do not disclose its
contents to any person. We don't accept liability for any errors, omissions,
delays of receipt or viruses in the contents of this message which arise as
a result of e-mail transmission.


AW: Retiring LTO tapes

2004-03-12 Thread Salak Juraj
other very useful commands in this context are "restore volume"
and/or "restore stg".
I love these two very much and highly appreciate the related TSM concepts :-)
Juraj

-Original Message-
From: Mitch Sako [mailto:[EMAIL PROTECTED]
Sent: Thursday, March 11, 2004 23:02
To: [EMAIL PROTECTED]
Subject: Re: Retiring LTO tapes


I share the same policy.  If I see any sort of error on a tape that looks
like it's jeopardizing the repository, I immediately move the data off
using 'move data' or worst case I delete the volume using discard=yes.  The
absolute worst possible thing I can think of is going to restore someone's
file and finding out that it's unavailable for some reason.  That's my
biggest nightmare, and luckily in my 15+ years of using ESMS/WDSF/ADSM/TSM
that has not happened yet.

The cost of the tape or the time needed to get rid of it is minuscule
compared to the value of a file that a user wants back really badly and
can't have back because it's not backed up reliably.

At 3/11/2004 07:41 AM Thursday, you wrote:
I have not found a useful policy. If I start getting write errors on tapes
I make them a candidate for retirement, no matter how new or old they are.


AW: AW: TSM and VMWare

2004-03-12 Thread Salak Juraj
it's good news, thanks Phil!

would you mind sharing with us the steps to restore the vmware OS itself?

Juraj

-Original Message-
From: Phil Jones [mailto:[EMAIL PROTECTED]
Sent: Friday, March 12, 2004 09:59
To: [EMAIL PROTECTED]
Subject: Re: AW: TSM and VMWare


Salak,

we've just done a DR test and the one thing that worked really well was the
restores of VM's.

Cheers

Phil Jones
Technical Specialist
United Biscuits
Tel (external) 0151 4735972
Tel (internal) 755 5972
e-mail: [EMAIL PROTECTED]


   


AW: TSM and VMWare

2004-03-12 Thread Salak Juraj
Hi,

there is a misunderstanding somewhere.
I thought Rogelio was asking about backup of the vmware OS itself,
not about virtual machines.

The task you talk about - backing up / restoring virtual machines -
works like a breeze; it is really a pleasure and
a viable disaster recovery solution
for Windows servers.

Juraj




-Original Message-
From: Coolen, IG (Ilja) [mailto:[EMAIL PROTECTED]
Sent: Friday, March 12, 2004 10:42
To: [EMAIL PROTECTED]
Subject: Re: TSM and VMWare


Of course we did both.
It looks and feels just like a simple file backup and restore, but you have
to make sure to also
include the configuration file of the guest OS in the backup and restore
actions, because they rely on each other.

Let's say we have a guest OS running in
/vmfs/sharkdisk/Windows2003-server.dsk
The configuration of this OS is stored in a .vmx file in the /root/vmware
directory. You can dictate the locations of these files, so this could vary.
But you need these files when you want to run the guest OS.

Grtz.

Ilja



AW: Retiring LTO tapes

2004-03-11 Thread Salak Juraj
I searched for this 3 years ago and found
the allowed number of mount / read / write passes to be very high,
with the conclusion that for my usage I do not have to care about it at all
(in contrast to DDS3 tapes, where the limits were very, very low).

I cannot support you with exact information,
but as far as I can remember I believe 30,000
or some other very high number of read/write passes is allowed.

I am sorry I have no idea about the aging.

You may want to ask IBM or Fuji or search on www.lto.org

regards
Juraj

-Original Message-
From: Stephen Comiskey [mailto:[EMAIL PROTECTED]
Sent: Thursday, March 11, 2004 13:00
To: [EMAIL PROTECTED]
Subject: Retiring LTO tapes


Hi,

I've a simple question for the group, we are using a mixture of IBM and
FujiFilm LTO1 tapes in a IBM3584 library.  Does anyone have a policy (or
your thoughts) on retiring tapes, i.e. after X number of read/writes or X
years service?

Looking forward to your responses

regards
Stephen Comiskey
Dublin, Ireland



__

Notice of Confidentiality

This transmission contains information that may be confidential and that may
also be privileged. Unless you are the intended recipient of the message (or
authorised to receive it for the intended recipient) you may not copy,
forward, or otherwise use it, or disclose it or its contents to anyone else.
If you have received this transmission in error please notify us immediately
and delete it from your system.

Email: [EMAIL PROTECTED]


AW: Select for finding if a file on tape already has a copy in a copypool

2004-03-10 Thread Salak Juraj
Hi,
would not 
backup stg from-pool to-pool PREVIEW=YES
do?
Juraj


-Original Message-
From: PAC Brion Arnaud [mailto:[EMAIL PROTECTED]
Sent: Wednesday, March 10, 2004 14:20
To: [EMAIL PROTECTED]
Subject: Select for finding if a file on tape already has a copy in a
copypool


Hi all,

I'm trying to build a query which could estimate the remaining time
before my copy stgpool jobs end. A first step would be to find out
which files still have no copy in copypools. Therefore I would like to
know in which table and field name I could find this information. It must
exist somewhere, because the command q con xxx copied=no gives the
result.
Table named contents has following fields:
VOLUME_NAME 
NODE_NAME 
TYPE 
FILESPACE_NAME 
FILE_NAME 
AGGREGATED 
FILE_SIZE 
SEGMENT 
CACHED 
FILESPACE_ID 
FILESPACE_HEXNAME 
FILE_HEXNAME 
But which one means "has a copy"? Or is it anywhere else?
Thanks for your help!
 Arnaud Brion

***
Panalpina Management Ltd., Basle, Switzerland, CIT Department
Viadukstrasse 42, P.O. Box 4002 Basel/CH
Phone:  +41 (61) 226 11 11, FAX: +41 (61) 226 17 01
Direct: +41 (61) 226 19 78
e-mail: [EMAIL PROTECTED]
***


AW: Antwort: Restore Volume Problems

2004-03-10 Thread Salak Juraj
audit volume xxx fix=yes/no ??
Juraj

-Original Message-
From: Patrick Rainer [mailto:[EMAIL PROTECTED]
Sent: Wednesday, March 10, 2004 08:28
To: [EMAIL PROTECTED]
Subject: Antwort: Restore Volume Problems


If there is no copy of the volumedata stored on a copy storage pool, you
cannot restore the volume because there is no alternative data source.
Check the difference between your primary storage pool and your copy
storage pool.

Every time I have such a problem, I set the tape to unavailable and wait
till the tape gets less utilized.
Then I check the importance of the data left on the tape. If there are no
important files left, I simply delete the tape (with discarddata=yes).

If there are any other solutions, I also would appreciate to hear them!

Best regards

Patrick Rainer




David Benigni [EMAIL PROTECTED]
Sent by: ADSM: Dist Stor Manager [EMAIL PROTECTED]

10.03.2004 03:30
Please reply to
ADSM: Dist Stor Manager [EMAIL PROTECTED]


To
[EMAIL PROTECTED]
Cc

Subject
Restore Volume Problems






Recently I had a tape get destroyed in a tape drive.  I did the typical
procedure in the Tivoli Admin Manual to restore the volume.

All the tapes specified in the restore vol x prev=yes were brought
on site.  However, after we did the restore I got a warning ANR1256W
saying some files couldn't be restored.  Doing a restore vol x
prev=yes doesn't show any volumes that are needed to restore it.
Reclamation has not occurred since the tape was destroyed.  The tape is
beyond repair, and we can't do a move data from it.

Has anyone run across this?  Any recommendations?


TIA

Dave


AW: TSM Scheduling

2004-03-10 Thread Salak Juraj
Hi christo,

this is a question to improve my English:

 My backups are scheduled to start at 01:00, but didn't start till 02:41
 Monday & 01:21 today. Any idea why?

Does this say the schedule did run at 02:41 for sure,
or is that only a probable explanation?

regards
Juraj



-Original Message-
From: Christo Heuer [mailto:[EMAIL PROTECTED]
Sent: Wednesday, March 10, 2004 10:45
To: [EMAIL PROTECTED]
Subject: Re: TSM Scheduling


Simple answer - the fact that the schedule actually ran means there IS an
association - the fact that it does not run at the set time has to do with
the schedule randomization % that is set on the TSM server and the length of
the start-up window for the schedule for this client.
Simple example:
Schedule randomization=25%
Length of start-up window is 1 hour.
Client schedule is set at 01:00.

The randomization will cause the schedule to physically kick off anywhere
between 01:00 and 01:15. (25% of hour).
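The 01:15 upper bound follows directly from the percentage; a quick sketch of the arithmetic, with the window length and percentage taken from the example above:

```shell
# Randomization limits the start offset to a fraction of the startup window.
window_minutes=60       # length of the start-up window (01:00 - 02:00)
randomization_pct=25    # schedule randomization percentage
offset=$(( window_minutes * randomization_pct / 100 ))
echo "schedule starts between 01:00 and 01:$offset"
```

With a 2-hour window and the same 25% the offset would be 30 minutes, which is why the observed start times drift further from the scheduled time as the window grows.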

Hope this explains it.

Cheers
Christo

==
If it happens to me, I know I have forgotten again to define an association.

Check with Q EVE whether you really have your schedules scheduled.
If not, search for an error in your schedule/association definitions.
If yes, search for an error in Q ACTL and in dsmsched.log/dsmerror.log on
your clients - assuming your client tsm schedulers are running at all :-)

regards
Juraj


-Original Message-
From: VANDEMAN, MIKE (SBCSI) [mailto:[EMAIL PROTECTED]
Sent: Wednesday, March 10, 2004 02:31
To: [EMAIL PROTECTED]
Subject: TSM Scheduling


My backups are scheduled to start at 01:00, but didn't start till 02:41
Monday & 01:21 today. Any idea why?

Mike Vandeman
510-784-3172
UNIX SSS
 (888) 226-8649 - SSS Helpdesk



__

E-mail Disclaimer and Company Information

http://www.absa.co.za/ABSA/EMail_Disclaimer


AW: AW: TSM Scheduling - a lingusitic subthread

2004-03-10 Thread Salak Juraj
Hi christo,

I know very well how schedules work :-)
I asked you a linguistic question.
Let me try to be more precise:

My understanding of Mike´s sentences per se is:
the schedules did not run until 02:41.
The schedules may have run later,
but Mike is either not sure about it or not exact on this point.

Your understanding is: the schedules did run at 02:41.
Is this fully clear from Mike´s sentences?
Or is it a conclusion from the sentence as written PLUS
your assumption based on your general TSM knowledge,
while the English sentence alone,
without using TSM knowledge to transform syntax into semantics,
would result in my version?


:) Juraj


-Original Message-
From: Lawrence Clark [mailto:[EMAIL PROTECTED]
Sent: Wednesday, March 10, 2004 15:49
To: [EMAIL PROTECTED]
Subject: Re: AW: TSM Scheduling


Hi Juraj,

The backups will attempt to start anytime in the time set for duration.

So if you have your backups scheduled to start at 1AM, and a duration of 2
hours, the backups could start as late as 3AM. They don't have to complete
within the time of duration, just start.



AW: Client backup request form

2004-03-10 Thread Salak Juraj
Hi,
 
maybe not exactly what you need, but maybe it helps.

I did the following: a top-ten nodes listing available to everybody.
This creates some social pressure on high-end users,
and gives a good communication basis when hardware upgrades
of the TSM server are necessary ($$ influence even without chargeback).

My backup request form is merely a distillation of the TSM manual - of its
description of backup management class parameters,
simplified for non-TSM specialists and translated into German.
In a nutshell it says:

 I (the user) want to be able to restore any file in its state of last night.
I understand that leaving a file open during the night
will prohibit this ability.
If I delete a file, I want to be able to restore its last version even NNN
days later.
If I edit a file, I want to be able to restore up to NNN last versions for at
least NNN days after the last edit.
etc.
etc.
I understand I can have special long-term backup requests fulfilled by the
IT department, e.g. to
freeze a project in an important state. The offer is to have archives good
for 1, 3, 5 or 7 years.
I understand I am supposed to supply a description and path for each archive
and to keep track of them (description and path)
for purposes of later restore requests.
 
 
best regards
Juraj
 
 
 
 
 

-----Original Message-----
From: Shannon Bach [mailto:[EMAIL PROTECTED]
Sent: Wednesday, March 10, 2004 17:27
To: [EMAIL PROTECTED]
Subject: Client backup request form



We are a pretty small company with ITSM on our MVS mainframe.  Recently
we've had a huge increase in servers from all departments in the company.
Up until now the server administrators would just call me or email to create
a client node for them a day or two ahead of time.  Schedule details and
such get worked out later, after I keep bugging them for requirements.  Their
preference is: Keep Everything Forever! With this system in place I cannot
plan for future resources, and I also need to get the users thinking about
what they really need backed up versus not thinking about it at all.
I am going to put into place a Client Node request form with backup
requirements that will have to be filled out first.  I have tried to find a
good sample form or template for this backup request but have been unable to
do so.  Can anyone lead me into the right direction to find such a sample
document?  Looking back in this list just led to old manuals and links that
no longer exist.   We've never done chargebacks and probably never will, so
I won't be using it for that purpose. 

Thanks in advance,   
Shannon Bach


Madison Gas  Electric Co.
Operations Analyst - Data Center Services
e-mail [EMAIL PROTECTED] 


AW: Open File support issue with Client Version 5.2.2

2004-03-09 Thread Salak Juraj
include.fs C: fileleveltype=dynamic

You can search for it both in forum archives (2004) and in tsm docs
Juraj

-----Original Message-----
From: Jelf, Jim [mailto:[EMAIL PROTECTED]
Sent: Tuesday, March 9, 2004 14:57
To: [EMAIL PROTECTED]
Subject: Re: Open File support issue with Client Version 5.2.2


Thanks, I'm new to the forum and I haven't seen this one posted yet. My
command for placing the cache is listed below.

SNAPSHOTCACHELOCATION C:\tmp\

I'm not sure what command I need to use to exclude the C: drive from open
file support.

Thanks
Jim



-Original Message-
From: Salak Juraj [mailto:[EMAIL PROTECTED] 
Sent: Friday, March 05, 2004 2:17 AM
To: [EMAIL PROTECTED]
Subject: AW: Open File support issue with Client Version 5.2.2

Hi,
as already posted here,
you can place the cache on the C: drive
and exclude this drive from OFS:
include.fs C: fileleveltype=dynamic
regards
Juraj
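Put together, the relevant dsm.opt fragment might look like this (a sketch; the cache path is an example):

```
* Keep the snapshot cache on C:, and back up C: itself without
* open file support so the cache is not on the volume being snapshot
SNAPSHOTCACHELOCATION C:\tsmcache\
INCLUDE.FS C: FILELEVELTYPE=DYNAMIC
```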

-----Original Message-----
From: Jelf, Jim [mailto:[EMAIL PROTECTED]
Sent: Thursday, March 4, 2004 18:52
To: [EMAIL PROTECTED]
Subject: Open File support issue with Client Version 5.2.2


I'm having an issue where we've installed Open File Support on our clients.
The error message I receive is:



The snapshot cache location cannot be located on the same volume that is
being backed up.



In this particular case, this system is RAID 5 controlled with 3 partitions.
They are C, D, and E. So I'm not sure if it's telling me that I need to put
this on a totally separate physical drive or create an F partition
specifically for snapshots.



This is on a Windows 2000 Server platform running TSM client 5.2.2.



I appreciate any info that you might have on this.



Thanks,

Jim Jelf

Sr. Systems Administrator

Superior Consultant


AW: FUJI Media for LTO-2

2004-03-09 Thread Salak Juraj
I have been using 30 Fuji tapes for a year now - no trouble yet.

juraj


-----Original Message-----
From: Matthias Feyerabend [mailto:[EMAIL PROTECTED]
Sent: Tuesday, March 9, 2004 18:22
To: [EMAIL PROTECTED]
Subject: FUJI Media for LTO-2


Hello,

prices for FUJI LTO-2 tapes are almost 20 percent lower (12 EUR less)
than IBM tapes.

And since people also say IBM tapes are rebranded FUJI ones, who can resist
buying the cheaper ones?

We plan to do that and buy FUJI for our IBM LTO-2 FC drives - unless
somebody here has had very bad experiences with FUJI.

What do you think?

--
--
Matthias Feyerabend | [EMAIL PROTECTED]
Gesellschaft fuer Schwerionenforschung  | phone +49-6159-71-2519
Planckstr. 1| privat +49-6151-718781
D-62291 Darmstadt   | fax   +49-6159-71-2519


AW: Open File support issue with Client Version 5.2.2

2004-03-05 Thread Salak Juraj
Hi,
as already posted here,
you can place the cache on the C: drive
and exclude this drive from OFS:
include.fs C: fileleveltype=dynamic
regards
Juraj

-----Original Message-----
From: Jelf, Jim [mailto:[EMAIL PROTECTED]
Sent: Thursday, March 4, 2004 18:52
To: [EMAIL PROTECTED]
Subject: Open File support issue with Client Version 5.2.2


I'm having an issue where we've installed Open File Support on our clients.
The error message I receive is:



The snapshot cache location cannot be located on the same volume that is
being backed up.



In this particular case, this system is RAID 5 controlled with 3 partitions.
They are C, D, and E. So I'm not sure if it's telling me that I need to put
this on a totally separate physical drive or create an F partition
specifically for snapshots.



This is on a Windows 2000 Server platform running TSM client 5.2.2.



I appreciate any info that you might have on this.



Thanks,

Jim Jelf

Sr. Systems Administrator

Superior Consultant


AW: is there a way to.....

2004-03-05 Thread Salak Juraj
Hi, I am afraid that is wrong.
I do not know of any way to store/restore only the security information apart
from using special tools.

The one I used to use can be found at www.lanicu.com and could help you,
if it is worth investing a couple of hours (download, read the manual,
install, test, use). As far as I remember there used to be a trial version
downloadable - fully functional, time restricted.

The scenario would look like this:
1) Restore your files(!) along with their security information
to another path or another server in the same domain (this is critical).
2) Run the iculan tool ("intensive care", I believe)
to save the security information of the restored files.
It is saved in a plain text file.
3) Delete all those restored files; you won't need them any more.
4) Edit the paths in the text file so that they point to your original
path.
5) Run the iculan tool to apply the security information from the text file
to your original files.

This will NOT save you from restoring all the files this time,
but you can work while others are using your file server.
Whether this suits you or not, only you can decide.

And if you are afraid this situation can repeat,
you could use iculan
to save the complete domain security information to text files daily,
and thus be ready for the next disaster - then without any need to restore
files.

Not easy, but once in use it works like a breeze - you decide
whether it is worth the time and money.

No, I am not associated with the vendor in any way;
I was simply happy with the tool at my previous company.

regards
Juraj
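Step 4 of the scenario above (editing the paths in the saved security file) is easy to script. A minimal sketch, assuming a hypothetical one-record-per-line format of `path<TAB>acl-data` (the real iculan file format may differ):

```python
def rewrite_acl_paths(lines, old_prefix, new_prefix):
    # Point each saved-ACL record back at the original location.
    out = []
    for line in lines:
        path, sep, acl = line.partition("\t")
        if path.startswith(old_prefix):
            path = new_prefix + path[len(old_prefix):]
        out.append(path + sep + acl)
    return out

records = ["D:\\restored\\finance\\plan.xls\tACL:owner=DOMAIN\\alice"]
print(rewrite_acl_paths(records, "D:\\restored", "E:\\data"))
# ['E:\\data\\finance\\plan.xls\tACL:owner=DOMAIN\\alice']
```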





-----Original Message-----
From: Christian Svensson [mailto:[EMAIL PROTECTED]
Sent: Friday, March 5, 2004 13:21
To: [EMAIL PROTECTED]
Subject: SV: is there a way to.


Hi Chris!
I don't know if I'm correct, but doesn't all Windows security live in the
System State?
That means NTFS permissions, user accounts, groups, domain settings, etc.
You may need to confirm that with a Windows guy, but if it's true, just
restore the System State and you have the permissions back on your files and
directories.

/Christian

-----Original Message-----
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of i love
tsm
Sent: March 5, 2004 12:15
To: [EMAIL PROTECTED]
Subject: is there a way to.


Hi All

I want to know whether there is a way to restore only NTFS file
permissions on Win2K.  We are running TSM server 5.2.2.1 and 5.2 clients.

The reason for this is that last night during some maintenance, a windows
drive (all directories and files) lost all its permissions. The directories
and files are still there so all I want to restore is the permissions...

Anyone know of a way to do this with TSM??

Cheers

Chris



AW: Performing Full Client Backups with TSM

2004-03-03 Thread Salak Juraj
This is an evergreen.

Your customers tell you HOW you should serve them,
and you are only expected to implement their solution in a tool.

1) You can accept it and let them do:

a) monthly ARCHIVEs using a management class MONTHLYARCHIVES (to be defined
by you)
b) weekly ARCHIVEs using a management class WEEKLYARCHIVES (to be defined
by you)
c) daily INCREMENTALs using the DEFAULT management class.

Everybody will be happy, but the capabilities of TSM will not be used
optimally:
you will consume more bandwidth and more tapes than necessary.


2) If you want your customer to invest his $ optimally,
ask him to tell you WHAT he needs
rather than HOW to achieve it.
I strongly suggest a definition in terms of required RESTORE capabilities.

e.g. (customer:)

a) I need to be able to restore
any version of any file as it existed on the file server
during the last 30 days. This applies to all files
which still exist on the file server.
If a file changes several times a day, I am happy with the last
version of the day, the one available for backup at night.

b) If a file is deleted on the file server,
it is enough to be able to restore the last and last-but-one versions of it,
and I must be able to do so during the week following the deletion date.
Further, the very last version of the deleted file has to be restorable
for an extra 2 months.

c) Further, I want to be able to restore all files from the
directories /finance and /legal in the very same state
they were in on the first Monday of each month, for the last 7 years.

d) I do not care about restoration of the operating system and applications;
I will re-create them by installation instead.


e) Media fault:
I want to be able to restore all files (a+b) even if a single tape
is destroyed. The maximum restore time penalty in that case is 2 hours.
Long-term backups (c) have to tolerate even 2 destroyed tapes;
the time penalty allowed in this case is 2 days.


f) In case of fire in one building it is OK to lose either the original files
on the file servers or the backup data (a+b), but not both.
It is not OK to lose the long-term data (c) in a fire in a single building.
In case of fire it is tolerable not to be able to restore last night's
backups, but the backups from two nights ago must be available at least.

g) I understand (or: I do not want that) that I will need TSM 7 years from
now in order to be able to restore 7-year-old files.


h) Some restore time requirements (a single file restored typically within
minutes, 1 GB within an hour).


That said, there is one additional requirement:
in case (2), you are expected to understand the concepts of TSM!


best regards
Juraj
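Option (1) could be set up with an admin macro along these lines (a sketch; the domain, policy set, destination pool, and retention values are placeholders):

```
def mgmtclass standard standard monthlyarchives
def copygroup standard standard monthlyarchives type=archive -
  destination=tapepool retver=365
def mgmtclass standard standard weeklyarchives
def copygroup standard standard weeklyarchives type=archive -
  destination=tapepool retver=90
validate policyset standard standard
activate policyset standard standard
```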








-----Original Message-----
From: Karla Ross [mailto:[EMAIL PROTECTED]
Sent: Tuesday, March 2, 2004 20:11
To: [EMAIL PROTECTED]
Subject: Performing Full Client Backups with TSM


I read about this process on the IBM web page. My question is: my customer
wants different retentions on weekly full backups, monthly fulls, and daily
incrementals.  What's the recommended way to do this with TSM?

Karla Ross


AW:

2004-03-03 Thread Salak Juraj
Abdullah,

this is the way TSM works.
Old versions of files are expired, leaving holes on the tapes.
Look at A00007 again maybe a week later; it will not be 100% utilized any more.
Once there are too many holes on a tape it will be reclaimed
(the remaining data copied to another tape) so that it becomes entirely
free for new backups.

TSM does the job many other programs require YOU to do,
but in order to operate it successfully, you *must*
understand it - you must read the Administrator's Guide.

regards
Juraj
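The commands to watch and control this are roughly (a sketch; the 60% threshold is only an example):

```
q vol stgpool=3584DB f=d        /* shows "Pct. Reclaimable Space" */
upd stgpool 3584DB reclaim=60   /* reclaim tapes >60% reclaimable */
```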




-----Original Message-----
From: Abdullah, Md-Zaini B BSP-IMI/231
[mailto:[EMAIL PROTECTED]
Sent: Wednesday, March 3, 2004 08:57
To: [EMAIL PROTECTED]
Subject: 
Importance: High


Hello TSM gurus,
I am using TSM 5.2, level 0.0, running on AIX 5.2.
I would like to know why some of the volumes have status FULL even though
PCT.UTIL
is not 100 percent. I used the command: q vol stgpool=3584DB.

Any idea ? 


Volume Name   Storage Pool   Device Class   Estimated       Pct     Volume
              Name           Name           Capacity (MB)   Util    Status
-----------   ------------   ------------   -------------   -----   -------
A00007        3584DB         3584DEV        202,546.6       100.0   Full
A00011        3584DB         3584DEV        202,374.8        99.8   Full
A00013        3584DB         3584DEV        202,374.0        76.6   Full
A00031        3584DB         3584DEV        202,217.5       100.0   Full
A00033        3584DB         3584DEV        204,800.0        14.6   Filling

Regards,
zain


AW: Antwort: Re: 2 LTO-Drives Load-Balancing

2004-03-03 Thread Salak Juraj
Hello Patrick,

as far as I know, TSM maintains a single I/O queue for each volume.
That said, with 2 migration processes running concurrently
you will have up to 2 tape I/Os at one time (one per drive),
thus allowing maximal throughput - if the rest of the system
allows :-)

As for how well this works for you, more must be considered:
certain LTO drives offer a slowed-down streaming mode.
I have heard rumours that in HP's case this is achieved by writing additional
zeros or tape spacing to the tape (and ignoring them when reading),
while some other drives are supposedly able to really slow down the
tape movement. Consult your manufacturer.

How much can your disks deliver?
Apart from the fact that you have apparently stolen
that random-access device from a museum :))),
there is a question: 20 MB per second per what?
Per disk spindle - and you have more spindles?
If yes, you have won.
Split your disk storage pool
into more volumes, one per disk spindle.
Since TSM maintains one I/O per volume,
you will often have 2 I/Os on 2 disks in parallel,
one per spindle, thus almost doubling your overall disk
transfer speed.
But if 20 MB/s is the speed of your
SCSI controller, or of your (single) RAID array,
then you have either an investment or a performance problem.


Another point: if you have many small files,
your overall disk transfer speed will drop even further,
as TSM has to group small files before sending a packet
to tape. This involves database searches and disk-head repositioning
in your disk storage pools.
In that case, the access speed of the DB and the transfer speed
of the log can be very important as well,
as can the amount of RAM.

regards
juraj
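One pre-formatted pool volume per spindle, as suggested above, might be defined like this (a sketch; the pool name, paths, and size are placeholders):

```
def vol diskpool /disk1/stgvol01.dsm formatsize=4096
def vol diskpool /disk2/stgvol02.dsm formatsize=4096
```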






-----Original Message-----
From: Patrick Rainer [mailto:[EMAIL PROTECTED]
Sent: Wednesday, March 3, 2004 15:36
To: [EMAIL PROTECTED]
Subject: Antwort: Re: 2 LTO-Drives Load-Balancing


Hello again

Sorry - maybe my question can be misinterpreted.
For example:

I have two migration processes from a disk stgpool to the tape drives.
My disk drives deliver about 20MB/s.
The question:
Is the transfer rate of the disk drives split 50/50 between the tape drives?
(10MB/s per drive, which also means 50% per process)
Or does one drive use the full transfer rate possible for one drive
(15MB/s) while the other drive only gets the rest (5MB/s)?

My problem: 5MB/s means permanent stop-and-go mode for one drive.
I would like to hear that the TSM server splits up the transfer rate, meaning
each migration process/job uses 10MB/s.

Thank you.

Best regards

Patrick



Karel Bos [EMAIL PROTECTED]
Sent by: ADSM: Dist Stor Manager [EMAIL PROTECTED]

03.03.2004 15:04
Please respond to
ADSM: Dist Stor Manager [EMAIL PROTECTED]


To
[EMAIL PROTECTED]
cc

Subject
Re: 2 LTO-Drives Load-Balancing






TSM is not load-balancing on a drive level. TSM will use multiple drives on
a job level.

Regards,

Karel

-----Original Message-----
From: Patrick Rainer [mailto:[EMAIL PROTECTED]
Sent: Wednesday, March 3, 2004 14:41
To: [EMAIL PROTECTED]
Subject: 2 LTO-Drives Load-Balancing


Hello TSM users, a.a.o.o


I searched the mailing-list archive and the discussion forum, but did not
find a solution to my problem.
We are using TSM 5.2.2.1 for library evaluation. We are testing a HP
MSL5030 with two LTO1 drives.
Our disks provide a datarate of about 20MB/s.
LTO1 supports a datarate of maximum 15MB/s and starts STOP-and-GO-Mode at
about 7MB/s.
My question:
Is the TSM-Server balancing the load of the two drives?
That means: Every drive gets about 10MB/s
Or gets one drive 15 MB/s and the other only 5 MB/s?

We tried to analyze this load, but it isn't that simple. The only way is
to calculate the load from the transferred bytes and the elapsed time.
How is it possible to monitor the load on one drive?
(HP told us we should check whether the LEDs are blinking slowly or fast ?!?)

Any ideas??


Best regards

Patrick


AW: Great difference in Tape usage between Netware and Windows!?

2004-03-03 Thread Salak Juraj
Hello,

turn TSM compression on for a while (in contrast to the hardware tape
compression you are probably using now).
This will not change things, but it gives you statistics on
how well your data compress during backup.
Your hardware compression does a similar thing with similar results.

Probably you have files on NetWare which compress
very well, maybe lightly used database files.

If your boss does not believe TSM's statistics,
take WinZip or something like it and compress 1 GB of typical
data on both NetWare and Windows and compare the resulting file sizes.


Turn the argument around to the fact that your users on the NetWare server
keep tons of redundant data, most likely consisting of
two-thirds spaces only :-)

:-)) juraj
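The client option for that experiment is a single line in dsm.opt; the compression ratio then shows up in the client's backup statistics:

```
* client-side compression, just to gather compression statistics
COMPRESSION YES
```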



-----Original Message-----
From: anton walde [mailto:[EMAIL PROTECTED]
Sent: Wednesday, March 3, 2004 16:00
To: [EMAIL PROTECTED]
Subject: Great difference in Tape usage between Netware and Windows!?
Betreff: Great difference in Tape usage between Netware and Windows!?


Hello.

I have a question concerning tape usage:

we have two sites, one using Windows and one using NetWare. We do the same
at both sites: incremental forever, no special tasks like archive or
anything like that.

Now we see the following:

while the NetWare site fills its LTO1 tapes with 180-300 GB per tape, the
Windows site fills the same tapes with only 120-140 GB per tape.

Where does this great difference come from? Is compression part of
the problem? Or is it a matter of what TSM reports versus what is really
there?

Please help me - my boss is going to kill me, because we need tape after tape
for 'no data' while for the big amounts we need none!!!



AW:

2004-03-03 Thread Salak Juraj
you are welcome!

-----Original Message-----
From: Abdullah, Md-Zaini B BSP-IMI/231
[mailto:[EMAIL PROTECTED]
Sent: Wednesday, March 3, 2004 09:59
To: [EMAIL PROTECTED]
Subject: 


Juraj,
Thank you

-Original Message-
From: Salak Juraj [mailto:[EMAIL PROTECTED]
Sent: 03 March 2004 16:49
To: [EMAIL PROTECTED]
Subject: AW:


Abdullah,

this is the way TSM works.
Old versions of files are expired, leaving holes on the tapes.
Look at A00007 again maybe a week later; it will not be 100% utilized any more.
Once there are too many holes on a tape it will be reclaimed
(the remaining data copied to another tape) so that it becomes entirely
free for new backups.

TSM does the job many other programs require YOU to do,
but in order to operate it successfully, you *must*
understand it - you must read the Administrator's Guide.

regards
Juraj




-----Original Message-----
From: Abdullah, Md-Zaini B BSP-IMI/231
[mailto:[EMAIL PROTECTED]
Sent: Wednesday, March 3, 2004 08:57
To: [EMAIL PROTECTED]
Subject: 
Importance: High


Hello TSM gurus,
I am using TSM 5.2, level 0.0, running on AIX 5.2.
I would like to know why some of the volumes have status FULL even though
PCT.UTIL
is not 100 percent. I used the command: q vol stgpool=3584DB.

Any idea ? 


Volume Name   Storage Pool   Device Class   Estimated       Pct     Volume
              Name           Name           Capacity (MB)   Util    Status
-----------   ------------   ------------   -------------   -----   -------
A00007        3584DB         3584DEV        202,546.6       100.0   Full
A00011        3584DB         3584DEV        202,374.8        99.8   Full
A00013        3584DB         3584DEV        202,374.0        76.6   Full
A00031        3584DB         3584DEV        202,217.5       100.0   Full
A00033        3584DB         3584DEV        204,800.0        14.6   Filling

Regards,
zain


AW: File size limitation question

2004-03-02 Thread Salak Juraj
 How does TSM handle files that are larger than the tape size?
From my observation, files are split across as many volumes as needed;
this is probably true not just for tapes but for all device classes.
I have already had files split over 3 DLT tapes, and it is very common
to have files split over 2 tapes.

It looks like you have a different problem, maybe with the tape drives or
drivers. You may want to turn TSM tracing on to learn in detail about the
tape I/Os causing ANR8301E.

Juraj



-----Original Message-----
From: French, Michael [mailto:[EMAIL PROTECTED]
Sent: Tuesday, March 2, 2004 09:06
To: [EMAIL PROTECTED]
Subject: File size limitation question


How does TSM handle files that are larger than the tape size?  I have a 240GB
DB2 dump file that I am backing up as a flat file.  I can get it
into the disk pool without a problem, but I can't seem to get it to migrate
to tape: a few small files move, then it seems to work on the large file
for an hour or two, and then it spits the tape out without moving any data.
A little info about my setup:

TSM 5.2.2 on Solaris 8
IBM 3494 Library with 3590E tape drives

I am using IBM K tapes that are 80GB uncompressed and somewhere around
120-140GB compressed.  From the actlog:

03/02/04   07:12:48  ANR8341I End-of-volume reached for 3590 volume
2C0448.
  (PROCESS: 4)
03/02/04   07:12:49  ANR0515I Process 4 closed volume 2C0448. (PROCESS:
4)
03/02/04   07:12:49  ANR8301E I/O error on library IDS02ATL1
(OP=004C6D31,
  SENSE=00.00.00.67). (PROCESS: 4)
03/02/04   07:12:50  ANR8945W Scratch volume mount failed . (PROCESS: 4)
03/02/04   07:12:50  ANR1405W Scratch volume mount request denied - no
scratch
  volume available. (PROCESS: 4)
03/02/04   07:12:50  ANR8301E I/O error on library IDS02ATL1
(OP=004C6D31,
  SENSE=00.00.00.67). (PROCESS: 4)
03/02/04   07:12:51  ANR8945W Scratch volume mount failed . (PROCESS: 4)
03/02/04   07:12:51  ANR1405W Scratch volume mount request denied - no
scratch
  volume available. (PROCESS: 4)
03/02/04   07:12:52  ANR8301E I/O error on library IDS02ATL1
(OP=004C6D31,
  SENSE=00.00.00.67). (PROCESS: 4)
03/02/04   07:12:52  ANR8945W Scratch volume mount failed . (PROCESS: 4)
03/02/04   07:12:52  ANR1405W Scratch volume mount request denied - no
scratch
  volume available. (PROCESS: 4)
03/02/04   07:12:52  ANR8336I Verifying label of 3590 volume 2C0448 in
drive
  1ST (/dev/rmt/1st). (PROCESS: 4)
03/02/04   07:12:53  ANR8301E I/O error on library IDS02ATL1
(OP=004C6D31,
  SENSE=00.00.00.67). (PROCESS: 4)
03/02/04   07:12:53  ANR8945W Scratch volume mount failed . (PROCESS: 4)
03/02/04   07:12:53  ANR1405W Scratch volume mount request denied - no
scratch
  volume available. (PROCESS: 4)
03/02/04   07:12:54  ANR8301E I/O error on library IDS02ATL1
(OP=004C6D31,
  SENSE=00.00.00.67). (PROCESS: 4)
03/02/04   07:12:54  ANR8945W Scratch volume mount failed . (PROCESS: 4)
03/02/04   07:12:54  ANR1405W Scratch volume mount request denied - no
scratch


I added a few scratch tapes and it just seems to go through the same
process:

1. Grab the scratch and load it for migration
2.  Write to it for awhile:
   5 Migration   Disk Storage Pool BACKUPPOOL, Moved Files: 19,
                 Moved Bytes: 20,480, Unreadable Files: 0,
                 Unreadable Bytes: 0. Current Physical File
                 (bytes): 211,758,833,664 Current output volume: 2C0448.
3.  Error out with ANR8301E I/O error on library IDS02ATL1
(OP=004C6D31,SENSE=00.00.00.67). 
4.  Remove the tape from the prime pool and back to scratch
5.  Repeat the cycle.

This is a new server that I have set up, so I might have something configured
wrong, though I didn't have any problems running a DB backup to tape.

Michael French
Savvis Communications
IDS01 Santa Clara, CA
(408)450-7812 -- desk
(408)239-9913 -- mobile
 


AW: TSM DB cleanup

2004-03-02 Thread Salak Juraj
1) halt the server
2) dsmserv auditdb

regards
juraj
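Spelled out (a sketch - take a full database backup first, and note that an audit of a large DB can run for many hours):

```
halt                     (from an admin session; the server must be down)
dsmserv auditdb fix=no   (offline consistency check; fix=yes repairs)
```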


-----Original Message-----
From: Levi, Ralph [mailto:[EMAIL PROTECTED]
Sent: Tuesday, March 2, 2004 16:09
To: [EMAIL PROTECTED]
Subject: TSM DB cleanup


I am running TSM 4.2.1 (upgrading in a month after our next disaster
recovery test) and noticed that our DB is now 95% full.  I don't want to
have to expand it before the upgrade.  Is there a utility that can be
run to make sure it is clean ?

Thanks,
Ralph


AW: archive to tape ???

2004-03-02 Thread Salak Juraj
Hi,

 But, what about my idea to _archive_ from the disk array to tape?  Is
 that not doable?  What are the flaws in this idea?  Comments?

What do you mean by archiving from disks?

What is it you do not like about backup storage pools?
As long as you can afford to move tapes physically offsite,
this works great.


I would suggest you describe less of what others suggested
and more of what your business needs are:

- what is your catastrophic failure? (TSM failure? OS failure?
Total TSM HW failure? Whole server room destroyed but network
still working? Whole server room, including routers, switches, and
cables, burned down?
Whole building burned out, including all safes and their contents?
Both primary and secondary sites burned down - the twin towers case?)

- what must work how soon after the catastrophic failure?
Is it enough for the TSM server to be available 6 hours after the failure?
Or must all restores of all file servers, totaling a
million files and 1 TB, be done 6 hours
after the failure?

- how old may the restored data be? A day? Two? A week? An hour?

- can you afford to move tapes physically offsite? How often?

- what online connection do you have to your second site?

- what TSM hardware do you currently have?




That kind of information will make it easier for forum members to give you
sound advice.


best regards
juraj




-----Original Message-----
From: Michael D Schleif [mailto:[EMAIL PROTECTED]
Sent: Tuesday, March 2, 2004 15:45
To: [EMAIL PROTECTED]
Subject: Re: archive to tape ???


* Steve Harris [EMAIL PROTECTED] [2004:03:02:16:18:51+1000]
scribed:
 Weird requirement.

Yes.

 Not something that I'd recommend. And I don't see the logic in having
 only part of the data, but it's an intellectual challenge as to how
 this can be done.

Their design is a bit more complex than I originally posted.  They have
a second data center (DC), and there, a second TSM using a second disk
array.
TSM#1 in the main DC#1 is supposed to replicate itself in TSM#2 at DC#2.
DC#2 is supposed to house failover servers for all critical servers at
DC#1.  In the event of catastrophic failure at DC#1, TSM#2 (and DRM#2?)
are supposed to recover to these failover servers at DC#2, and all will
be back online in a few hours.  I am not yet privy to the reality of
this setup, and I do not believe that this is fully functional as I
write this; but, that is their idea.

Also, they have already spent a lot of money, and a parade of consultants
precedes me.  They need to minimize the cost of whatever they do that they
are not already doing.  I hope to demonstrate my value by implementing
a sound, simple, and inexpensive tape solution -- then I may have an
opportunity to get them to question their overall strategy.

 Try this
 
 Set up a random diskpool big enough to hold one night's backup.  Point the
backup at this.
 Set up a "main" sequential file diskpool. Make this the nextstgpool of the
nightly pool, with manually controlled migration between the two.
 Each day, run a backup stgpool from the nightly pool to the tape pool and
send the tapes offsite.  Then migrate the nightly pool to the main pool.
 Script a tape-return process keyed on the state and update date of the
drmedia table.
 When the tapes come back, run a delete volume discarddata=yes on them.
snip /
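The quoted workflow maps onto admin commands roughly like this (a sketch; the pool and volume names are hypothetical):

```
backup stgpool nightlypool offsitepool   /* copy last night to tape       */
migrate stgpool nightlypool lowmig=0     /* drain into the main pool      */
delete volume XX0001 discarddata=yes     /* returned tape, back to scratch */
```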

OK.  Thank you for your ideas.

But, what about my idea to _archive_ from the disk array to tape?  Is
that not doable?  What are the flaws in this idea?  Comments?


  [EMAIL PROTECTED] 02/03/2004 13:05:26 
snip /

 The client says that they want to copy daily to tape only the most
 recent version of files that have changed since previous day.
 
 They will accept copy daily to tape all most recent file versions.
 
 Each morning, those tapes last written will be taken offsite, and tapes
 from seven (7) days ago brought back onsite and available.
 
 Furthermore, there are two (2) offsite locations, one for Windows
 platforms, and one for *NIX platforms.
 
 
 I am thinking that this can be accomplished by _archiving_ from the
 arrays to tape.  I am not clear how to specify policy.  Any ideas?
snip /

-- 
Best Regards,

mds
mds resource
877.596.8237
-
Dare to fix things before they break . . .
-
Our capacity for understanding is inversely proportional to how much
we think we know.  The more I know, the more I know I don't know . . .
--


AW: AW: TSM DB cleanup

2004-03-02 Thread Salak Juraj
Glen, your points are sound.

I still see it in a more differentiated way.

About 4 years ago I ran an audit just on a whim,
just as Ralph wants to.
It found a dormant inconsistency.
This resulted in more and more work, in an opened PMR,
even in sending a DB dump to the Tivoli labs
... only to figure out that my hardware was faulty
and I had to replace it.

In detail, both the interrupt controller on the motherboard
and the RAID controller occasionally had problems
handling shared interrupts, causing SCSI disk writes
to put wrong data(!) on the disks. RAID 5 was of no help.
This error happened only once a week or so, causing faulty DB writes
to happen maybe quarterly.
Needless to say, a few files in the disk storage pools had bit errors as well.
And yes, the server and all the components used
were supposed to be of high quality.

Well, this is like winning first prize in an anti-LOTTO
(is LOTTO known in the US? you win if you guess 5 correctly out of 45),
but still I cannot be the only one who has had such problems -
think about the CRC options added to TSM some time later.
So if one can afford the time to run an audit, maybe with fix=no,
it will do no harm at all (except for irritating myriads upon myriads of
electrons) and will tell him whether everything is OK.

This is my personal experience -
there is a difference between believing
everything is OK and knowing it.

Maybe weird? I do not care.

Juraj Salak








-----Original Message-----
From: Glen Hattrup [mailto:[EMAIL PROTECTED]
Sent: Tuesday, March 2, 2004 18:10
To: [EMAIL PROTECTED]
Subject: Re: AW: TSM DB cleanup


Ralph,

Without knowing why or what your exact concerns are with the TSM database, 
it is hard to give you generic advice to follow.

Unless you are experiencing internal errors, diagnostic messages, or other 
oddities that clearly indicate there is something wrong with your TSM 
server, you do NOT need to do anything specific to 'verify' the database 
before upgrading your system.

An Unload / Load would only reorganize your database.  It does not 'clean' 
your database or verify that your database is 'clean'.  The Unload / Load 
process has more of an effect on performance than space allocation. Search 
the list archives and you'll find differing opinions regarding short and 
long term benefit of an unload / load process.  I don't think it's needed 
in your case.

Please do NOT audit your database as was suggested below.  Depending upon 
the size of your database, you may end up wasting a significant amount of 
time (days, weeks?).  DB Audit is a 'computationally intensive' process 
(to put it mildly) and it is not clear that you'll see any benefit from 
doing so.   There are very few cases where the blanket response of "audit 
your database" is appropriate.

There are some known problems during upgrade from 4.2.x that depend upon 
your system and backup environment.  The monthly FAQ posts answers to the 
general "how do I upgrade" question, and the list archive is another 
excellent source to search.  Always read the READMEs, and always take a 
full database backup before upgrading your TSM environment.

Glen Hattrup
IBM - Tivoli Systems
Tivoli Storage Manager




Salak Juraj [EMAIL PROTECTED] 
Sent by: ADSM: Dist Stor Manager [EMAIL PROTECTED]
03/02/2004 09:44 AM
Please respond to
ADSM: Dist Stor Manager


To
[EMAIL PROTECTED]
cc

Subject
AW: TSM DB cleanup






1) halt
2) dsmserv auditdb

regards
juraj


-Original Message-
From: Levi, Ralph [mailto:[EMAIL PROTECTED]
Sent: Tuesday, 2 March 2004 16:09
To: [EMAIL PROTECTED]
Subject: TSM DB cleanup


I am running TSM 4.2.1 (upgrading in a month after our next disaster
recovery test) and noticed that our DB is now 95% full.  I don't want to
have to expand it before the upgrade.  Is there a utility that can be
run to make sure it is clean?

Thanks,
Ralph


AW: data unavailable to server/insufficient mount points

2004-03-01 Thread Salak Juraj
plus 
Q DEVC
to check mountlimit
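For example (the device class name is only an illustration), from an
administrative client:

   query devclass LTOCLASS format=detailed
   update devclass LTOCLASS mountlimit=DRIVES

MOUNTLIMIT=DRIVES lets the device class use as many mount points as there
are usable drives in the library, instead of a fixed number.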

Juraj

-Original Message-
From: goran [mailto:[EMAIL PROTECTED]
Sent: Monday, 1 March 2004 11:24
To: [EMAIL PROTECTED]
Subject: Re: data unavailable to server/insufficient mount points


q mount, to see the situation in the library


- Original Message -
From: Klaas Talsma - KTL [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Monday, March 01, 2004 11:18 AM
Subject: data unavailable to server/insufficient mount points


Hi All,

A customer has the following problem with a TSM restore of a
client.

In the GUI: data unavailable to server; activity log messages are shown
below.
There seem to be no volumes unavailable and the drives are online.

Would increasing the Maximum Mount Points Allowed value help?

What else can I check?

Thanks in advance.



03/01/04   09:09:35  ANR0403I Session 5631 ended for node FS-WEERT-GEWIS (WinNT).
03/01/04   09:09:42  ANE4035W (Session: 5622, Node: FS-WEERT-3)  Error processing 'VOL2:/APPS/fireman441/*': file currently unavailable on server.
03/01/04   09:09:57  ANR0567W Retrieve or restore failed for session 5622 for node FS-WEERT-3 (NetWare) - insufficient mount points available to satisfy the request.
03/01/04   09:09:58  ANR0406I Session 5632 started for node FS-WEERT-3 (NetWare) (Tcp/Ip 10.254.201.92(1438)).
03/01/04   09:09:58  ANR0403I Session 5632 ended for node FS-WEERT-3 (NetWare).
03/01/04   09:10:43  ANE4035W (Session: 5622, Node: FS-WEERT-3)  Error processing 'VOL2:/APPS/fireman441/*': file currently unavailable on server.
03/01/04   09:10:43  ANR0403I Session 5622 ended for node FS-WEERT-3 (NetWare).
03/01/04   09:17:26  ANR0406I Session 5633 started for node FS-WEERT-3 (NetWare) (Tcp/Ip 10.254.201.92(1448)).
03/01/04   09:20:30  ANR0567W Retrieve or restore failed for session 5633 for node FS-WEERT-3 (NetWare) - insufficient mount points available to satisfy the request.
03/01/04   09:20:31  ANR0406I Session 5634 started for node FS-WEERT-3 (NetWare) (Tcp/Ip 10.254.201.92(1454)).
03/01/04   09:20:31  ANR0403I Session 5634 ended for node FS-WEERT-3 (NetWare).
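Given the ANR0567W messages above, one more thing worth checking, besides
q mount and the device class mount limit, is the node's own mount point
allowance (node name taken from the log; the value 2 is only an example):

   query node FS-WEERT-3 format=detailed
   update node FS-WEERT-3 maxnummp=2

MAXNUMMP caps how many mount points the server will grant to that node's
sessions, so a low value there could plausibly produce this kind of failure
during a multi-volume restore.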


--
Bell Microproducts Europe BV

This message is intended only for the use of the person(s) ("the intended
recipient(s)") to whom it is addressed.  It may contain information which
is privileged and confidential within the meaning of applicable law.  If you
are not the intended recipient, please contact the sender as soon as
possible.  The views expressed in this communication may not necessarily be
the views held by Bell Microproducts Europe BV.


AW: Strange RETRIEVE behaviour

2004-02-27 Thread Salak Juraj
Hi,

I allow myself to disagree with the "incongruous with" statement.

In fact, since its beginnings ADSM/TSM has learned
some new functionality going in the direction of higher-level
naming of backups and handling one backup instance as a single entity:
most notably virtualmountpoint, backup sets, and UNC naming support,
and to some extent various improvements in archive/retrieve,
the introduction of point-in-time restore,
and the introduction of backup groups.
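As a reminder, virtualmountpoint is an ordinary client option; on a Unix
client it goes into dsm.sys, roughly like this (the path is only an example):

   VIRTUALMOUNTPOINT /home/projects

which makes /home/projects a file space of its own on the server, so its
backups get a stable higher-level name independent of the surrounding
file system.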

These are major paradigm changes compared to ADSM v1.x,
where in many cases the TSM user had to cope with single datasets
(for those who do not know
the old ADSM: these were simply single versions of unique files).

I can imagine many ways to extend TSM functionality so as
to simplify the handling of file resources
being moved to new locations,
and to simplify organisational issues
with medium-to-long-term backups spanning staff changes,
while keeping TSM's file system character.


Just 2 examples of many imaginable:

1) While using existing capabilities of TSM,
run backups against shared resources,
like dsmc incr \\server\resource\*.
This way, if the resource is moved to another location on the same file
server (along with the files it contained),
the backup would not notice the change of physical location.
Neither would the TSM staff.

2) New TSM functionality: an added description for incremental
backups, analogous to archives:

dsm.opt:
include.file c:\application1DataDirectory\...\*
description=BackupDescription#1
descriptionValidIn=allNodesInTheCurrentTSMDomain

This would require a manual change of the OPT file when moving the directory
to another location,
but it would only have to be documented once and for all,
for the users of application1,
how to restore their application data.
The documentation would remain valid even after the data had been moved to
another server.


Hmm?

best regards
Juraj Salak




-Original Message-
From: Richard Sims [mailto:[EMAIL PROTECTED]
Sent: Friday, 27 February 2004 14:08
To: [EMAIL PROTECTED]
Subject: Re: Strange RETRIEVE behaviour


Is there no way to avoid specifying the entire source path?

The reason I want to do this is that I have to be able to retrieve files
(from several years ago) with an automated procedure, and I don't want to
get into the hassle of finding out the source path by issuing one or more
query commands from the client side.

Well, objects have to have some name in order to be identified for
selection, and this is fulfilled by the file system path spec for the
file.  This is just basic addressing.  I gather that you have files
archived from various places in the file system: if so, then you obviously
have no recourse but to specify the path as part of the Retrieve...to
address the object you want back.  If all archived files are from the same
directory, then you can 'cd' into that directory and then retrieve by file
name, where the path is implicit in your current directory.  But either
way you are dealing with the realities of file system locationing.
I think what you're looking for is a whole new paradigm in the addressing
of file objects; but that is incongruous with the accommodations of file
system applications such as TSM.

  Richard Sims,  http://people.bu.edu/rbs

