Re: upgrade of 200 GB DB

2007-07-17 Thread Steven Harris

Kathleen

I had a client that I upgraded to 5.4, and consistently one of the TDP
for SQL instances refuses to terminate - the backup works correctly but
the session hangs around forever, even after the next day's schedule
has run.  But it is only on the one SQL client - others at the same
client code level have no issue.  I assume that they will terminate
after the large COMMTIMEOUT value is reached, but I've since moved on
from that company and can't verify.

Regards

Steve

Steven Harris
TSM Admin
Sydney Australia



Kathleen M Hallahan wrote:

...
Of greater concern is that performance at the new code does not seem
to be as good, in a rather vague and intermittent way.  We are seeing
behavior that suggests it's not always releasing network
communications properly, but not all the time and not for all
communications.  We have not seen this in either of the two much smaller
environments it's been installed on.  So if you have a lot of
connections on a daily basis, you might start seeing some impact.  We've
opened a ticket with IBM.

I would love to know if you (or anyone else) see any behavior differences
over time at 5.4.

...




Re: Client connection makes TSM server neurotically talk to itself.

2007-09-13 Thread Steven Harris
Hi All
I've got some new Windows nodes just defined on two different AIX TSM
servers, and they are getting ANR8214E errors with the 127.0.0.1 address on
their overnight schedules. This is the same issue that Allen Rout posted
about on July 31 (post included below).  Now the funny thing is that I set
up a clientaction backup yesterday on these nodes and recycled the CAD
service; the clientaction backup ran with no problem, the client then found
its next schedule, and all appeared to be well - but then, last night, more
ANR8214E errors.

Client 5.4.1.2; servers both AIX at 5.2.4.1 (yes, an upgrade is in the works)

Allen - Did you ever get a resolution to this?

Regards

Steve
Steve Harris
TSM Admin
Sydney Australia

Allen Rout posted this on July 31

>So my 'INT' TSM server has been up for working on two months.  Current
>sessions are in the 220K zone.  Yesterday, I noticed an odd pattern,
>which has evidently been happening for some time.


>07/30/07   18:00:02 ANR8214E Session open with 127.0.0.1 failed due to
>connection refusal. (SESSION: 1209)
>07/30/07   18:00:02 ANR2716E Schedule prompter was not able to contact
>client [some client name] using type 1 (127.0.0.1 1501). (SESSION: 1209)
>07/30/07   18:00:33 ANR8214E Session open with 127.0.0.1 failed due to
>connection refusal. (SESSION: 1209)
>07/30/07   18:01:03 ANR8214E Session open with 127.0.0.1 failed due to
>connection refusal. (SESSION: 1209)
>07/30/07   18:01:33 ANR8214E Session open with 127.0.0.1 failed due to
>connection refusal. (SESSION: 1209)

>Continuing every 30 seconds until midnight.

>The client in question has a perfectly normal (private) IP in its
>node entry.

>The session quoted in the connection refusal message seems ludicrously
>low.  I'm not sure where that came from.

>Apparently the box has been doing this for months. :) Anybody have a
>clue what's happening in there?


>- Allen S. Rout


Re: Size of TDP backup

2007-09-16 Thread Steven Harris
My actlog shows

09/16/07   18:02:29  ANE4991I (Session: 948123, Node: XXX100_ORA)  TDP
Oracle
  AIX ANU0599 TDP for Oracle: (1691684): =>(xxx100_ora)
  ANU2535I File /orc9_db//df_XXX1_633463382_13458_1 =
  80740352 bytes sent (SESSION: 948123)

Maybe you could collect and parse these messages.
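To illustrate the collect-and-parse idea, here is a minimal sketch. The admin id, password and actlog query in the comment are placeholders, and it assumes each ANU2535I message has been rejoined onto one line:

```shell
# Sketch: total the bytes reported in TDP for Oracle ANU2535I messages.
# Assumes each message is on a single line, e.g.
#   ANU2535I File /orc9_db//df_... = 80740352 bytes sent (SESSION: 948123)
sum_tdp_bytes() {
    awk '/ANU2535I/ {
             for (i = 1; i < NF; i++)       # the field before "bytes" is the size
                 if ($(i + 1) == "bytes")
                     total += $i
         }
         END { printf "total bytes sent: %d\n", total }'
}
# Real use might look like (admin id and query text are placeholders):
#   dsmadmc -id=admin -password=secret "q actlog begindate=today" | sum_tdp_bytes
```

Summing per SESSION number instead of overall would be a small extension of the same awk program.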

HTH

Steve

Steven Harris
TSM Admin
Sydney Australia




Daad Ali <[EMAIL PROTECTED]> wrote to ADSM-L@VM.MARIST.EDU on 16/09/2007 03:20 PM,
Subject: [ADSM-L] Size of TDP backup:

Another newbie question:

  I have been asked to find out the size of an Oracle backup made using
RMAN/TSM.

  In my preliminary investigation, I noticed that a TDP backup (unlike a
filesystem backup) does NOT give a status report of how the backup went.

  Is there any way I can get that report?


  Thanks as always,
  Daad





ITDT and library microcode updates

2007-09-27 Thread Steven Harris
Hi All

I'm doing microcode updates from AIX on several tape libraries during the
coming weeks, and I've just done the first one on a TS3500/3584.

Using the tapeutil tool's "tape drive service aids"  (because there is no
ethernet connection to the library - I know!) the library upgrade took 25
minutes to download its 6 or so megabytes.  I then used the ITDT tool to
update the drives and that only took a few minutes for each.

I don't have the luxury of a test environment.  Has anyone tried to use the
ITDT tool to upgrade their *library* microcode, and if it worked, was it
any faster than the other way?

Thanks

Steve.

Steve Harris
TSM Admin
Sydney Australia


Subfile backup > 2GB

2007-10-10 Thread Steven Harris
Hi All

I'm starting to back up a lot of VMWare disk images as for some reason the
VMware people are unwilling to install standard BA clients on their VMs and
prefer to snapshot their disk to a Windows box and back it up from there.

This would be a great application for the Subfile backup facility except
that the snapshots are over 2GB in size.

Has there been any notion of increasing the size limit on subfile backup?
Is it even possible, or is there some 32-bit limitation in the architecture
of the facility?

Regards

Steve.

Steven Harris
TSM Admin
Sydney Australia


Re: Scheduling a full archive of data once/month

2007-10-22 Thread Steven Harris
Joni

Does the requirement explicitly say an archive, or just that there is to be
a point-in-time backup once a month that is kept for 7 years?

The problem with archives is that you must explicitly specify what is to
be archived, and that is not always easy to know.  At one place I worked I
kept a record of the filesystems on each node, and once a month compared
last month's list with the current filesystems and programmatically
re-generated the archive commands.
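That monthly compare-and-regenerate step could be sketched like this. The file names, the MONTHLY management class and the dsmc options are illustrative, not from the original post; both input files hold one mount point per line, sorted:

```shell
# Sketch: compare last month's filesystem list with the current one and
# emit archive commands, flagging filesystems that appeared in between.
gen_archive_cmds() {    # $1 = last month's list, $2 = current list
    comm -13 "$1" "$2" | while read fs; do
        echo "# NEW since last month: $fs"   # flag additions for review
    done
    while read fs; do
        echo "dsmc archive \"$fs/\" -subdir=yes -archmc=MONTHLY"
    done < "$2"
}
```

The generated command file can then be run from the monthly schedule, and the current list becomes next month's "last month" file.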

Archives are also not good from an include-exclude point of view: you
have to explicitly code an include.archive or exclude.archive whenever you
add a backup rule, and you *may* want to exclude some files (e.g.
databases) from backup most of the time, but not from your monthly
archives.

The better solution to most of this is a second TSM Server which does a
monthly incremental backup.  Less data backed up, less data stored, less DB
space used :)  more complexity on the client, more complexity in the server
setup and node maintenance :(

If you also have the requirement that one backup a year be kept
indefinitely (in my case this was government healthcare, and some records
needed to be kept for the life of the patient + 20 years, so the
simple-minded approach was: keep everything), there may even be a case for
a yearly incremental setup as well, although these days a backupset will
probably do the job as effectively.

Finally, one idea that I toyed with but never tried was a periodic export
backupactive, server to server, with the fromdate/fromtime options.  The
management classes on the receiver would have to have the same names as on
the sender, but with different retentions.
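A sketch of what that export might look like from an administrative session. The server name, node name and date are placeholders, and this is untested - check `help export node` at your server level before relying on it:

```shell
# Hypothetical admin command - a sketch, not a tested procedure.
# Sends only active backup versions changed since the given date to a
# second server whose management classes carry the long retentions.
dsmadmc -id=admin -password=secret \
  "export node NODE1 filedata=backupactive fromdate=12/01/2007 toserver=ARCHSRV"
```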

Hope this Helps

Steve

Steven Harris
TSM Admin

Sydney, Australia



Joni Moyer <[EMAIL PROTECTED]> wrote to ADSM-L@VM.MARIST.EDU on 23/10/2007 12:49 AM,
Subject: [ADSM-L] Scheduling a full archive of data once/month:

Hello everyone,

I have just received the task of scheduling an archive of all data from
several windows servers the first Saturday of every month for 7 years.
What is the best method of accomplishing this?  I would like to do this
from the TSM server which is an AIX 5.3 TSM 5.3.5.2 server.  Any
suggestions are greatly appreciated!


Joni Moyer
Highmark
Storage Systems, Storage Mngt Analyst III
Phone Number: (717)302-9966
Fax: (717) 302-9826
[EMAIL PROTECTED]



Re: Shrinking the SQL logs after TDP full backup

2007-10-30 Thread Steven Harris
Rick,

I don't normally take this line - I like to be helpful - but why is this
your problem?

We have our areas of expertise, and DBAs have theirs.  Big logs fall neatly
into theirs, IMO.  Or are you in one of those one-man-band shops?

Regards

Steve

Steven Harris
TSM Admin

Sydney Australia






Rick Harderwijk wrote to ADSM-L@VM.MARIST.EDU on 31/10/2007 08:09 AM,
Subject: [ADSM-L] Shrinking the SQL logs after TDP full backup:

Hi *

I have been staring at this problem for too long now and I need it fixed.
We have some rather large transaction logs on our MSSQL 2005 servers
because some backups went wrong for some time. Now that the backups are
OK again, I need to shrink those log files. I could do that from osql with
dbcc shrinkfile, but I'd rather make it stick - so I thought I'd put it in
as part of the TDP backup cycle. However, I cannot figure out how to do it.
I do not have PowerShell installed on the machines, so that is out. DMO is
a way to go, but it will be discontinued, so why bother; and I just can't
get it right with SMO and VBScript. I'm no expert in VBScripting, but with
the right examples I can usually manage something that seems as basic as
what is outlined below...

What I need is:

1. Check what databases are on the server (as new databases are added over
time, I do not wish to rewrite the script every time).
2. Get the physical names of the logfiles for each and every one of those
databases.
3. Shrink the logfiles so they free up disk space.

@2 - I might need to add an extra check to make sure I only shrink the
logfiles of the databases that are in full recovery mode.
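One low-tech way to sketch steps 1-3 without SMO: generate the shrink statements from a database/log-name list. The osql query in the comment and the shrink target size are assumptions, not a tested procedure:

```shell
# Sketch only.  Turns "dbname logical_log_name" pairs into DBCC
# SHRINKFILE statements.  In real use the pairs would come from a query
# such as (untested, SQL Server 2005 catalog views):
#   osql -E -h-1 -Q "SET NOCOUNT ON; SELECT d.name, f.name
#     FROM sys.databases d JOIN sys.master_files f
#       ON f.database_id = d.database_id
#     WHERE d.recovery_model_desc = 'FULL' AND f.type_desc = 'LOG'"
gen_shrink_sql() {      # reads "dbname logname" pairs on stdin
    while read db logname; do
        printf 'USE [%s]; DBCC SHRINKFILE ([%s], 1);\n' "$db" "$logname"
    done
}
# The generated statements could then be fed back through osql as a
# post-backup step in the TDP schedule.
```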

I'm sure people are doing this - otherwise all the transaction logs would
be
way too big - but I just cannot find the info I need.

I was hoping maybe you could help...


Cheers,

Rick


Re: Backup schedule syntax?

2007-11-07 Thread Steven Harris
Joni

When I've done this in the past I have liked to create a new "named"
management class for each application that is doing long-term archiving,
even if the requirements are the same as an already existing class.  The
reason is that if requirements change for this one app - they suddenly
need 10 years or 3, or you need to send that data to an out-of-library
storage pool - you can just change the one copygroup definition and the
change is made, without affecting any other app's data.

Also, I don't like to name long-term management classes after their
retention periods as you have done, because system admins will find your
class, decide it would be a good idea to back stuff up for 7 years, and
you will have no idea that it is happening.
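For example, a named per-application class might be set up like this. The domain, policy set, pool and class names are all illustrative, and the command sequence is a sketch rather than something tested:

```shell
# Hypothetical dsmadmc sketch: one named class per application, so a
# later retention change touches only that application's data.
dsmadmc -id=admin -password=secret <<'EOF'
define mgmtclass STANDARD ACTIVE PAYROLL_ARCH description="Payroll long-term archive"
define copygroup STANDARD ACTIVE PAYROLL_ARCH type=archive destination=ARCHPOOL retver=2555
activate policyset STANDARD ACTIVE
EOF
```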

HTH

Steve

Steven Harris
TSM Admin
Sydney Australia

"ADSM: Dist Stor Manager"  wrote on 08/11/2007
08:39:57 AM:

> To expound on Richard's admonitions about the size of the archive,
> consider a slight modification:
>
>  objects="c:\archive7yr\,d:\archive7yr\"
>
> Have your users place files they want archived into that directory with
> the knowledge that you back it up on a monthly basis.  Then delete the
> contents of the directory.
>
> Kelly J. Lipp
> VP Manufacturing & CTO
> STORServer, Inc.
> 485-B Elkton Drive
> Colorado Springs, CO 80907
> 719-266-8777
> [EMAIL PROTECTED]
>
> -Original Message-
> From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of
> Richard Sims
> Sent: Wednesday, November 07, 2007 1:26 PM
> To: ADSM-L@VM.MARIST.EDU
> Subject: Re: [ADSM-L] Backup schedule syntax?
>
> On Nov 7, 2007, at 2:05 PM, Joni Moyer wrote:
>
> > ...
> > options="'-archmc=arc2555 -subdir=yes" objects="c:\* d:\"
>
> I believe that should be:
>   options="-archmc=arc2555 -subdir=yes" objects="c:\* d:\*"
>
> Be prepared for a great increase in your TSM server space utilization
> every month, as Archive is unconditional, and so will result in the same
> old data being archived again and again.  Also, unless you have the
> right Excludes prepared, you will be archiving everything on the drives,
> which seemingly will include a lot of system files which will be
> worthless space-consumers in the archive storage pool and database.
>
> Richard Sims


Re: RES: chamado 02652 / TDP Domino em Linux

2007-11-19 Thread Steven Harris
The TDP for Domino is a sensitive beast.  One of the reasons for this is
that it uses a wrapper script, which runs in every environment that Domino
can run in, to determine the operating system, architecture and other
relevant information.

I suggest that you re-install the TDP from scratch, and pay careful
attention to all of the questions that the installation script asks you. A
wrong answer here can ruin everything.

Second, always run under the id of the Domino instance owner.

Third, never run domdsmc directly.  Always run the domdsmc_<instance>
command that the installation process generates for you.  Yes, there is an
alias, but it takes very little to upset that.



Steven Harris
TSM Admin
Sydney Australia.



   
Vinicius Fraga <[EMAIL PROTECTED]> wrote to ADSM-L@VM.MARIST.EDU on 20/11/2007 09:16 AM,
Subject: [ADSM-L] RES: chamado 02652 / TDP Domino em Linux:

Hi, here are some details of my problem with TDP for Domino on Linux.

I am getting the following error:



mail1:/opt/tivoli/tsm/client/domino/bin # su - notes

[EMAIL PROTECTED]:/opt/tivoli/tsm/client/domino/bin> ./domdsmc query domino



IBM Tivoli Storage Manager for Mail:

Data Protection for Lotus Domino

Version 5, Release 4, Level 2.0

(C) Copyright IBM Corporation 1999, 2007. All rights reserved.





ACD5130E Could not initialize the connection to Lotus Domino properly.
error=1543





IBM support sent me some solutions (what I did follows each one):



Solutions:



1. Make a copy of the .profile being used by the UNIX id that runs domdsmc
(just as a backup).

RS: copy to .profile.old



2. In the .profile for the Notes user that is going to run the TDP Domino
backup, change the PATH variable to include the /opt/lotus/bin directory
and the directory where the notes.ini file is located (the notes data
directory). Do not include the Domino executable directory. On AIX, this
would look something like:

- export PATH=/opt/lotus/bin:/local/notesdata:$PATH:.

Note the /opt/lotus/notes/latest/ibmpow directory was removed from the
PATH.

edit  .profile

export PATH=/opt/ibm/lotus/bin:/notesdb:$PATH





3. Ensure that the Owner and Group for the domdsmc files/directory is set
to the same name as the Domino Server Notesdata files.

a) run chown -R <user> <TDP install directory>/*

- For Example: chown -R notes /usr/tivoli/tsm/client/domino/bin/*

b) run chown <user> <TDP install directory>

- For Example: chown notes /usr/tivoli/tsm/client/domino/bin

c) The permissions for domdsmc need to be set to include the SUID and
SGID bits for the Owner and Group (-rwsrws--x).

- For Example: chmod 6771 /usr/tivoli/tsm/client/domino/bin/domdsmc

d) chmod 755 /usr/tivoli/tsm/client/domino/bin

mail1:/home/notes # chown -R notes /opt/tivoli/tsm/client/domino/bin/*

mail1:/home/notes # chown notes /opt/tivoli/tsm/client/domino/bin/*

mail1:/home/notes # chmod 6771 /opt/tivoli/tsm/client/domino/domdsmc

mail1:/home/notes # chmod 755 /opt/tivoli/tsm/client/domino/bin/





4. Verify that the symbolic link for domdsmc in the Domino executable
directory is pointing to the domdsmc executable in the TDP install
directory. On AIX, this would look like:

- /opt/lotus/notes/latest/ibmpow/domdsmc ->
/usr/tivoli/tsm/client/domino/bin/domdsmc

This was already OK:

mail1:/opt/ibm/lotus/notes/latest/linux # ls -ln | grep /opt

lrwxrwxrwx 1 0 0   41 Nov  7 14:16 domdsmc -> /opt/tivoli/tsm/client/domino/bin/domdsmc

lrwxrwxrwx 1 0 0   55 Nov  7 14:29 domdsmc_notes -> /opt/tivoli/tsm/client/domino/bin/domdsmc_notes/domdsmc

lrwxrwxrwx 1 0 0   41 Nov  7 14:16 dsmdomp

Re: migrating to different tape technology (2 libraries)

2007-11-21 Thread Steven Harris
Hi Geoff

Switch your DB backups to the new technology and start shipping offsite.
(run DBsnapshots to the old technology if you are a nervous nellie)

Create another copypool on the new technology.  Run a backup stg whenever
you have some spare cycles/drives until the pool is up to date. Then start
shipping the tapes and stop using the old pool.  Run delete vol on all the
old copypool volumes to free up db space.

Next make the nextstg and reclaimstg of your old primary pool point to the
new technology primary pool.  Mark the old pool read-only.  New backups and
the output of reclaims will go to the new technology.  Run mig stg on the
old primary pool whenever you have spare cycles/drives.

Finally clean up.
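In admin-command terms, the steps above look roughly like this. Pool and volume names are illustrative; treat it as a sketch of the sequence, not a tested runbook:

```shell
# Hypothetical names: OLDTAPE/NEWTAPE primary pools, NEWCOPY the new copy
# pool, OLDCOPY001 an old copy-pool volume.
dsmadmc -id=admin -password=secret <<'EOF'
backup stgpool OLDTAPE NEWCOPY maxprocess=2
update stgpool OLDTAPE nextstgpool=NEWTAPE reclaimstgpool=NEWTAPE access=readonly
migrate stgpool OLDTAPE lowmig=0
delete volume OLDCOPY001 discarddata=yes
EOF
```

The backup stgpool and migrate stgpool steps are the ones to repeat in spare cycles until the old pools are empty.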

Regards

Steve

Steven Harris
TSM Admin, Sydney Australia





"Gill, Geoffrey L." wrote to ADSM-L@VM.MARIST.EDU on 22/11/2007 09:45 AM,
Subject: [ADSM-L] migrating to different tape technology (2 libraries):

When migrating data from one type of tape to another (2 libraries), with
both types on the same TSM server, what is the best method(s) that would
move data from the old type to a new type, create offsite volumes with
the new type and remove data from both onsite and offsite old type tapes
so they can be removed and the drives and library deleted? I am guessing
there are multiple steps in order to accomplish this.



Thanks,



Geoff Gill
TSM Administrator
PeopleSoft Sr. Systems Administrator
SAIC M/S-G1b
(858)826-4062
Email: [EMAIL PROTECTED]


Re: PLEASE I NEED YOUR HELP TO DEFINE AN EXCEPTION IN A SCHEDULER

2007-12-03 Thread Steven Harris
Why do you say polling mode, Robben?  That is the big advantage of
prompted mode - you don't have to wait for the next poll to pick up your
changes.  You certainly don't have to restart the CAD or scheduler process
to pick up a schedule change, only for DSM.SYS/DSM.OPT changes.

Steven Harris
TSM Admin
Sydney Australia



   
Robben Leaf <[EMAIL PROTECTED]> wrote to ADSM-L@VM.MARIST.EDU on 04/12/2007 04:07 AM,
Subject: Re: [ADSM-L] PLEASE I NEED YOUR HELP TO DEFINE AN EXCEPTION IN A SCHEDULER:




Unfortunately, in the Define Schedule command, the DAYOFMONTH parameter
doesn't play nicely with the DAYOFWEEK parameter, according to the manuals.

One way to get around this is to define a couple of Administrative
schedules that run once a month, one that will Delete Association for the
nodes associated with the schedule, and another to Define Association for
those nodes. The clients will need to be in Polling mode to catch the
updates without having their schedulers restarted, and someone will have to
remember to update these Administrative schedules every time a node is
added to or removed from the Client schedule.
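Those two administrative schedules might be sketched as follows. The schedule, domain, schedule and node names are illustrative, and DAYOFMONTH on administrative schedules requires the enhanced schedule style - verify the syntax at your server level:

```shell
# Hypothetical: drop the association before the 1st, restore it afterwards.
dsmadmc -id=admin -password=secret <<'EOF'
define schedule DROP_ASSOC type=administrative active=yes schedstyle=enhanced dayofmonth=-1 starttime=23:00 cmd="delete association STANDARD DAILY_INCR node1,node2"
define schedule ADD_ASSOC type=administrative active=yes schedstyle=enhanced dayofmonth=1 starttime=23:00 cmd="define association STANDARD DAILY_INCR node1,node2"
EOF
```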

Robben Leaf




"Wladimir Benavides" <[EMAIL PROTECTED]> wrote to ADSM-L@VM.MARIST.EDU on 12/03/2007 09:03 AM,
Subject: [ADSM-L] PLEASE I NEED YOUR HELP TO DEFINE AN EXCEPTION IN A SCHEDULER:

Hi:

Please, I need your help to define an exception in a schedule.

I want to define a schedule that runs daily on Monday, Tuesday, Wednesday,
Thursday and Friday, but must not run on the first day of every month. I
can define the schedule, but I don't know how to define this exception.

Thanks and regards.










Re: Vista oddness

2007-12-09 Thread Steven Harris
Well Paul, it looks like you need a low-tech solution.

I assume Vista has some sort of NTBACKUP equivalent?

Run your NTBACKUP of the systemstate as a precommand and put it through the
GNU split utility to give you files of, say, 1GB in size. Do it as a script
and add a sleep at the end to give VSS time to quiesce before the TSM
backup starts.

Exclude systemstate from the TSM backup (or maybe put it in a management
class with a frequency of 14), but turn on subfile backup for the split
backup files.

Voila - poor man's dedup.
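The precommand script might be sketched as follows. The paths, piece size and sleep are illustrative; on the real box this would wrap the NTBACKUP-equivalent output:

```shell
# Sketch: carve a large system-state dump into fixed-size pieces so that
# TSM subfile backup only sends the changed blocks of each piece.
split_backup() {    # $1 = dump file, $2 = output dir, $3 = piece size (bytes)
    mkdir -p "$2"
    split -b "$3" "$1" "$2/systemstate.part."
    sleep 1         # placeholder for the pause that lets VSS quiesce
}
```

Each ~1GB piece then stays under the 2GB subfile limit, so only changed blocks travel on subsequent nights.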

Regards

Steve

Steven Harris
TSM Admin - Sydney Australia






Paul Zarnowski <[EMAIL PROTECTED]> wrote to ADSM-L@VM.MARIST.EDU on 08/12/2007 10:32 AM,
Subject: Re: [ADSM-L] Vista oddness:


Joanne,

Thanks very much for the detailed response.  We really need relief on
this.  We have a couple of thousand Windows systems, and they will
eventually be upgraded to Vista.  As they do so, they will uncover
this huge problem: a problem for them, in that their backups will
run longer and they will be storing much more data than they need;
and also a problem for the TSM server administrators, as they put an
increasingly huge load on the network and TSM server infrastructure.

This solution, as it is now, is virtually unworkable for us.  The
clock is ticking, and we need relief ASAP.  Waiting for the next
major release is too long, IMHO.  It would have been nice if this
had been addressed with the initial support for Vista in TSM, but that's
water over the dam now.

Thanks for listening.

..Paul

At 11:40 AM 12/7/2007, Joanne T Nguyen wrote:
>David,
>
>You are seeing the correct behavior.  If you have the default domain
>backup, system state will be part of the backup.  On Vista, system state
>is in GB because we're backing up the windows\winsxs and
>system32\driverstore folder.  Please see the link below where MS describes
>in-box writers.  System state consists of all the bootable system state and
>system services writers.  Though 8GB seems high.  Our testing
>shows about 5GB.
>
>http://msdn2.microsoft.com/en-us/library/aa819131.aspx
>
>For Windows 2003, TSM implements a way to back up the system files
>component of the system state only if something is changed.  So it
>is possible to backup only about 30-40 files the 2nd time and thereafter if
>no fixes or SP were applied after the initial system state backup.
>During Vista development, we noticed some files were always changed, so
>instead of spending the cycles to compare each file - there are
>30,000-40,000 files now - we decided to back up all the time.  This is one
>area we will revisit.
>
>If you have vshadow tool from the MS VSS SDK, you can do "vshadow -wm2" to
>see all the files that should be part of the backup.  Please
>let me know if you have further questions.
>
>Regards,
>Joanne Nguyen
>TSM Client Development
>
>
>
>
>
>"Tyree, David" <[EMAIL PROTECTED]> wrote to ADSM-L@VM.MARIST.EDU on 12/07/2007 06:47 AM,
>Subject: Re: Vista oddness:
>
>I did a backup using the GUI and selected system state along with the C:
>drive. The backup was 8 gig when it finished.
>
>I went back and did a C: drive only and it was only a few hundred meg.
>Then I did a system state only and got the 8 gig again.
>
>That system state in Vista is just crazy. I need to go back and really
>look at some of my servers and see just how big the system state backups
>are. I'll also take a close look at a few Win XP Pro desktops that I'm
>backing up and see what the numbers look like.
>
>
>-Original Message-
>From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of
>Wanda Prather
>Sent: Friday, December 07, 2007 2:35 AM
>To: ADSM-L@VM.MARIST.EDU
>Subject: Re: [ADSM-L] Vista oddness
>
>Don't know myself, but someone 

Re: Vista oddness

2007-12-09 Thread Steven Harris
Yes, they will be backed up every time, but only the blocks that have
changed will be sent - that's why I specified subfile backup.

"ADSM: Dist Stor Manager"  wrote on 10/12/2007
03:03:53 PM:

> Might work...  if only I had time to work on such a thing and test it
> out...  Wouldn't the 1GB files be backed up every time, since the
> timestamp will have changed?
>
> At 07:40 PM 12/9/2007, you wrote:
> >Well Paul  it looks like you need a low tech solution
> >
> >I assume Vista has some sort of NTBACKUP equivalent?
> >
> >Run your NTBACKUP of the systemstate as a precommand and put it through
the
> >GNU split utility to give you files of say 1GB in size. do it as a
script
> >and add a sleep at the end to give VSS time to quiesce before the TSM
> >backup starts.
> >
> >Exclude systemstate from the TSM backup (or maybe put it in a class with
> >freq of 14) , but turn on subfile backup for the split backup files.
> >
> >Voila - poor man's dedup.
> >
> >Regards
> >
> >Steve
> >
> >Steven Harris
> >TSM Admin - Sydney Australia

TDP for Domino Upgrade procedure

2007-12-09 Thread Steven Harris
Hi All

I've recently finished rolling the 5.4.1.0 server through a major customer
and am now bringing all the clients up to date.

The TDP for SAP manual has a nice section on how to perform a migration
install, which lays it all out for you.  Very nice, and kudos to the guys
who wrote the manual.  The TDP for Domino manual, on the other hand...

All that exists in the TDP for Domino manual is the original bare install
instructions.  The original config procedure is somewhat intricate,
involving running the dominstall script and answering a million questions
accurately.  In a multiple-instance scenario this is both tedious and
error-prone.

Has anyone done the TDP upgrade and can give me advice on this? Are there
any short cuts?

Environment is AIX 5.2 and 5.3, BA Client 5.4.1.2, Domino 6.5, TDP for
Domino was 5.3.0.0 going to 5.4.2.0.

All hints gratefully received.

Thanks

Steve

Steven Harris
TSM Admin - Sydney Australia


Re: VCB Integration, specifically how to do incrementals

2007-12-18 Thread Steven Harris
Hi Steve

I can't help with your query, but are you using the new facilities of 5.5
client?  If so I would be most grateful if you could post your experiences
here when you have some to share.

Thanks

Steve Harris
TSM Admin, Sydney Australia

"ADSM: Dist Stor Manager"  wrote on 19/12/2007
06:45:22 AM:

> I'm working on integrating TSM into VMWare's VI3 VCB, and I'm running
> into a minor glitch.
>
> I'm thinking about doing a weekly full VCB snap, and using TSM to
> archive these - this works no problem.
>
> I want also to do a daily file-level VCB snap, and perform a TSM
> incremental on the mounted filesystem.
>
> The problem I'm having is that my script isn't going to know what
> volumes are on each vm, and because VCB uses ntfs mount points, I have
> to explicitly identify each one in my incremental backup command.
>
> Anyone have a good way of finding and parsing a list of mount points
> within a windows .cmd batch script?
>
>
>
> Thanks,
>
>
>
> Steve Schaub
>
> Systems Engineer, Windows
>
> Blue Cross Blue Shield of Tennessee
>
> 423-535-6574 (desk)
>
> 423-785-7347 (mobile)
>
>
>
> Please see the following link for the BlueCross BlueShield of
> Tennessee E-mail disclaimer:  http://www.bcbst.com/email_disclaimer.shtm


Atape Driver ANR8779E and Windows 2000

2008-01-02 Thread Steven Harris
Hi All

I have a problem I'd like your opinions on

My customer is running a Windows2000 TSM server (5.3.4.0).  Library is 3584
with 4 each LTO2 and LTO3 drives, LTO2 media.  The largest clients use
storage agents to access the tapes directly.

I'm getting errors ANR8779E Unable to open drive mtx.y.z.n error number=170

This is addressed in
http://www-1.ibm.com/support/docview.wss?rs=0&q1=ANR8779E&uid=swg21269235&loc=en_US&cs=utf-8&cc=us&lang=en

The server is Windows2000 and already is using the device driver that is
supposed to contain the fix.  There will be no later versions of device
driver for this OS so there is no hope for a direct cure.

The readme for the Atape driver states that there are two versions of the
driver,  one for TSM that allows only one open at a time and one for use
with Windows RSM that allows for multiple opens. See para 12 of
ftp://ftp.software.ibm.com/storage/devdrvr/Windows/Win2000/Latest/IBMTape.W200x.Readme.txt

Since only TSM is going to be using these drives, is it reasonable to
install the non-TSM version of the driver to get around the problem?  Is
anyone doing this, or has anyone developed another workaround for the
ANR8779E problem?  Alternatively, is it possible to revert to an earlier
version of the driver that doesn't exhibit the problem?  I'm floating in an
unsupported sea here and looking for a life preserver!

Thanks

Steve

Steven Harris
TSM Admin
Sydney Australia


Re: SV: Suggestion for Archive?

2008-01-03 Thread Steven Harris
Neil, that is an interesting approach to the problem and set me thinking.

How about this -

Set up an active pool on tape for this data.
On the designated day,  synchronize the active pool , take a database
backup, eject the active pool tapes and this DBB and send them off to the
vault.
Delete the volumes in the active pool with discarddata and start the sync
again.
To restore, use a separate TSM instance,  restore the DB and then restore
the data.

This takes the resource-intensive part of creating the copy out of the
critical path and allows it to be spread over the month.  The end of month
step only has to copy one day's data.  It will use some database space, and
even the deletion of the volumes is resource-intensive, but it might be
manageable.
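
As a server macro, the month-end step might look something like the sketch
below.  The pool, device class, and volume names are placeholders, and the
command forms are from memory, so check them against HELP at your server
level before relying on this:

```
/* ACTPOOL, TAPEPOOL, LTOCLASS and VOL001 are placeholder names.      */
copy activedata TAPEPOOL ACTPOOL maxprocess=2 wait=yes
backup db devclass=LTOCLASS type=full wait=yes
/* Eject the ACTPOOL volumes and the DBB for the vault, then restart  */
/* the cycle by emptying the active pool:                             */
delete volume VOL001 discarddata=yes    /* one per active-pool volume */
```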

Steven Harris
TSM Admin
Sydney Australia

"ADSM: Dist Stor Manager"  wrote on 04/01/2008
03:44:37 AM:

> One issue with a monthly/yearly backup/archive is that the changes
> that occur between events will not be captured.  If a file is
> created on March 3rd and deleted March 25th, a monthly
> backup/archive that runs on the first of each month will not capture this
> file.
>
> One method of retaining all data that is backed up nightly would be to:
> - Create the node name of the client reflecting the time period that
> data is backed up - i.e. "docserver-march08".
> - At the end of each time period, change the node name to the new
> time period "docserver-April08"
> - run a "export node docserver-march08" to tape and then put
> that tape in a safe place with associated recovery documentation.
> After the data has been successfully exported, delete all files
> associated with the exported node "docserver-march08".
> - Repeat for each time period.
>
> Your database will remain manageable.
> You will maintain daily recovery granularity.
> You MUST keep the recovery sequence documentation for exports which
> span multiple media.
> If you change media or backup platforms, you will have a bit of work
> importing and exporting to new media but, so it goes...
>
> Have a nice day,
> Neil Strand
> Storage Engineer - Legg Mason
> Baltimore, MD.
> (410) 580-7491
> Whatever you can do or believe you can, begin it.
> Boldness has genius, power and magic.
>
>
> -Original Message-
> From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On
> Behalf Of Henrik Wahlstedt
> Sent: Thursday, January 03, 2008 10:23 AM
> To: ADSM-L@VM.MARIST.EDU
> Subject: Re: [ADSM-L] SV: Suggestion for Archive?
>
> Hi,
>
> Everybody seems to fancy backupsets, exports and archives. I think
> Lloyd is right here suggesting normal backup under a different nodename.
> Normal monthly/yearly backups won't punish your DB so much as
> archives and you should be able to live with one TSM server instance.
> It is your Filers that will kill TSM if you try to archive them
> monthly with 10 years retention... Just test and let me know if I am
> incorrect. :-)
>
>
> //Henrik
>
>
> -Original Message-
> From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On
> Behalf Of Lloyd Dieter
> Sent: den 3 januari 2008 16:01
> To: ADSM-L@VM.MARIST.EDU
> Subject: Re: [ADSM-L] SV: Suggestion for Archive?
>
> Christian,
>
> Is the primary tape pool collocated?  If not, I'd strongly recommend
> it (although it's probably too late to help you with this issue).  I
> collocate all of my primary sequential access pools, and control
> usage with maxscratch and collocation groups.
>
> Do you have enough space in your disk pools to do a "move nodedata"
> from tape to disk pools in preparation for the generate backupset or
> export operation?
>
> The other option I've used is to define "monthly" or "yearly" nodes,
> with different node names, schedules and retention settings.  For
> example, "node_a" for the dailys, "node_a_monthly", "node_a_yearly".
> It's a nuisance to set up on the client side, but works well once
> it's in place and avoids database bloat.
>
> I think that you're going to have issues doing many archives like
> that, especially for your anticipated growth...even if it gets you
> past the immediate issue.
>
> Don't forget that you can set up a second TSM instance on the same
> physical box...that might help for what you're trying to accomplish.
>
> -Lloyd
>
>
> On Thu, 03 Jan 2008 13:58:20 +0100
> Christian Svensson <[EMAIL PROTECTED]> wrote thusly:
>
> > Hi Lloyd,
> > We have tried Backupset for a while now but we see that it takes approx
> > 3 weeks to archive all 300 nodes. If the backupset fails then do we
> > need to restart the entire job and w

Re: NetWare Client full backups running when incremental is selected

2008-01-17 Thread Steven Harris
Kevin,

Are you sure that someone hasn't set the mode to absolute in your backup
copygroup?  Some places run a weekly full by setting the mode to absolute
via an admin schedule and then back to modified in another schedule.  If
yours does this, could the "change back" have failed?

Regards

Steve

Steve Harris
TSM Admin, Sydney Australia





 "Kevin P. Kinder"
 <[EMAIL PROTECTED]
 V.GOV> To
 Sent by: "ADSM:   ADSM-L@VM.MARIST.EDU
 Dist Stor  cc
 Manager"
 <[EMAIL PROTECTED] Subject
 .EDU> Re: [ADSM-L] NetWare Client full
   backups running when incremental is
   selected
 18/01/2008 04:56
 PM


 Please respond to
 "ADSM: Dist Stor
 Manager"
 <[EMAIL PROTECTED]
   .EDU>






Thanks for the reply Mark. That isn't the case here - I only have three
NetWare servers that are running in a cluster - the other 20 or so are
standalone, and all are exhibiting the problem.


Kevin Kinder

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of
Stapleton, Mark
Sent: Thursday, January 17, 2008 2:15 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] NetWare Client full backups running when
incremental is selected

From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of
Kevin P. Kinder
> Server:  TSM running on z/OS 1.7
>  Just upgraded to 5.5
> My NetWare clients are now running full backups instead of the
> incrementals which are specified. It looks as if every Netware server
> that I have is doing this. Obviously, this is killing my disk and tape
> storage space. This is happening on client versions from 5.2-5.4

Are your NetWare boxes running in a clustered environment? If so, you
may have a situation where NetWare changes a file attribute on every
file on every cluster disk that gets moved from one node to another.
When the attribute gets changed, TSM sees it as a changed file and backs
it up again.

I ran across this issue a year or so ago at a customer site. I don't
know the details (the NetWare admin was fiercely protective of his
environment and wasn't forthcoming with help). If this is the case, the
issue lay with NetWare, not TSM.

--
Mark Stapleton
Berbee (a CDW company)
System engineer
7145 Boone Avenue North, Suite 140
Brooklyn Park MN 55428-1511
763-592-5963
www.berbee.com


Re: Another archive/expire query

2008-01-22 Thread Steven Harris
Angus,

Congrats to your organization for the longest disclaimer in the history of
the internet :-)

If it's a whole drive,

DEL FILESPACE node_name filespace_name TYPE=ARCHIVE

on the server might be the way to go.  As it's Windows, you may have to use
the fsid number and nametype=fsid to indicate the drive you want.
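
For example (NODE1 and the FSID value 2 are placeholders for illustration;
QUERY FILESPACE shows the FSID to use):

```
query filespace NODE1 format=detailed           /* note the FSID of the drive */
delete filespace NODE1 2 nametype=fsid type=archive
```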

Regards

Steve

Steven Harris
TSM Admin, Sydney Australia

"ADSM: Dist Stor Manager"  wrote on 23/01/2008
08:45:37 AM:

> I am busy helping an external supplier understand his own Tivoli
> system, which has become full.
>
> We plan to archive certain drive letters attached to a file server
> which contain data that will never change. The archive process is
> fine but what is the best way to remove references to the archived
> data from the main TSM database? Should I run a
>
> dsmc expire j:\*.*
>
> at the client? Will that expire the data immediately? How would
> subdirectories be handled? There appears to be no -subdir=yes option
> for the expire command. Alternatively, would
>
> dsmc delete backup j:\* -deltype=all
>
> get rid of the file references from the TSM database? I take it the
> deletion would get rid of the entries immediately.
>
> Sorry for the probably newbie queries. I haven't done any archiving
> before and it isn't my data ;-)
> Angus
>


Re: Another archive/expire query

2008-01-22 Thread Steven Harris
That's what the TYPE=ARCHIVE is for.  The default is to delete all types.



"ADSM: Dist Stor Manager"  wrote on 23/01/2008
10:16:11 AM:

> Thanks. You should have seen my efforts to get it cut down from the
> original version ;-)
>
> I got the impression from the BAClient redbook that delete filespace
> would remove backup AND archived copies. Is that incorrect?
>
> -Original Message-
> From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] Behalf Of
> Steven Harris
> Sent: 22 January 2008 22:54
> To: ADSM-L@VM.MARIST.EDU
> Subject: Re: [ADSM-L] Another archive/expire query
>
>
> Angus,
>
> Congrats to your organization for the longest disclaimer in the history
of
> the internet :-)
>
> If its a whole drive,
>
> DEL FILESPACE node_name filespace_name TYPE=ARCHIVE
>
> on the server might be the way to go.  As its windows you may have to use
> the fsid number and nametype=fsid to indicate the drive you want.
>
> Regards
>
> Steve
>
> Steven Harris
> TSM Admin, Sydney Australia
>
> "ADSM: Dist Stor Manager"  wrote on 23/01/2008
> 08:45:37 AM:
>
> > I am busy helping an external supplier understand his own Tivoli
> > system, which has become full.
> >
> > We plan to archive certain drive letters attached to a file server
> > which contain data that will never change. The archive process is
> > fine but what is the best way to remove references to the archived
> > data from the main TSM database? Should I run a
> >
> > dsmc expire j:\*.*
> >
> > at the client? Will that expire the data immediately? How would
> > subdirectories be handled? There appears to be no -subdir=yes option
> > for the expire command. Alternatively, would
> >
> > dsmc delete backup j:\* -deltype=all
> >
> > get rid of the file references from the TSM database? I take it the
> > deletion would get rid of the entries immediately.
> >
> > Sorry for the probably newbie queries. I haven't done any archiving
> > before and it isn't my data ;-)
> > Angus
> >


Re: Fw: DISASTER: How to do a LOT of restores?

2008-01-22 Thread Steven Harris
Nick

I may well have a flawed understanding here, but:

Set up an active-data pool.
Clone the domain containing the servers requiring recovery.
Set the ACTIVEDATAPOOL parameter on the cloned domain.
Move the servers requiring recovery to the new domain.
Run COPY ACTIVEDATA on the primary tape pool.

Since only the nodes we want are in the domain with the ACTIVEDATAPOOL
parameter specified, won't data from only those nodes be copied?

Regards

Steve

Steven Harris
TSM Admin, Sydney Australia

"ADSM: Dist Stor Manager"  wrote on 23/01/2008
11:38:17 AM:

> For this scenario, the problem with Active Storagepools is it's a
> pool-to-pool relationship.  So ALL active data in a storagepool would be
> copied to the Active Pool.  Not knowing what percentage of the nodes on
the
> TSM Server will be restored, but assuming they're all in one storage
pool,
> you'd probably want to "move nodedata" them to another pool, then do the
> "copy activedata."  Two steps, and needs more resources.  Just doing
"move
> nodedata" within the same pool will semi-collocate the data (See Note
> below).  Obviously, a DASD pool, for this circumstance, would be best, if
> it's available, but even cycling the data within the existing pool will
> have benefits.
>
> Note:  Semi-collocated, as each process will make all of the named nodes
> data contiguous, even if it ends up on the same media with another nodes
> data.  Turning on collocation before starting the jobs, and marking all
> filling volumes read-only, will give you separate volumes for each node,
> but requires a decent scratch pool to try.
>
> Nick Cassimatis
>
> - Forwarded by Nicholas Cassimatis/Raleigh/IBM on 01/22/2008 07:25 PM
> -
>
> "ADSM: Dist Stor Manager"  wrote on 01/22/2008
> 01:58:11 PM:
>
> > Are files that are no longer active automatically expired from the
> > activedata pool when you perform the latest COPY ACTIVEDATA?  This
would
> > mean that, at some point, you would need to do reclamation on this
pool,
> > right?
> >
> > It would seem to me that this would be a much better answer to TOP's
> > question.  Instead of doing a MOVE NODE (which requires moving ALL of
> > the node's files), or doing an EXPORT NODE (which requires a separate
> > server), he can just create an ACTIVEDATA pool, then perform a COPY
> > ACTIVEDATA into it while he's preparing for the restore.  Putting said
> > pool on disk would be even better, of course.
> >
> > I was just discussing this with another one of our TSM experts, and
he's
> > not as bullish on it as I am.  (It was an off-list convo, so I'll let
> > him go nameless unless he wants to speak up.)  He doesn't like that you
> > can't use a DISK type device class (disk has to be listed as FILE
type).
> >
> > He also has issues with the resources needed to create this "3rd copy"
> > of the data.  He said, "Most customers have trouble getting backups
> > complete and creating their offsite copies in a 24 hour period and
would
> > not be able to complete a third copy of the data."  Add to that the
> > possibility of doing reclamation on this pool and you've got even more
> > work to do.
> >
> > He's more of a fan of group collocation and the multisession restore
> > feature.  I think this has more value if you're restoring fewer clients
> > than you have tape drives.  Because if you collocate all your active
> > files, then you'll only be using one tape drive per client.  If you've
> > got 40 clients to restore and 20 tape drives, I don't see this slowing
> > you down.  But if you've got one client to restore, and 20 tape drives,
> > then the multisession restore would probably be faster than a
collocated
> > restore.
> >
> > I still think it's a strong feature whose value should be investigated
> > and discussed -- even if you only use it for the purpose we're
> > discussing here.  If you know you're in a DR scenario and you're going
> > to be restoring multiple systems, why wouldn't you do create an
> > ACTIVEDATA pool and do a COPY ACTIVEDATA instead of a MOVE NODE?
> >
> > OK, here's another question.  Is it assumed that the ACTIVEDATA pool
> > have node-level collocation on?  Can you use group collocation instead?
> > Then maybe I and my friend could both get what we want?
> >
> > Just throwing thoughts out there.
> >
> > ---
> > W. Curtis Preston
> > Backup Blog @ www.backupcentral.com
> > VP Data Protection, GlassHouse Technologies
> >
> > -Origin

Re: Inittab and Restarting TSM instance

2008-01-23 Thread Steven Harris
Hi David

My favourite way to restart  on AIX  (ksh) is

chitab $(lsitab autosrvr | sed -e "s/once/respawn/")

and then after it has started

chitab $(lsitab autosrvr | sed -e "s/respawn/once/")


But at one site where I worked, the guy who had set up TSM was a serious AIX
nerd - CATE and all that - and had set up the TSM servers and multiple TSM
clients for multiple Domino partitions as AIX subsystems, so you could use
the AIX startsrc, lssrc and stopsrc commands to manipulate them.  They were
also in a subsystem group so they could all be started and stopped with a
single command.  Very neat, though making a subsystem needed a little bit of
C voodoo.
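
For what it's worth, the SRC registration is roughly the sketch below.  This
is from memory: the subsystem name, group, dsmserv path, uid, and signal
numbers are all assumptions, not a tested recipe.

```
# AIX only - register dsmserv as an SRC subsystem in group "tsm"
mkssys -s tsmserv1 -G tsm -p /usr/tivoli/tsm/server/bin/dsmserv \
       -a quiet -u 0 -S -n 15 -f 9
startsrc -s tsmserv1      # or: startsrc -g tsm  to start the whole group
```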

It would be nice if IBM could leverage some of its AIX expertise to add
such niceties.  I suppose I can dream.


Steven Harris

TSM Admin, Sydney Australia
and now AIX admin is strictly verboten...

> James R Owen wrote:
> >
> > I'm also looking for advice: how best to make an inoperative AUTOSRVR
> > entry in /etc/inittab?  We leave the tiny default TSM service in the
> > installation directory for upgrade processing, and never want it to
> > start up automatically, but the upgrade process recreates the entry if
> > it has been removed.
>
> Change the "once" to "off" in the inittab entry.  I'm not sure if the
> installation/upgrade process will change it back or not, though.
>
> But don't do this if the running dsmserv was started by that autosrvr
> entry, or init will kill the running server!
>
> (Related trick for starting the TSM server after a TSM halt without
>  rebooting or manually running the dsmserv process:  change the "2"
>  to "2a" in inittab, then use "telinit a" to make init start the TSM
>  server the same way it would at boot.)
>
> --
> Hello World.                                 David Bronder - Systems Admin
> Segmentation Fault                           ITS-SPA, Univ. of Iowa
> Core dumped, disk trashed, quota filled, soda warm.     [EMAIL PROTECTED]


Re: ANR8963E & ANR1791W

2008-02-13 Thread Steven Harris
Andy

I have the same issue, AIX 5.3 TSM 5.4.1.0, TS3500, LTO2  but only on a
single drive.  I put this drive online and it works for a couple of hours
then the ANR8963E.
Some temp errors show in the AIX error log, but only for this drive.  The
drive tested ok but was replaced anyway.  No errors on the san,  none on
the library itself.

The only complicating factor is that TSM Server runs as non-root so san
discovery is not available.

I've been running with the drive offline for a month now, since the
afternoon it first happened and still have no clue as to where to go from
here to debug it.

All I know is if I put it back online I get woken up sometime in the early
hours - so it's staying offline for now :)

Steven Harris
TSM Admin, Sydney Australia




From: Andy Huebner <[EMAIL PROTECTED]>
Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>
To: ADSM-L@VM.MARIST.EDU
Date: 13/02/2008 02:57 AM
Subject: [ADSM-L] ANR8963E & ANR1791W






We started seeing this error after we upgraded to 5.4.2.0:
ANR8963E Unable to find path to match the serial number defined for
drive XX in library IBM3494
This error causes the path to go off-line.  We update the path with
AUTODETECT=YES and it fixes the path every time.  But the error will
happen again and again... but to a random drive.

IBM asked us to turn on SAN Discovery to resolve the error which has
generated this error:
ANR1791W HBAAPI wrapper library /usr/lib/libHBAAPI.a(shr_64.o) failed to
load or is missing.

We are running AIX 5.3 and the correct version of the API is loaded.
This whole thing is starting to drive us nuts with paths dropping every
day and with IBM not having a solution yet.

Does anyone have a solution to these errors?  We are stuck and IBM is
working on it.

AIX 5.3
TSM 5.4.2.0
IBM3494
3590-E drives
A very static tape SAN

Andy Huebner



This e-mail (including any attachments) is confidential and may be legally
privileged. If you are not an intended recipient or an authorized
representative of an intended recipient, you are prohibited from using,
copying or distributing the information in this e-mail or its attachments.
If you have received this e-mail in error, please notify the sender
immediately by return e-mail and delete all copies of this message and any
attachments.
Thank you.


Failed files in summary on backup stg

2008-02-13 Thread Steven Harris
Hi All

I have noticed an interesting mismatch between summary records and the
ANR1214I message at the end of a storage pool backup.  It happens only once
every couple of days.

TSM Server 5.3.5.0 on AIX 5.3-04

tsm: 13>select * from summary where activity='STGPOOL BACKUP' and
entity='BACKPOOL -> COPYTAPE' and failed >0


  START_TIME: 2008-02-12 15:41:31.00
END_TIME: 2008-02-12 16:02:15.00
ACTIVITY: STGPOOL BACKUP
  NUMBER: 8522
  ENTITY: BACKPOOL -> COPYTAPE
COMMMETH:
 ADDRESS:
   SCHEDULE_NAME:
EXAMINED: 252759
AFFECTED: 252759
  FAILED: 8
   BYTES: 13848756224
IDLE: 0
  MEDIAW: 17
   PROCESSES: 2
  SUCCESSFUL: YES
 VOLUME_NAME:
  DRIVE_NAME:
LIBRARY_NAME:
LAST_USE:
   COMM_WAIT: 0
NUM_OFFSITE_VOLS:


8 failed objects on the backup stg of BACKPOOL to COPYTAPE in the summary
record, but the actlog shows
12-02-2008 16:02:15  ANR1214I Backup of primary storage pool BACKPOOL to copy
  storage pool COPYTAPE has ended.  Files Backed Up: 252759,
  Bytes Backed Up: 13848756224, Unreadable Files: 0,
  Unreadable Bytes: 0. (SESSION: 1679220)
This is perplexing. There are no other relevant messages in actlog. I ran a
backup stg again manually, and all files were copied as reported in
foreground message and in summary table.  I also ran q content copied=no on
all the volumes in the source storage pool and that came up empty.


Any ideas as to where to go next?  Tech support search shows nothing
relevant.


Thanks


Steve.


Steven Harris


TSM Admin, Sydney Australia


Re: TS3500 vs 3494 strengths and weaknesses.

2008-02-13 Thread Steven Harris
Wanda wrote (in part)

>
> OTOH, the LTO4 drives will require a new library.  The TS3500 library is
my
> favorite library out there; it is even more durable than the 3494, MUCH
> faster, and a cleaner interface than the 3494 (no category codes to deal
> with).
>

Hi Wanda,

I quite like the TS3500, but it does have its limitations. Yes, I'd have to
agree that the 3494 category code system is not intuitive to begin with, but
once you get your head around it it seems to work quite well, particularly
when operator intervention is spotty or irregular.  The TS3500 gives me
grief when the IO capacity is insufficient and it doesn't get immediate
operator attention.  It may well be better when ALMS is installed, but none
of my customers have felt the need to make the additional expenditure.  ALMS
should be standard with an advanced library, IMO.

Regards

Steve.

Steven Harris
TSM Admin, Sydney Australia


Re: Find Management Class and copy group info per filespace

2008-02-14 Thread Steven Harris
1. Use the web GUI: start a restore, navigate to a sample directory, and
scroll right to see the management class that each file is bound to.

2. Use the dsmc command on the client and run a q backup on a sample
directory.
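
The second method is just a query from the client command line; the drive
and path below are placeholders, and the "Mgmt Class" column in the output
shows the binding for each file:

```
dsmc query backup "D:\data\*"
```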

Regards

Steve

Steven Harris
TSM Admin,  Sydney Australia




From: Howard Coles <[EMAIL PROTECTED]>
Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>
To: ADSM-L@VM.MARIST.EDU
Date: 15/02/2008 09:00 AM
Subject: [ADSM-L] Find Management Class and copy group info per filespace






I have a question, as I've been challenged to provide this information.

I need a way to query TSM and find out which Management Class each file
space is bound to.  We have some domains that were setup with no less
than 5 Management Classes, each with copy groups that have vastly
differing retention policies.  For the Oracle, Exchange, and SQL server
backups this is easy, there's a dedicated node and only a few Domains
and MCs.  But for file spaces this gets more complicated.

So, if anyone knows of an easy (relatively speaking) SQL Query, or
series of SQL Queries they have run in the past to do this it would be
greatly appreciated.

See Ya'
Howard Coles Jr.
Sr. Systems Engineer
615-296-3416
John 3:16!


Weekend Fun! and ANR0259E

2008-02-17 Thread Steven Harris
Hi all

Critical hardware outage over the weekend (aren't they all?).  On failover
to the second site, TSM wouldn't come up:

080215-172046:  ANR7800I DSMSERV generated at 17:00:49 on Mar 15 2007.
080215-172046:
080215-172046:  Tivoli Storage Manager for AIX-RS/6000
080215-172046:  Version 5, Release 3, Level 5.0
080215-172046:
080215-172046:  Licensed Materials - Property of IBM
080215-172046:
080215-172046:  (C) Copyright IBM Corporation 1990, 2006.
080215-172046:  All rights reserved.
080215-172046:  U.S. Government Users Restricted Rights - Use, duplication or disclosure
080215-172046:  restricted by GSA ADP Schedule Contract with IBM Corporation.
080215-172046:
080215-172046:  ANR0900I Processing options file
/autsm11/local/etc/dsmserv.opt.
080215-172102:  ANR4726I The ICC support module has been loaded.
080215-172103:  ANR7821E Error reading from standard input.
080215-172103:  ANR0990I Server restart-recovery in progress.
080215-172104:  ANR0259E Unable to read complete restart/checkpoint
information from any
080215-172104:  database or recovery log volume.
080215-172104:  DSMULOG Terminating.

Now it turns out that someone had run some dsmfmt commands to format some
log copies, but these had never been added to the instance. I only figured
out what had happened by running ls -l on everything in the dsmserv.dsk
file.
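
The check amounts to the sketch below: walk every volume named in
dsmserv.dsk and flag the ones that are missing or were never used.  A sample
dsmserv.dsk is built here for illustration; on a real server you would point
DSK at the instance's own file (this assumes the 5.x format of one volume
path per line).

```shell
# Sketch: verify that every volume listed in dsmserv.dsk actually exists.
tmpd=$(mktemp -d)
DSK=$tmpd/dsmserv.dsk
printf '%s\n' "$tmpd/db01.dsm" "$tmpd/log01.dsm" "$tmpd/log02.dsm" > "$DSK"
: > "$tmpd/db01.dsm"            # only two of the three
: > "$tmpd/log01.dsm"           # volumes really exist
missing=$(while read -r vol; do
    [ -e "$vol" ] || printf '%s\n' "$vol"
done < "$DSK")
printf 'missing volumes: %s\n' "$missing"
```

`ls -l` on each path (as above) also shows suspiciously small, never-used
volumes.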

The fix was to delete the offending files from dsmserv.dsk and restart and
all was well.

But the ANR0259E message implies that it checks all the DB and log
volumes.  There were four good logs and four good DB volumes, and the "bad"
log volumes had been formatted but never used, so that should have been easy
to check.

Can anyone enlighten me as to what the REAL startup check is?

Regards

Steve


Steven Harris
TSM Admin, Sydney, Australia


Exchange monthly yearly

2008-03-09 Thread Steven Harris
Hi All

I have a multinational client (= worldwide standards and totally
inflexible) who wants monthly backups - kept for a year - and yearly
backups - kept for 10 years - of their exchange server.

This is the first time I've had this requirement for exchange and am
looking for some best practices.

I can see the COPY backups, and they might be useful to do a monthly, and
I'm considering an annual export of the appropriate monthly filespace
with filedata=backupactive to handle the yearly requirement.

Am I on the right track?

Thanks

Steve.

Steven Harris
TSM Admin
Sydney Australia


Re: backing up a client direct to tape

2008-03-12 Thread Steven Harris
Mario

Run a selective if you like, but there is no real need to.  Then run a
generate backupset on the node.  This will put all of the active data on to
a tape.

Print out the detail of the backupset (q backupset) and send it with the
tape to the customer.
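
The backupset step might look like the sketch below; the node name, prefix,
and device class are placeholders, and the parameters should be checked
against HELP GENERATE BACKUPSET on your server level:

```
generate backupset MARIO_NODE HANDOVER * devclass=LTO2CLASS retention=nolimit scratch=yes
query backupset MARIO_NODE format=detailed   /* details to send with the tape */
```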

Regards

Steve.

Steven Harris
TSM Admin, Sydney Australia




From: Mario Behring <[EMAIL PROTECTED]>
Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>
To: ADSM-L@VM.MARIST.EDU
Date: 13/03/2008 12:28 PM
Subject: [ADSM-L] backing up a client direct to tape






Hi all,

How can I perform a full backup or an archive on a client that is going to
be deactivated, and send the data directly to tape?  There is an IBM 3583 L18
LTO2 connected to the TSM Server, with 3 drives and 18 slots.

How can I tell TSM to use a particular tape - a scratch one that will be
sent to the customer?

Any help is appreciated.

Mario








Exchange Surgery - Help needed.

2008-03-14 Thread Steven Harris
Hi All

I have a customer with a small TSM setup on Windows; the DB is only 3GB.  TSM
is 5.3.4.

Last August the legal eagles decreed that all expiration was to stop, and
my predecessors did this.  We are now at breaking point with a tape library
that is about to burst, and finally someone (me) decided to do something
about it.

The majority of the data is exchange backups,  with a twice a week full,
four times a day incremental backup strategy.  Legal has finally decided
that

We need to keep all backups from the earliest we have to end of August.
From end of August to mid Feb we can delete all incrementals and all full
backups except the last one of every month.

Now, if I had space I would rename the node  move it all to its new backup


Exchange Surgery - Help needed.

2008-03-14 Thread Steven Harris
Sorry about that - take 2


Hi All

I have a customer with a small TSM setup on Windows; the DB is only 3GB.  TSM
is 5.3.4.

Last August the legal eagles decreed that all expiration was to stop, and
my predecessors did this.  We are now at breaking point with a tape library
that is about to burst, and finally someone (me) decided to do something
about it.

The majority of the data is exchange backups,  with a twice a week full,
four times a day incremental backup strategy.  Legal has finally decided
that

We need to keep all backups from the earliest we have to end of August.
From end of August to mid Feb we can delete all incrementals and all full
backups except the last one of every month.

I can see no way of doing this from the exchange client

using the dsmc client to connect I can see appropriately named files for
full  backups and incrementals when I use the "q backup {filespace}\
-subdir=y"

Attempting a delete backup of any sort or even a q backup on a fully
qualified name gives me a no files found message

tsm> q backup  {BLKVMSG001A\BLK1ASG4}\data\\BLK1ASG4A250\full
-inactive
ANS1092W No files matching search criteria were found

Is there any way around this? or can API -created files only be deleted by
the API?


Thanks

Steve

Steve Harris
TSM Admin, Sydney Australia


Re: Exchange Surgery - Help needed.

2008-03-14 Thread Steven Harris
Poor form to reply to my own post, but I couldn't sleep till I found a
solution.  It's nearly 4am now and I think I have one!

Someone suggested directly to me that I could select from the backups table
and run the unsupported "delete objects" command on each file.  I tested
that on a single file and it seemed to work, but I'm not feeling
comfortable with it for the rather large number of entries I have to
delete.

So how about this? It's like performing surgery with the space shuttle robot
arm, but it might just work.

Upgrade to TSM 5.5.  This is necessary anyway.
Set up a second TSM Server on the same box.
Export the exchange data server-to-server from the old server to the new
one using the fromdate/todate parameters to give me the slice of data that
I need.
Repeat as necessary with mergefiles on the import side.
Once a slice has been copied across I can change the retentions on the old
server and expire inventory to delete that slice.
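
The slice-by-slice export might look like this (the node name, target server
name, and dates are placeholders; FROMDATE/TODATE pick out the slice, and
MERGEFILESPACES handles the repeat runs):

```
export node EXCH_NODE filedata=backup fromdate=09/01/2007 todate=09/30/2007 toserver=NEWSRV mergefilespaces=yes
```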
Please feel free to throw in your two cents' worth if you can see a flaw in
this.

I'm starting to feel like Geordi La Forge in Star Trek TNG: "If we just
route the power from the core through to the phaser array..."

Steve.

__
>
> Sorry about that - take 2
>

> Hi All

> I have a customer with a small TSM Setup on windows , DB is only 3GB.
TSM is 5.3.4

> Last August the legal eagles decreed that all expiration was to stop, and
my predecessors did this.  We are now at breaking point with a tape library
that > is about to burst, and finally someone (me) decided to do something
about it.

> The majority of the data is exchange backups,  with a twice a week full,
four times a day incremental backup strategy.  Legal has finally decided
that

> We need to keep all backups from the earliest we have to end of August.
> From end of August to mid Feb we can delete all incrementals and all full
backups except the last one of every month.

> I can see no way of doing this from the Exchange client.

> Using the dsmc client to connect, I can see appropriately named files for
> full backups and incrementals when I use "q backup {filespace}\ -subdir=y".

> Attempting a delete backup of any sort, or even a q backup on a fully
> qualified name, gives me a "no files found" message:

> tsm> q backup  {BLKVMSG001A\BLK1ASG4}\data\\BLK1ASG4A250\full -inactive
> ANS1092W No files matching search criteria were found

> Is there any way around this? Or can API-created files only be deleted by
> the API?


>Thanks

>Steve

>Steve Harris
>TSM Admin, Sydney Australia


Re: What tells Q DRM?

2008-03-17 Thread Steven Harris
And as to what actually makes a tape change state?

Once an hour TSM runs an internal check.  Tapes change from pending to
scratch/empty when their pending period is past and the check is run.  You
have no control over it, it depends on the time of day that the TSM Server
was last brought up.

HTH

Steve

Steven Harris
TSM Admin, Sydney Australia





From: Wanda Prather <[EMAIL PROTECTED]>
To: ADSM-L@VM.MARIST.EDU
Date: 18/03/2008 03:37 PM
Subject: Re: [ADSM-L] What tells Q DRM?






Usually it takes both.

Expiration is what causes the %utilization and %reclaimable values for your
tapes to be updated.  If you aren't running reclamation, those values won't
change (unless you delete a filespace, that takes effect at once).

When space reclamation kicks in, it will process any tapes whose value has
dropped below your %reclaim threshold.  Once all the data has been moved
from the tape (for onsite tapes) or recopied to a new offsite tape (for
offsite tapes), the tape will go to EMPTY status.  EMPTY tapes show up to
DRM as VAULTRETRIEVE.

DB backup or DB snapshot tapes that have exceeded the age limit set by the
DRMDBBACKUPEXPIREDAYS parm (see Q DRMSTATUS) will also show up as
VAULTRETRIEVE.

So if you haven't been running expiration, you haven't been running
reclamation, your reclamation thresholds are too high, or your DB backup
tapes haven't been getting made and sent offsite, those could all affect
your tapes coming back.
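If it helps to check each of those in turn, the usual admin queries would be roughly as follows (the pool name is a placeholder, and I'm going from memory on the VOLUMES column names):

```
q drmstatus                 /* shows DRMDBBACKUPEXPIREDAYS            */
q process                   /* is expiration or reclamation running?  */
q stgpool f=d               /* reclamation thresholds per pool        */
select volume_name, pct_utilized, pct_reclaim from volumes -
  where stgpool_name='OFFSITE_POOL'
```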




On 3/17/08, Paul Dudley <[EMAIL PROTECTED]> wrote:
>
> What process is it that tells Q DRM which tapes are in "vault retrieve"
> status?
>
> Is it Expiration or Space Reclamation?
>
> The reason I am asking is that we have not had any tapes in "vault
> retrieve" status for a few days.
>
> Regards
>
> Paul Dudley
>
> Senior IT Systems Administrator
> ANL IT Operations Dept.
> ANL Container Line
> [EMAIL PROTECTED]
> 03-9257-0603
> http://www.anl.com.au
>


Re: Off-topic: AIX p520 question

2008-03-23 Thread Steven Harris
Bill

Newer AIX machines have default serial port settings of 19200/8/1/none
rather than the traditional 9600/8/1/none.   See if that makes a difference

Steve

Steven Harris
TSM Admin, Sydney Australia

Gradually forgetting all my AIX.






From: Bill Boyer <[EMAIL PROTECTED]>
To: ADSM-L@VM.MARIST.EDU
Date: 24/03/2008 12:39 AM
Subject: [ADSM-L] Off-topic: AIX p520 question






Kinda off topic... it'll be my TSM server!



Just received a new Power6 p520, connected to the serial0 console port with
my Hyperterm and turned the box on. I see all the startup messages and when
AIX is finally up... nothing. It does not give me a logon prompt. My AIX guru
is away on medical leave. So I reloaded AIX 6.1, went all the way through the
install and initial configuration wizard, assigning an IP address to the
Ethernet port. Only problem is... they gave me the wrong subnet mask. So now
I can't get to it over IP and the serial connection won't give me a logon
prompt.



Any ideas how I can get a logon prompt over the serial so I can fix this
mess?? I've used the serial port on other pServers and it was usually a
matter of finding the right speed for the connection. But I can watch the
whole boot sequence and then nothing!!



I was hoping to be backing up to this TSM server by Tuesday. What's a
weekend anyway, but 2 more working days until Monday?



Any help will be appreciated. I don't want to go through the AIX install
again just to be able to set an IP address during the initial install
wizard.



Bill Boyer


TDP for Oracle upgrade query

2008-03-27 Thread Steven Harris
Hi all

Upgrading TDP for Oracle on AIX (64 bit) from 5.3.1 to 5.4.1.  AIX is 5.2
TL10, Oracle is 10g2, TSM API is already at 5.4.1.2

The instructions in the manual are, as usual, for an initial install rather
than an upgrade.  During an initial install, all the oracle instances need
to be taken down as the libobk.a  module has to be symlinked to
/usr/lib/libobk.a which is in turn symlinked to the same file in the TDP
directory.

For an upgrade, I'm thinking that the module should change transparently
underneath the running oracle instances.

1. Is it possible to upgrade with Oracle running?
2. Is it possible to refresh the updated modules in the running Oracle
instance?  The slibclean command comes to mind as a means of doing this.
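On question 2, a hedged sketch of what I mean on the AIX side (illustrative only; I haven't verified this against a running instance):

```
# Flush unused shared libraries from the AIX kernel cache so the next
# load of libobk.a picks up the newly linked module (AIX only).
slibclean

# Check whether the old module is still loaded anywhere; genkld lists
# the shared objects currently in the kernel's loaded-library list.
genkld | grep -i libobk
```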

I am trying to avoid any outage, but if one must be done I'd prefer to do
the install, bounce a small instance to prove it works, then bounce the
other Oracle instances on the box one at a time.

How do you normally do this?

Regards

Steve.

Steven Harris
Tivoli Storage Manager SME
Backup & Recovery Team
Storage Services Group
Cumberland Forest

Phone:
IBM Internal :70-75130  External:02 9407 5130
Mobile: 0422 932 065


Re: ***SPAM*** [ADSM-L] AIX/Linux Lpar, Vio Server and Lanfree

2008-03-28 Thread Steven Harris
Francisco,

How about installing a small TSM server on one of your LPARs?  It would be
a library management client to your main TSM server and back up itself and
the other LPARs across the virtual Ethernet.  The only local disk you would
need is for the TSM DB and log; backups could go straight to tape just as
if it were a storage agent.

HTH

Steve

Steve Harris
TSM Admin
Sydney Australia


   
From: Remco Post <[EMAIL PROTECTED]>
To: ADSM-L@VM.MARIST.EDU
Date: 29/03/2008 04:15 AM
Subject: Re: [ADSM-L] ***SPAM*** [ADSM-L] AIX/Linux Lpar, Vio Server and Lanfree
   
   




Francisco Molero wrote:
> Hello,
>
> We want to create several partitions controlled by an AIX/Linux VIO
> server.  I am interested in running LAN-free backups and I am looking
> for info about that.
>
> Questions:
>
> If I want to run LAN-free backup on an LPAR controlled by a VIO
> server, do I need to define a dedicated HBA for this LPAR?
>
> Can I share an HBA between two LPARs and run LAN-free? Is it supported?
>

As far as I know, no. VIO servers can only emulate/share disks, not tape
devices, so you'll need dedicated HBAs for the LPARs that require tape
access.

> Could anybody tell me where can I find docs about AIX/Linux/Lpar/Vio
> and TSM lanfree ?
>
> Regards,
>
> Fran
>
>
>
>


--

Met vriendelijke groeten,

Remco Post


Re: Top 20 (or so) misunderstood things about TSM - send me my tape

2008-04-02 Thread Steven Harris
On a related note, there must be some sort of audit requirement floating
around out there.  Every time someone retires a server that has had
financial data on it, I'm told "we've made the final backup, send me the
tape" and they *won't* take no for an answer.  Most often, by that point
the server has been irretrievably damaged, so client-side manipulation is
no longer possible.

For BA client stuff it's fairly easy: just generate a backupset with the
active data on it and send it to them.  For TDPs however it is more
difficult, particularly the TDP for SAP as it keeps all its data as
archives, and normally the archives aren't set to expire at all, so you get
to keep *every* full backup that was cataloged at the time the server was
retired, and all the incrementals in between, and this can be a very large
amount of data.

One strategy is to send them a random tape that they can pretend has their
data on it, because the likelihood of a restore from that is minimal, but
that's not my style.  On 5.4 I have been able to export archives with a
fromdate and time that is just before the last full backup.

What about other TDPs?  MSSQL, Exchange, Domino and Oracle use backups but
backupsets only work for TDPs under TSM Express, don't they? Export active
should work for those.

Regards

Steve

Steven Harris
TSM Admin, Sydney Australia



From: "Clark, Margaret" <[EMAIL PROTECTED]>
To: ADSM-L@VM.MARIST.EDU
Date: 03/04/2008 11:06 AM
Subject: Re: [ADSM-L] Top 20 (or so) misunderstood things about TSM






My favorite is the customer who believes their every backup is on their own
private tape, and will we just send it to them?
- Margaret Clark, San Diego Data Processing

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of
Curtis Preston
Sent: Monday, March 31, 2008 2:49 PM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] Top 20 (or so) misunderstood things about TSM

Hey there, folks!  I'm working very hard on my next book, which will have
some product-specific information in it.  I'm covering multiple products,
so I won't go TOO deep on individual products, but I'd like to do my best
to cover misunderstood or frequently asked topics for each major product.
I figured that no one would know better than this list which topics people
tend to get confused.

What topics do you think should go on that list?  (I've got my personal
preferences, but I don't want to prejudice your thoughts.)  What are the
top 5/20/30 things about TSM that you think people get wrong?

TIA

---
W. Curtis Preston
Backup Blog @ www.backupcentral.com
VP Data Protection, GlassHouse Technologies


Multiple Backup Streams with exchange.

2008-04-03 Thread Steven Harris
I continue to find new corners of this product to explore - or maybe its
grope blindly in the dark !

I have a customer with an exchange cluster TDP 5.3.3.1 backing up to a Win
2k3 server running tsm 5.3.4.  Storage agents are installed on both sides
of the cluster at 5.3.4.

Exchange backups work fine, but the mail store has grown and they are
spilling into the online day - there are 4 drives available, but backups
only use one. Maxnummp for the node is set to 4.

I've been through the TDP for Exchange and Storage Agent manuals and can
see nothing that addresses a number of parallel streams.  I've tried
searching but obviously haven't come up with the right set of keywords.

Can someone point me in the right direction?

Thanks

Steve

Steven Harris
TSM Admin, Sydney Australia.


Re: Re-Creating a ClientOption Set

2008-04-11 Thread Steven Harris
I've struck this before, Charles.  I think it's a referential integrity
thing.  You get a similar issue when you move a node between domains and
the schedule associations are lost.

The way to do this is to DEL CLIENTOPT whatever INCLEXCL SEQ=ALL  and then
run DEF CLIENTOPT statements to recreate them as you wish. You could fiddle
with deleting and defining individual line numbers but I find that
error-prone and unsatisfactory.  I write macros to do this and implement a
backout for all my significant changes to option sets.  TSMMANAGER has a
Client option set editor that is good for small changes but you can
export/import using a file as well, and I use the export as the basis of my
macros to satisfy the change people.
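Concretely, the macro I mean looks something like this; the option set name and the include/exclude patterns are made up for illustration:

```
/* Rebuild the INCLEXCL entries of option set INTEL from scratch.
   Capture the existing entries first (q cloptset INTEL) for backout. */
delete clientopt INTEL inclexcl seq=all
define clientopt INTEL inclexcl "exclude c:\windows\...\*" seq=10
define clientopt INTEL inclexcl "include c:\data\...\* STD_MGMT" seq=20
```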

Regards

Steve

Steven Harris
TSM Admin, Sydney Australia






From: "Hart, Charles A" <[EMAIL PROTECTED]>
To: ADSM-L@VM.MARIST.EDU
Date: 12/04/2008 03:01 AM
Subject: [ADSM-L] Re-Creating a ClientOption Set






We are doing server-side incl/excls using client option sets.  There were
inconsistencies from TSM instance to instance, so to make them consistent
we renamed the existing "Intel" CLOPT to "Intel.Save", deleted the
original "Intel" CLOPT, and then re-created the "Intel" CLOPT on the
instances, and noticed that the previous client associations to the
original Intel CLOPT disappeared.  Is it possible that the renaming,
deleting and re-creating of a CLOPT will disassociate nodes from that
CLOPT, even though it was re-created with the original CLOPT name?  I know
that the clients were associated with the Intel CLOPT, but now they are
not...

Always finding new things about TSM even after 7yrs!

Thx!




Re: TSM server scaling/sizing for lots (>20000) nodes

2008-04-28 Thread Steven Harris
Ah Zoltan, what fun!

How about distributed TSM servers running Linux with bags of cheap slow
disk for the primary pools?  Copypools back up to the main TSM server
using server-to-server over TCP, SCSI over IP to the tape library, or
where feasible dedicated fibre, depending on the circumstances of each
distributed server.

The distributed servers are somewhat standardized and you use the
configuration manager to keep them all in sync.  Multiple points of
failure, sure, but each has relatively minor impact.  Most backups and
restores are local and have only local performance impact.  Copypool and
DB backups can be trickle-fed all day, reducing impact on the campus WAN.

I'm looking forward to the final design.

Regards

Steve

Steven Harris
TSM Admin, Sydney Australia





From: Zoltan Forray/AC/VCU <[EMAIL PROTECTED]>
To: ADSM-L@VM.MARIST.EDU
Date: 29/04/2008 04:03 AM
Subject: [ADSM-L] TSM server scaling/sizing for lots (>20000) nodes






Crazy project time...

I have been asked to research the
feasibility/probability/scalability/sizing of TSM to possibly handle
backing up thousands (actually close to 20,000 or more) of desktops!

Anybody have experience doing very, very large scale backups of so many
nodes?   Don't expect much in storage occupancy since we would exclude
things like MP3, etc.  It would just be the # of files/database size
(sounds like this would have to wait for V6 since DB2 should scale much
better).

Does anyone run their TSM servers open?   Trying to
administer/register/monitor this many clients would be a nightmare.

They are talking about budgeting close to $1M, which would include more
bodies as well as the equipment/software!


Re: ANS9999E Win32 RC 1450

2008-04-30 Thread Steven Harris
Hi All

I have a new client with an enormous file server, running Windows 2003.  We
have had a string of problems, most have been fixed by upgrade to TSM
client 5.5.0.4 and OS upgrade to SP2.  We are using
MEMORYEFFICIENTBACKUP=DISKCACHE  but still get

ANS9999E Win32 RC 1450, as noted in
http://www-1.ibm.com/support/docview.wss?uid=swg21251903

Pointing at the faulty OS or a poor implementation doesn't solve the
problem.  This is the client's main file server and it hasn't yet had a
clean backup.

Has anyone else seen this issue, and developed a workaround that you'd
like to share?  All I can think of at this point is a series of backups of
various subtrees.

Thanks

Steve.

Steven Harris
TSM Admin, Sydney Australia


Fantasy TSM

2008-05-03 Thread Steven Harris

More years ago than I want to remember (ok ok mid 80's to mid 90's) I
was a CICS sysprog in an MVS mainframe shop.  Now somewhere about CICS'
21st birthday the developers decided that the old assembler code had
become unmaintainable and so to improve reliability they re-specified it
using something called Z language (from memory... and strange that it
was never heard of again if it was so good), and recoded it from
scratch.  This then became a reliable platform upon which enhancements
could be built.

TSM is about that old now.  I assume that  it was translated from
assembler to C about the time it began to be supported on multiple
platforms, and now with V6 and DB2 there is a database back end that
will allow significant DB changes with little  development cost.

So, lets assume that TSM is to be redeveloped - as-is in V 7.1 but
providing significant improvements in 7.2.

While still retaining the basic flavour of TSM, - progressive
incremental backups, storage pools, copypools - what features and
capabilities would you like to see?

Me?

On the server I'd like :
Migrate Inactive on primary storage pools.
A scratch pool concept so that TSM never lost track of a tape or its
last use, and its tape stats.
Better tape library integration - maybe an abstract library that is
mapped onto a real library
Instant creation of weekly/monthly/yearly incremental backups from the
current state of a node, by means of a logical operation on the DB, with
no new copying required.
Intelligent application of sql in commands eg cancel sess where
session_number in (select session_number from sessions where
session_type='NORMAL')
More intelligent script and macro languages.  I'd really like to see
something like Ruby embedded for this purpose.

On the Client
Ability to manipulate API objects from a command/line and gui interface
Ability to backup/restore via stdin/stdout
Subfile backup to be extended to work on huge files.
API interfaces to perl/python/TCL and supplied SWIG files for binding
other languages
Fully functional Admin API with the same interfaces as the backup API
Encouragement from IBM for third parties to develop backup interfaces
for niche/obscure databases such as Reality-X, Cache, Firebird and other
applications not well suited to file level backup eg Cyrus mailserver.
Maybe a directory of user contribs in the client distribution.
API to permit intelligent client/or server level incrementals. eg RMAN
sends entire DB, TSM sends and stores only the changed blocks.

What would you like to see?

Regards

Steve

Steven Harris
TSM Admin, Sydney, Australia


Re: two tapedrive in one library

2008-05-06 Thread Steven Harris
Hello Yudong

You need to create two libraries, both of type MANUAL, with one drive in
each.

Regards

Steve (Liu Huan to my Chinese workmates)  Harris

TSM admin, Sydney Australia





From: "Yu Dong Dong (ITD)" <[EMAIL PROTECTED]>
To: ADSM-L@VM.MARIST.EDU
Date: 07/05/2008 01:04 PM
Subject: [ADSM-L] two tapedrive in one library






Hi,all:

 I have two old HP tape drives: one is a DAT40, the other an ULTRIUM1.  I
defined the two drives in one library.  When I want to label a DAT40 tape
I enter "label libvolume library tsm1", but TSM labeled an ULTRIUM1 tape
in my ULTRIUM tape drive.  I am very frustrated.  How do I fix it?

 Thanks for your help~!



DONGDONG YU


Re: Tape Reclaim Question

2008-05-15 Thread Steven Harris
Richard

That's an interesting explanation, but it does not seem to reflect
reality.  I can remember a case of a tape that had a small amount of data
added to it every day, so that by the time we got to the end of the tape
it was mostly expired; as soon as the tape went to FULL status it got
reclaimed, but not before.

Now this was a while ago and I can't be certain what version of TSM it was
(probably 5.2), but I do remember it happening that way.  I wonder if this
changed in 5.3?
Steve

Steven Harris
TSM Admin, Sydney Australia






From: Richard Sims <[EMAIL PROTECTED]>
To: ADSM-L@VM.MARIST.EDU
Date: 16/05/2008 10:34 AM
Subject: Re: [ADSM-L] Tape Reclaim Question






On May 15, 2008, at 6:20 PM, Wanda Prather wrote:

> No.
> Filling tapes aren't eligible for reclamation, unless they are marked
> OFFSITE.

Actually, volumes are eligible for reclamation regardless of siteness,
or Full/Filling state.  Technote 1202254 reinforces this.  If you set
the threshold low enough, Filling type volumes will also get
selected.  Equal treatment under the law.

Richard Sims


Re: Long Term Data Retention - off topic

2008-05-16 Thread Steven Harris
Hi David,

A few years ago I was working for a state health department that had
similar sorts of retention issues and was about to retire their main
patient admin system as they moved to a new one.   In this case, even
keeping existing data was not sufficient because different rules applied to
different data.  Some of it was supposed to be kept literally forever so
that historians could get at it, some was required for 80 years so that
epidemiological studies could be made, and some had retention lengths that
depended on the life of the patient.  In opposition to that, privacy
legislation required that some data be deleted when there was no longer an
operational need for it.

After convincing them that TSM was not an appropriate vehicle, using a
reductio ad absurdum argument, I researched a little further.

The best method for long-term data retention is probably flat XML files.
These are well understood and self-describing, require no specialist
software to read, yet can be searched by machine when necessary.  There
are a number of specialized XML dialects developed for different purposes,
so a complete re-invention of the wheel is not necessary.

I did not pursue this to completion.  It turned out that there was a
section in the organization whose primary job was data retention: mostly
paper-based, but recognizably moving into data (just think of all those
Word documents and spreadsheets that are also subject to legal retention
requirements), and the problem was passed to them.

It did occur to me that there is a business opportunity for consulting on
such problems.  Just understanding the web of retention standards, which
tend to refer to other standards nested three or four levels deep, is a
huge job; applying those standards to the data at hand is another, in
order to write some code to produce the final XML.  It would however take
the sort of analytical accountant/actuary mindset to do this successfully,
and that is not my style.

I hope that has given you some insight

Regards

Steve

Steven Harris
TSM Admin, Sydney Australia




From: David Longo <[EMAIL PROTECTED]>
To: ADSM-L@VM.MARIST.EDU
Date: 17/05/2008 01:35 AM
Subject: [ADSM-L] Long Term Data Retention






Wanted to get some thoughts on what people are doing for long-term data
retention, specifically on obsolete applications.

Say we have an NT 4.0 system that is no longer used.  The business owner
says we need to keep it for 25 years.  I know that's not
practical/possible for a number of reasons.  Even if we VMware it, will
they support NT 4.0 for 25 years?  (Will ANYBODY support Windows 2008 in
25 years?)

I know even if they take a DB dump and I Archive it for 25 years, if
we retrieve the file 20 years from now, who can decipher it?  There
are several systems here that people are giving hints that they want
to do this.

I have hinted that they need to take whatever data and dump it
to a text or pdf file and then I archive that.  I realize that this may
not be that simple for some applications as probably involves
more than a simple data dump or whatever.  Plus some applications
are spread across multiple servers.

So, before we have big meeting and I push the text or pdf file
idea, what are people doing for retention of data on obsolete
servers/applications?

Thanks,
David Longo





Schedmode Polling behaviour

2008-05-22 Thread Steven Harris
I've seen some behaviour with schedmode polling that I don't understand
and wonder if any of you can shed some light on it.

One of my clients schedules a restore every day of 1 file from each of two
nodes to the TSM Server.  This is something to do with SOX compliance and
they are fanatical about it.

Server is 5.1 something on windows2000, client is 5.1.0.0  -  Yes, I know
this went out of support in 2005, and if I could walk away from this I
certainly would.

Originally the first schedule was set to run at 14:15 in a five-minute
window, and the second at 14:40 in a five-minute window.  Most of the
other data on this server is SAP backups, so the files are large.  The
tape library is as old as the software, with two LTO1 drives; as a result,
if housekeeping was running when the first schedule ran, it would wait for
a tape and the second schedule would miss.

Ok.  So after this happened again recently, I increased the window to 4
hours for each restore.  Now there was a panic because the restores were
pending.

The randomize percentage for this server is 50% and the client in question
is in polling mode. so the first restore should start somewhere between
14:15 and 16:15
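As a sanity check on that expectation, a little Python sketch of the randomization as I read the manual (this is my interpretation of the documented behaviour, not actual server code): with RANDOMIZE 50 and a 4-hour window opening at 14:15, every start should land by 16:15.

```python
import random

def randomized_start(window_open_min, duration_min, randomize_pct):
    """Pick a start within the first randomize_pct of the window,
    per my reading of SET RANDOMIZE (an interpretation, not TSM code)."""
    span = duration_min * randomize_pct / 100.0
    return window_open_min + random.uniform(0.0, span)

# 14:15 = 855 minutes after midnight, 240-minute window, RANDOMIZE 50:
starts = [randomized_start(855, 240, 50) for _ in range(1000)]
print(all(855 <= s <= 975 for s in starts))  # expect True: 14:15-16:15
```

On that reading, a 16:23 start for the first schedule and a 17:18 start for the second are both outside the allowed span, which is why the observed behaviour puzzles me.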

Looking at the schedule log we see this is so, with the first schedule set
to start at  15:34.  Because of the panic about the schedules being in
pending status, the scheduler service was restarted and after the restart
(at 15:13) we see a randomized start time of 16:23.  This is outside the
time frame that I expect.  Now, the first restore runs at 16:23 for about
90 seconds and the second restore is scheduled.  I would expect it to run
almost straight away as it should run sometime between 14:40 and 16:40, but
it is not scheduled until 17:18.

Can anyone explain this behaviour? I'm not looking for a fix, just an
explanation as to how the schedule can be set to run outside the first half
of the window as this does not gel with the explanation in the manuals.

Regards

Steve
Steven Harris
TSM Admin, Sydney Australia


Re: Schedmode Polling behaviour

2008-05-23 Thread Steven Harris

No, I meant restore.  These are scheduled restores.

Shawn Drew wrote:

If you want to make sure the schedules don't interfere with each other,
you should probably make 2 node names and 2 schedule services so there is
no conflict.

Also, you meant "backup" instead of "restore", correct?  It was a little
confusing reading this if that isn't the case.  Assuming so... This
doesn't look like standard behavior.  Did this happen just the once? or
something repetitive? Also, when you said "the second restore is
scheduled"  did that mean the scheduler was restarted again after the
first backup?

It looks like it is paying attention to the schedule duration, but
ignoring the start time.  Looks like it is using the current time as the
start time while the schedule is in a "Pending" state.  It then chose a
point 2 hours from the current time to choose the random time.

I'd normally say to upgrade, but I suppose that's not an option.

Regards,
Shawn





From: [EMAIL PROTECTED]
Sent by: ADSM-L@VM.MARIST.EDU
Date: 05/22/2008 09:21 PM
To: ADSM-L
Subject: [ADSM-L] Schedmode Polling behaviour





I've seen some behaviour with schedmode polling that I don't understand
and wonder if any of you can shed some light on it.

One of my clients schedules a restore every day of 1 file from each of two
nodes to the TSM Server.  This is something to do with SOX compliance and
they are fanatical about it.

Server is 5.1 something on windows2000, client is 5.1.0.0  -  Yes, I know
this went out of support in 2005, and if I could walk away from this I
certainly would.

Originally the first schedule was set to run at 14:15 in a five-minute
window, and the second at 14:40 in a five-minute window.  Most of the
other data on this server is SAP backups, so the files are large.  The
tape library is as old as the software, with two LTO1 drives; as a result,
if housekeeping was running when the first schedule ran, it would wait for
a tape and the second schedule would miss.

Ok.  So after this happened again recently, I increased the window to 4
hours for each restore.  Now there was a panic because the restores were
pending.

The randomize percentage for this server is 50% and the client in question
is in polling mode. so the first restore should start somewhere between
14:15 and 16:15

Looking at the schedule log we see this is so, with the first schedule set
to start at  15:34.  Because of the panic about the schedules being in
pending status, the scheduler service was restarted and after the restart
(at 15:13) we see a randomized start time of 16:23.  This is outside the
time frame that I expect.  Now, the first restore runs at 16:23 for about
90 seconds and the second restore is scheduled.  I would expect it to run
almost straight away as it should run sometime between 14:40 and 16:40,
but
it is not scheduled until 17:18.

Can anyone explain this behaviour? I'm not looking for a fix, just an
explanation as to how the schedule can be set to run outside the first
half
of the window as this does not gel with the explanation in the manuals.

Regards

Steve
Steven Harris
TSM Admin, Sydney Australia







Re: TDPsql (dsmc)

2008-06-02 Thread Steven Harris
Avy

We were all at the bottom of the learning curve once, most of us multiple
times as we move into new areas of our careers, and sometimes we get stuck
and need to ask questions.  But your questions usually have quite obvious
answers that are easily found by reference to the manuals.

Please, think for a moment about the problems that you have, search the
copious documentation that is available, and also trawl the internet
resources: the TSM wiki, Richard Sims' QuickFacts, and a Google search.
If you have done that and are still perplexed, by all means ask a question
here, but please try yourself first.  You will learn more and understand
more that way.

Respectfully

Steve.

Steven Harris
TSM Admin, Sydney Australia

"ADSM: Dist Stor Manager"  wrote on 03/06/2008
06:24:02 AM:

> Thank you all for helping me out. Thanks again.
>
> Avy Wong
> Business Continuity Administrator
> Mohegan Sun
> 1 Mohegan Sun Blvd
> Uncasville, CT 06382
> (860)862-8164
> (cell) (860)961-6976
>
>
>
>
>  From: Anil Maurya <[EMAIL PROTECTED]FI-AVENTIS.COM>
>  Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED].EDU>
>  To: ADSM-L@VM.MARIST.EDU
>  Date: 06/02/2008 03:01 PM
>  Subject: Re: [ADSM-L] TDPsql (dsmc)
> There is a command line ( tdpsqlc.exe ) or a TDP GUI which will give
> your answer.
> Good luck
>
>
> -Original Message-
> From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of
> Avy Wong
> Sent: Monday, June 02, 2008 2:38 PM
> To: ADSM-L@VM.MARIST.EDU
> Subject: [ADSM-L] TDPsql (dsmc)
>
> Hello,
>   I understand if I want to find out what is being backed up on the
> OS side ..
> C:\Program Files\Tivoli\TSM\baclient>dsmc query backup *
>
> what command to run if I want to see what is being backed up on the
> TDPSQL side?
>
> Thank you for your help.
>
> Avy Wong
> Business Continuity Administrator
> Mohegan Sun
> 1 Mohegan Sun Blvd
> Uncasville, CT 06382
> (860)862-8164
> (cell) (860)961-6976
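
For the record, the TDP-side equivalent of the dsmc query is along these
lines (quoted from memory, so check the Data Protection for SQL manual;
the * means all databases):

```
tdpsqlc query tsm *
```

Run from the TDP for SQL install directory, this lists the SQL backup
objects stored on the TSM server.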


Re: Backup retention question

2008-06-05 Thread Steven Harris
Hi Bruce

IMO the best way is

  Versions Data Exists: Nolimit
  Versions Data Deleted: Nolimit
  Retain Extra Versions: 7
  Retain Only Version: 7

This guarantees that the data will be kept 7 days.  With your setup, if
someone runs a selective, or an extra incremental in the middle of the day
on a file that has changed, the last version will be pushed off.  Also, as
a bonus, expiration processing is more efficient where the versions
parameters are not in play.
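
In copy-group terms the settings above come out as something like this
(the domain, policy set and management class names are placeholders; as I
recall, STANDARD is the only copy group name the server accepts):

```
define copygroup addom standard adclass standard type=backup verexists=nolimit verdeleted=nolimit retextra=7 retonly=7
activate policyset addom standard
```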

HTH

Steve

Steven Harris
TSM Admin, Sydney Australia




 From: "Dollens, Bruce" <[EMAIL PROTECTED]S.COM>
 Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED].EDU>
 To: ADSM-L@VM.MARIST.EDU
 Date: 06/06/2008 06:10 AM
 Subject: [ADSM-L] Backup retention question


We are wanting to setup a new policy domain and management class for our
Active Directory backups.  We want to keep only 7 days of backups. If I
wanted to accomplish this would the management class look like this? (I
have a difficult time with this part of it for some reason).

   Versions Data Exists: 7
  Versions Data Deleted: 7
  Retain Extra Versions: 7
  Retain Only Version: 7

Thanks!


Re: Backup retention question

2008-06-09 Thread Steven Harris
No Tim

It's number of versions or number of days retained, whichever is *shorter*.

Regards

Steve.
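
To make the "whichever comes first" point concrete, here is a toy model of
the rules (my reading of the copy-group behaviour, not IBM's code; 1 is
the newest version):

```python
def survives(position_newest_first, days_inactive, ver_exists, ret_extra):
    """True if an inactive backup version is still retained.

    position_newest_first: 1 = newest version, 2 = next oldest, ...
    ver_exists: VEREXISTS (None models NOLIMIT).
    ret_extra: RETEXTRA in days.
    """
    within_versions = ver_exists is None or position_newest_first <= ver_exists
    within_days = days_inactive <= ret_extra
    return within_versions and within_days

# 8th-newest version, inactive for only 1 day: expired with VEREXISTS=7 ...
print(survives(8, 1, 7, 7))     # → False
# ... but kept the full 7 days with VEREXISTS=NOLIMIT:
print(survives(8, 1, None, 7))  # → True
```

Which is why the Nolimit/Nolimit/7/7 combination guarantees the 7 days and
the 7/7/7/7 combination does not.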






 From: Tim Brown <[EMAIL PROTECTED]M>
 Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED].EDU>
 To: ADSM-L@VM.MARIST.EDU
 Date: 10/06/2008 08:29 AM
 Subject: Re: [ADSM-L] Backup retention question

You are then in essence keeping deleted versions of files forever right ?
With "Versions Data Deleted: Nolimit"

Tim

- Original Message -
From: "Steven Harris" <[EMAIL PROTECTED]>
To: 
Sent: Thursday, June 05, 2008 8:13 PM
Subject: Re: Backup retention question


> Hi Bruce
>
> IMO the best way is
>
>  Versions Data Exists: Nolimit
>  Versions Data Deleted: Nolimit
>  Retain Extra Versions: 7
>  Retain Only Version: 7
>
> This guarantees that the data will be kept 7 days.  With your setup, if
> someone runs a selective, or an extra incremental in the middle of the
day
> on a file that has changed, the last version will be pushed off.  Also,
as
> a bonus, expiration processing is more efficient where the versions
> parameters are not in play.
>
> HTH
>
> Steve
>
> Steven Harris
> TSM Admin, Sydney Australia
>
>
>
>
> "Dollens, Bruce"
> <[EMAIL PROTECTED]
> S.COM> To
> Sent by: "ADSM:   ADSM-L@VM.MARIST.EDU
> Dist Stor  cc
> Manager"
> <[EMAIL PROTECTED] Subject
> .EDU> [ADSM-L] Backup retention question
>
>
> 06/06/2008 06:10
> AM
>
>
> Please respond to
> "ADSM: Dist Stor
> Manager"
> <[EMAIL PROTECTED]
>   .EDU>
>
>
>
>
>
>
> We are wanting to setup a new policy domain and management class for our
> Active Directory backups.  We want to keep only 7 days of backups. If I
> wanted to accomplish this would the management class look like this? (I
> have a difficult time with this part of it for some reason).
>
>   Versions Data Exists: 7
>  Versions Data Deleted: 7
>  Retain Extra Versions: 7
>  Retain Only Version: 7
>
> Thanks!
>


Re: Backup retention question

2008-06-10 Thread Steven Harris

Yes, that was the spec that Bruce Dollens originally posited.

Tim Brown wrote:

Steve

  I responded wrong I meant

  "You are deleting all backups of a file 7 days after being deleted
  from the filesystem"

  What if a user deletes a file and needs it back after 7 days.

Tim

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] Behalf Of
Steven Harris
Sent: Monday, June 09, 2008 8:30 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: Backup retention question


No Tim

Its number of versions or number of days retained, whichever is
*shorter*.

Regards

Steve.






 From: Tim Brown <[EMAIL PROTECTED]M>
 Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED].EDU>
 To: ADSM-L@VM.MARIST.EDU
 Date: 10/06/2008 08:29 AM
 Subject: Re: [ADSM-L] Backup retention question

You are then in essence keeping deleted versions of files forever right?
With "Versions Data Deleted: Nolimit"

Tim

- Original Message -
From: "Steven Harris" <[EMAIL PROTECTED]>
To: 
Sent: Thursday, June 05, 2008 8:13 PM
Subject: Re: Backup retention question




Hi Bruce

IMO the best way is

 Versions Data Exists: Nolimit
 Versions Data Deleted: Nolimit
 Retain Extra Versions: 7
 Retain Only Version: 7

This guarantees that the data will be kept 7 days.  With your setup, if
someone runs a selective, or an extra incremental in the middle of the
day on a file that has changed, the last version will be pushed off.
Also, as a bonus, expiration processing is more efficient where the
versions parameters are not in play.

HTH

Steve

Steven Harris
TSM Admin, Sydney Australia




From: "Dollens, Bruce" <[EMAIL PROTECTED]S.COM>
Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED].EDU>
To: ADSM-L@VM.MARIST.EDU
Date: 06/06/2008 06:10 AM
Subject: [ADSM-L] Backup retention question

We are wanting to setup a new policy domain and management class for our
Active Directory backups.  We want to keep only 7 days of backups.  If I
wanted to accomplish this would the management class look like this? (I
have a difficult time with this part of it for some reason).

  Versions Data Exists: 7
 Versions Data Deleted: 7
 Retain Extra Versions: 7
 Retain Only Version: 7

Thanks!





No virus found in this incoming message.
Checked by AVG.
Version: 8.0.100 / Virus Database: 270.2.0/1493 - Release Date: 6/9/2008 5:25 PM



Re: Backup retention question

2008-06-10 Thread Steven Harris

Sure are.

Tim Brown wrote:

Steve

  What about

  "What if a user deletes a file and needs it back after 7 days"

   are they out of luck ?

Tim
-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] Behalf Of
Steven Harris
Sent: Tuesday, June 10, 2008 7:43 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: Backup retention question


Yes, that was the spec that Bruce Dollens originally posited.

Tim Brown wrote:


Steve

  I responded wrong I meant

  "You are deleting all backups of a file 7 days after being deleted
  from the filesystem"

  What if a user deletes a file and needs it back after 7 days.

Tim

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] Behalf Of
Steven Harris
Sent: Monday, June 09, 2008 8:30 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: Backup retention question


No Tim

Its number of versions or number of days retained, whichever is
*shorter*.

Regards

Steve.






 From: Tim Brown <[EMAIL PROTECTED]M>
 Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED].EDU>
 To: ADSM-L@VM.MARIST.EDU
 Date: 10/06/2008 08:29 AM
 Subject: Re: [ADSM-L] Backup retention question

You are then in essence keeping deleted versions of files forever right?
With "Versions Data Deleted: Nolimit"

Tim

- Original Message -
From: "Steven Harris" <[EMAIL PROTECTED]>
To: 
Sent: Thursday, June 05, 2008 8:13 PM
Subject: Re: Backup retention question





Hi Bruce

IMO the best way is

 Versions Data Exists: Nolimit
 Versions Data Deleted: Nolimit
 Retain Extra Versions: 7
 Retain Only Version: 7

This guarantees that the data will be kept 7 days.  With your setup, if
someone runs a selective, or an extra incremental in the middle of the
day on a file that has changed, the last version will be pushed off.
Also, as a bonus, expiration processing is more efficient where the
versions parameters are not in play.

HTH

Steve

Steven Harris
TSM Admin, Sydney Australia




From: "Dollens, Bruce" <[EMAIL PROTECTED]S.COM>
Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED].EDU>
To: ADSM-L@VM.MARIST.EDU
Date: 06/06/2008 06:10 AM
Subject: [ADSM-L] Backup retention question

We are wanting to setup a new policy domain and management class for our
Active Directory backups.  We want to keep only 7 days of backups.  If I
wanted to accomplish this would the management class look like this? (I
have a difficult time with this part of it for some reason).

  Versions Data Exists: 7
 Versions Data Deleted: 7
 Retain Extra Versions: 7
 Retain Only Version: 7

Thanks!










Re: Bad volume rule of thumb

2008-06-11 Thread Steven Harris

Hi Nicholas

You don't say what your tape technology is.  I used to admin a site that
had LTO1 and LTO3 drives.  With the LTO1s I would run move data on any
tape in error, and if that worked record the volser of the tape.  After
three errors, or if the move data didn't work I would discard the tape.
For the LTO3s I discovered that any error tended to be permanent.

I currently work a lot with LTO2s and they fail similarly to LTO1s, but
more of them fail hard on the first error.

Regards

Steve.

Steven Harris
TSM Admin, Sydney Australia

Nicholas Rodolfich wrote:

Hello All,

Thanks for your help!!

The volume inventory here is rather old here(2002). There tends to be
many read and write errors in the volume inventory. I was looking for a
rule of thumb regarding how many errors trigger disposal of these
volumes.

I have been disposing of them at the first sign of either read or write
failures but I am seeing so many, I am beginning to wonder if some level
or read/write errors is acceptable. Any thoughts on this will be
appreciated.

Nicholas


IMPORTANT NOTICE:  This message and any included attachments are from East 
Jefferson General Hospital, and is intended only for the addressee(s), and may 
include Protected Health (PHI) or other confidential information.  If you are 
the intended recipient, you are obligated to maintain it in a secure and 
confidential manner and re-disclosure without additional consent or as 
permitted by law is prohibited.   If you are not the intended recipient, use of 
this information is strictly prohibited and may be unlawful.  Please promptly 
reply to the sender by email and delete this message from your computer. East 
Jefferson General Hospital greatly appreciates your cooperation.






Re: Method to count tapes in I/O slots of IBM3584 library

2008-07-26 Thread Steven Harris

John,

If I'm reading it correctly, the SML-MIB_112.mib file describes
structures in the library that can be walked with SNMP.
physicalMediaTable might be the one to look at; I have no access to a
library just at present so I can't test.

Alternatively, define another control path and use it just to run your
tapeutil commands.
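
As an aside, the slot-counting part of a script like John's can be done in
a few lines.  The inventory line format below is an assumption (tapeutil
output varies), so adjust the patterns to whatever
"tapeutil -f /dev/smc1 inventory -i" actually prints on your system:

```python
def count_occupied_io_slots(text):
    """Count I/O-station entries whose media-present flag is Yes."""
    count = 0
    for line in text.splitlines():
        line = line.strip()
        if line.startswith("Media Present") and line.endswith("Yes"):
            count += 1
    return count

# Hypothetical inventory fragment, for illustration only:
sample = """\
Import/Export Station Address 769
   Media Present .................... Yes
Import/Export Station Address 770
   Media Present .................... No
Import/Export Station Address 771
   Media Present .................... Yes
"""
print(count_occupied_io_slots(sample))  # → 2
```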

HTH

Steve

Steven Harris
TSM Admin
Sydney Australia.

Schneider, John wrote:

Greetings,
We have an IBM3584 library.  We needed a script which would
determine how many tapes were in the I/O doors of the library, and then
send an email to the Operators to empty the  I/O doors when the library
was mostly full.  We fill the I/O door several times a day in the course
of daily processing, so we need to notify the Operators regularly.  The
way that seemed the best to do this was a script which issued:

tapeutil -f /dev/smc1 inventory -i > /tmp/tapeinventory.out

The output file is easy to parse, and count how many slots contain
media, and spit out the email accordingly.  We run the script every 30
minutes.  But now that we have been running the script a couple weeks,
several times we have gotten TSM errors like this:


07/25/08 05:35:00 ANR8965W  The server is unable to automatically
determine
   the serial number for the device.  (SESSION:
21742)
07/25/08 05:35:00 ANR8840E Unable to open device /dev/smc1 with file
handle
   11. (SESSION: 21742)

07/25/08 05:35:00 ANR8848W Drive LTO4_F1_D08 of library SUN2079 is

   inaccessible; server has begun polling drive.
(SESSION:
   21742)

07/25/08 05:35:00 ANR1794W TSM SAN discovery is disabled by options.

   (SESSION: 21742)

07/25/08 05:35:00 ANR8965W  The server is unable to automatically
determine
   the serial number for the device.  (SESSION:
21742)
07/25/08 05:35:00 ANR8840E Unable to open device /dev/smc1 with file
handle
   11. (SESSION: 21742)

07/25/08 05:35:00 ANR8848W Drive LTO4_F1_D06 of library SUN2079 is

   inaccessible; server has begun polling drive.
(SESSION:
   21742)

07/25/08 05:35:00 ANR1794W TSM SAN discovery is disabled by options.

   (SESSION: 21742)

07/25/08 05:35:00 ANR8965W  The server is unable to automatically
determine
   the serial number for the device.  (SESSION:
21742)
07/25/08 05:35:00 ANR8840E Unable to open device /dev/smc1 with file
handle
   11. (SESSION: 21742)

07/25/08 05:35:00 ANR8848W Drive LTO4_F1_D10 of library SUN2079 is

   inaccessible; server has begun polling drive.
(SESSION:
   21742)

07/25/08 05:35:00 ANR1794W TSM SAN discovery is disabled by options.

   (SESSION: 21742)


>From the timestamp we know it is exactly when our script which issues
the tapeutil command is running.  Apparently there are times when
issuing tapeutil interferes with TSM issuing commands through the
/dev/smc1 device to the library.  Perhaps tapeutil makes the device busy
for just a second?  I think that is the reason, because when I was
writing the script sometimes it would get a "device busy" error from
tapeutil, so my script had to be sensitive to that error, wait a few
seconds, and try again.

Does anybody know of a way around this?   Anybody else done something
similar.  Is there a way via IP to get this information from the library
that won't interfere with /dev/smc1?

Best Regards,

John D. Schneider
Lead Systems Administrator - Storage
Sisters of Mercy Health Systems
Email:  [EMAIL PROTECTED]


This e-mail contains information which (a) may be PROPRIETARY IN NATURE OR
OTHERWISE PROTECTED BY LAW FROM DISCLOSURE, and (b) is intended only for the
use of the addressee(s) named above. If you are not the addressee, or the
person responsible for delivering this to the addressee(s), you are notified
that reading, copying or distributing this e-mail is prohibited. If you have
received this e-mail in error, please contact the sender immediately.







Hardware Pricing

2008-08-07 Thread Steven Harris

Hi All

I'm trying to do a ballpark pricing of a back-of-the-envelope TSM design
for someone.  It's night here and there are no pricing resources
available.  Of course they want it tonight.

Can anyone tell me what the list price is for a TS3500-L53, a D53 and an IBM LTO4
tape drive?


Thanks

Steve.


Re: Recalling drmedia volumes to restore a damaged volume.

2008-08-19 Thread Steven Harris
Hi Larry

If it's the mismatch between the DRM location and the real location that
bothers you,

upd vol  location=''  access=readonly

will put it back to mountable.

Regards

Steve.
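
Spelled out, once the copy-pool tape is physically back on site, the
sequence amounts to something like this (volume names hypothetical;
RESTORE VOLUME then reads the recalled copy tape to rebuild the damaged
primary volume):

```
upd vol COPY001 location='' access=readonly
restore volume PRIM001
```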





 From: "Mcnutt, Larry E." <[EMAIL PROTECTED]KEN.COM>
 Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED].EDU>
 To: ADSM-L@VM.MARIST.EDU
 Date: 20/08/2008 07:22 AM
 Subject: [ADSM-L] Recalling drmedia volumes to restore a damaged volume.

TSM 5.4.3.0
AIX 5.3


Hello list,

I have a damaged primary storage pool volume.  Fortunately, the data is
on copy pool volumes.  I found a nicely detailed procedure for handling
the restoral of the data from mail-archive.
http://www.mail-archive.com/adsm-l@vm.marist.edu/msg25446/adsm_restore_a
_tape_volume.doc

The part that I do not see how to do "cleanly" is recalling the specific
volumes from offsite using "move drmedia" commands.  I know I can
manually add the volumes to the "vaultretrieve" list and remove them
from the "vault" inventory report.  But that doesn't feel right.  Am I
missing something?

Thanks,
Larry McNutt

-
This message and any attachments are intended for the individual or
entity named above. If you are not the intended recipient, please
do not forward, copy, print, use or disclose this communication to
others; also please notify the sender by replying to this message,
and then delete it from your system. The Timken Company / The
Timken Corporation


Re: How does VCB backups handle resource utilization?

2008-09-10 Thread Steven Harris
Hmmm

I'm willing to admit I may be talking through my hat here but...

Doesn't the VCB backup process use PROXYNODE?  So then all the backups are
funnelled through the one node, even though they have the id of another,
and so the MAXNUMMP of that node would be the limit?


Steve.
Steven Harris TSM Admin, Sydney Australia




 From: Wanda Prather <[EMAIL PROTECTED]M>
 Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED].EDU>
 To: ADSM-L@VM.MARIST.EDU
 Date: 11/09/2008 09:16 AM
 Subject: Re: [ADSM-L] How does VCB backups handle resource utilization?

Well, now you've confused me, as well...

When I set RESOURCEUTILIZATION 10, I get 8 sessions, but 4 are producers
and
4 are consumers - so how did you get 8 tape mounts?



On Wed, Sep 10, 2008 at 5:54 PM, Schneider, John
<[EMAIL PROTECTED]>wrote:

> Ok, now we are getting down to the nitty gritty.  Your example
> completely contradicts what is in the Performance Guide, which provides
> a table which I reproduce below.  I hope email doesn't mess up the
> columns.  It looks correct to me, I assure you.  :-)
>
> RESOURCEUTILIZATION   Maximum number   Unique producer   Threshold
> value                 of sessions      sessions          (seconds)
> 1                     1                0                 45
> 2                     2                1                 45
> 3                     3                1                 45
> 4                     3                1                 30
> 5                     4                2                 30
> 6                     4                2                 20
> 7                     5                2                 20
> 8                     6                2                 20
> 9                     7                3                 20
> 10                    8                4                 10
> 0 (default)           2                1                 30
>
> A Resourceutil of 4 is a max of three sessions, and only one "producer"
> session, i.e. a tape mount.  A Resourceutil of 5 is required for 2 tape
> mounts, and so on.  If my client maxnummp=2, then a resourceutil of 4
> should not overrun it.
>
> UNLESS... either the manual is wrong and the algorithm is not what is
> stated.  Or does the algorithm work differently in a Lan-free client
> situation?  We have another Lan-free client in a different TSM
> environment, and we used to have a Resourceutil of 10  for a certain
> client there, and I would swear there were times when I saw 8 tape
> mounts.  (The client maxnummp must have been high enough to permit
> this).  So does Resourceutil really work like the table above, or do
> Lan-free clients or proxynode clients operate under a different set of
> rules?  Anyone able to enlighten me?
>
> Best Regards,
>
> John D. Schneider
>
>
> -Original Message-
> From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of
> Bos, Karel
> Sent: Wednesday, September 10, 2008 11:48 AM
> To: ADSM-L@VM.MARIST.EDU <mailto:ADSM-L@VM.MARIST.EDU>
> Subject: Re: [ADSM-L] How does VCB backups handle resource utilization?
>
> It's late and it's long ago, but I seem to remember something about
> these mount point and resourceutil things in the line of:
>
> Resource 4, # mountpoint
> - 1 admin session
> - 1 mountpoint for diskpools
> - 2 mountpoint max for tape mounts
>
> So in you case, going directly to tape, you will get a max of 3
> mountpoints (because there is no diskpool) to tape.
>
>
> Regards/Met vriendelijke groet,
>
> Karel
>
>
> -Original Message-
> From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of
> Schneider, John
> Sent: woensdag 10 september 2008 18:27
> To: ADSM-L@VM.MARIST.EDU <mailto:ADSM-L@VM.MARIST.EDU>
> Subject: Re: How does VCB backups handle resource utilization?
>
> Howard,
>  But resourceutilization is 4 now.  It should give me fewer mount
> points, not more.  So why I am overrunning the client's maximum mount
> points of 2 when resourceutilization is 4?
>  The reason I can't just increase the client's maximum mount
> points 
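
For reference, the Performance Guide table John quotes can be kept handy
as a lookup.  The values below are transcribed from that table, and may
differ between TSM releases:

```python
# RESOURCEUTILIZATION -> (max sessions, producer sessions, threshold secs),
# transcribed from the Performance Guide table quoted in this thread.
RESOURCEUTIL = {
    1: (1, 0, 45), 2: (2, 1, 45), 3: (3, 1, 45), 4: (3, 1, 30),
    5: (4, 2, 30), 6: (4, 2, 20), 7: (5, 2, 20), 8: (6, 2, 20),
    9: (7, 3, 20), 10: (8, 4, 10), 0: (2, 1, 30),
}

def max_consumer_sessions(ru):
    """Sessions left over for moving data (total minus producers)."""
    total, producers, _threshold = RESOURCEUTIL[ru]
    return total - producers

# Matches Wanda's observation: RU 10 gives 8 sessions, 4 of them consumers.
print(max_consumer_sessions(10))  # → 4
```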

Re: Too many dbb being kept

2008-09-23 Thread Steven Harris

Joni

try Q DRM on each of those tapes.  They may be in  COURIERRETRIEVE
status.  If so, recall the ones that are offsite from the vault and run
MOVE DRM to make them go scratch.

Regards

Steve

Steven Harris
TSM Admin, Sydney Australia

Richard Sims wrote:

Review the expiration eligibility rules as documented with the Set
DRMDBBackupexpiredays command relative to the output of a Query
DRMedia command.

   Richard Sims







Re: Export feasibility

2005-03-10 Thread Steven Harris
Of course the big gotcha with backup sets is that they don't support TDP
data.
With no disrespect intended to those who developed this aspect of the
product, backup sets have always seemed to me more of a sales tool than a
really useful feature: they let the concerns of a prospect used to
inferior products be addressed and get TSM in the door.  Once there,
there are better ways to do all this stuff.
Regards
Steve.
Steve Harris
"resting" as the actors say
Brisbane, Australia
- Original Message -
From: "Ragnar Sundblad" <[EMAIL PROTECTED]>
To: 
Sent: Friday, March 11, 2005 5:25 AM
Subject: Re: [ADSM-L] Export feasibility

Thank you Stuart for your feedback!
--On den 10 mars 2005 16:18 +1100 Stuart Lamble
<[EMAIL PROTECTED]> wrote:
On 10/03/2005, at 8:15 AM, Ragnar Sundblad wrote:
We don't want to give up storing away a complete snapshot of
our systems off site every few months, over time maybe reusing
the off site tapes so that we finally save a snapshot a year.
I think that the most logical way to accomplish this with TSM
is to do a complete export like
export server filedata=allactive ...
What's wrong with using backup sets? The end result is the same -- all
currently active data is stored on tape and can be moved offsite,
without needing any maintenance from the main TSM server.
Backup sets may be fine too. There are two issues with it that
made me think that they were not as suitable for this as an export:
- There seem to be no way to generate a backup set that contains
multiple nodes, so I take it that if I don't want one tape
per node (hundreds), I will have to generate the backup sets to
disk and find some other way to pack them on tapes.
Am I wrong here?
- They can't be imported into the storage pools again. This is
probably not really a problem, it just looks nice and feels
good to be able to move data both ways.
Given LTO 3 with a maximum data speed of 50 MB/s uncompressed,
a moderate guess (I hope) for the data transfer rate would be
30 MB/s, which would give 93 hours to write it down. Given that
a tape has a capacity of 400 GB uncompressed, it would take
at the most 25 tapes.
We would obviously want to be able to use the backup server
as usual while doing the export.
Then you would need to make sure that your server has enough capacity,
in terms of tape drives (and connectivity to the tape drives), network
bandwidth (to cover both the export and the backups), and such like to
cope with both the export (or backup set generation) and regular
backups simultaneously.
Yes, that is quite reasonable. :-)
I was just thinking that there might be some other resource
that that server could get short of, like database rollback or
something, if it was both doing an export for days and at the
same time was backing up. I hope that is not a problem, and
especially it shouldn't be if I use backup sets instead.
 From my point of view, being in the middle of fiddling around with
exports (server to server in my case) for various "last ditch" DR
systems, I'd suggest keeping it simple. If backup sets (see the
"generate backupset" command in the admin reference manual) fill your
needs, my advice would be to make use of them, rather than using the
rather fragile option of server data exports.
I very much agree, simple is good! :-)
It is interesting, but sad, to hear that you too find exports
fragile. May I ask what the problems typically are?
If I can do what I want with backup sets instead, that is
probably a better way.
Thank you again for your thoughts!
Best regards,
Ragnar Sundblad



Re: tape mount problem

2005-03-10 Thread Steven Harris
Muthy,
When you moved the tape in the library, did you do it manually or by the
library's control panel or web interface?  For a 3494, TSM audit only asks
the library for a download of its own information. So, if the library
doesn't know, TSM can't know.
You may have to physically remove the tape from the library and then run a
TSM checkin process.
Regards
Steve
Steve Harris
"resting" as the actors say
Brisbane, Australia
- Original Message -
From: "Muthyam Reddy" <[EMAIL PROTECTED]>
To: 
Sent: Friday, March 11, 2005 5:30 AM
Subject: Re: [ADSM-L] tape mount problem

Just before, I finished an audit process in the library to fix a missing
tape slot problem.  Before the audit process I could see that tape when I
listed tapes with 'q libvol'.  After auditing the library I could not find
the tape in the library list (q libvol) but it is there in the library.
Even when I tried "checkout libv libname volser rem=no checkl=no" it says
the tape does not exist in the library.  I don't know what went wrong.
I used 'audit libr 3494lib1 checklabel=yes'.
Please provide some inputs to move this forward.
Thanks
muthyam
[EMAIL PROTECTED] 3/10/2005 2:07:54 PM >>>
Do a checkout libv libname volser rem=no checkl=no.
This will checkout the tape and it will no longer be in the q libv
output.
After that, do a checkin libv libname search=yes checkin=priv
checkl=barcode vollist=volser.
That should bring back your tape. I am not sure of the syntax of the
checkin command, it is always a pain to figure out what options fit
with
one another. I don't have a TSM server handy right now but this should
work.
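With long-form parameters the pair would look roughly like this (from
memory, so verify against the Administrator's Reference; the library and
volume names are placeholders):

```
checkout libvolume 3494lib1 VOL001 remove=no checklabel=no
checkin libvolume 3494lib1 search=yes status=private checklabel=barcode vollist=VOL001
```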
Guillaume Gilbert
Systems Specialist
514.866.8876 Office
514.866.0901 Fax
514.290.6526 BlackBerry
[EMAIL PROTECTED]
StorageTek Canada
-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf
Of
Muthyam Reddy
Sent: March 10, 2005 12:45 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: tape mount problem
Thanks for your quick response.
Is there any way we can find the original place where it was?  Yes, I know
auditing is going to fix all these problems but it's time consuming.
Karl, when I use 'q libvol' it does not show the element address, and it
is also not in the volhist file.
When I used the robot's backside panel to check the volume location, it
showed nothing.
The tape is in the library.
thanks again.
muthyam

[EMAIL PROTECTED] 3/10/2005 12:29:44 PM >>>
Yes, that is not surprising.
Whenever you move a tape or place it in a slot yourself, you should
run
a TSM AUDIT LIBRARY command.
That tells TSM to rescan the library and update the info about which
tape is in each slot.
Run the AUDIT LIBRARY from the admin command line, or from the web
GUI.
The syntax may be slightly different depending on the type of library
your have.
(When posting info to this list, PLEASE INCLUDE your TSM server
platform, version #, and type of tape library.  You'll get better
answers that way!)
AUDIT LIBRARY libname CHECKLABEL=BARCODE
The AUDIT LIBRARY will not run if there are other tape operations in
progress (like migration or reclaim).
Wanda Prather
"I/O, I/O, It's all about I/O"  -(me)
-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf
Of
Muthyam Reddy
Sent: Thursday, March 10, 2005 12:07 PM
To: ADSM-L@VM.MARIST.EDU
Subject: tape mount problem
Hi,
While I am trying to restore the database, a tape got stuck in the tape
drive and later I unloaded the tape and kept it in some slot in the
library.  Now when I am trying to restore, it's updating the tape to
unavailable and the restore fails.  I am thinking TSM lost the address of
the tape after I placed it manually in the slot.
Is there any way I can find the tape's original slot in the library?  Can
someone help with how I am going to fix this problem?
thanks
muthyam



This electronic mail transmission contains information from Joy Mining
Machinery
which is confidential, and is intended only for the use of the proper
addressee.
If you are not the intended recipient, please notify us immediately at
the return
address on this transmission, or by telephone at (724) 779-4500, and
delete
this message and any attachments from your system.  Unauthorized use,
copying,
disclosing, distributing, or taking any action in reliance on the
contents of
this transmission is strictly prohibited and may be unlawful.





Re: If we all complain, do you think they will add the WEB gui back?

2005-03-10 Thread Steven Harris
Mark,
That is a great post from you, and yes ISC is Version 1 and will improve
with time and use.
But, if you will allow me to reiterate a previous post:
There is a  new DSMAPI admin api that is used by the ISC to perform TSM
comands.  One additional way forward would be to expose this api, so that
the talented members of the user community can develop the interfaces that
*they* desire using perl/python/PHP/java or whatever.
This is an almost zero cost solution for IBM that will make life very much
easier for them and for us.  Look at the utility of the standard TSM api.
The adsmpipe program that was written for version 2 still works well on
version 5.  Many applications have been built on top of this api and it
provides great functionality for both IBM (eg the i5OS BRMS interface) and
others.  The admin api could be as good - to see the sort of thing that is
do-able look at http://www.phpmyadmin.net/home_page/index.php , and
particularly have a play under the demos tab.
Please IBM stop treating us like mushrooms and let in the light.
Steve
Steve Harris
TSM and AIX Admin
Between jobs at the moment
Brisbane Australia
- Original Message -
From: "Mark D. Rodriguez" <[EMAIL PROTECTED]>
To: 
Sent: Friday, March 11, 2005 4:19 AM
Subject: Re: [ADSM-L] If we all complain, do you think they will add the WEB
gui back?

Hi Everyone,
I am an IBM Business Partner.  I have been listening to what everyone
has been saying about the ISC/AC.  I also have some concerns about
this, since I not only have to use it, I have to be able to sell it to
others to use.  I have been talking with several IBM'ers in this
regard.  The people I have been talking to are on both the TSM
development side and on the channel (sales and marketing) side of the
house.  Obviously the channel people are very concerned when anything
might possibly affect the ability of IBM BP's to sell their products.
As such, I have been seeking to get them to put pressure on the
development side to get some sort of improvements made.  I have talked
with the developers to help them see the issues that I see with my
customers as well as what I have learned from all of you on this list.
Also, you should know that IBM is listening and they are willing to make
the necessary changes to resolve these issues.  They are monitoring this
list all the time so the only real survey you need to do is keep posting
to the list!
Now before I go too much further, I must make this statement (i.e. here
comes the legal disclaimer), anything that I am about to disclose here
is simply the results and/or contexts of conversation that I had with
various IBM'ers and in no way implies any commitment on their or my part
to provide any of the things we discussed.  In other words we were just
talking, but they were not promising anything.  The biggest problem I
see with the ISC/AC is not the application itself, change is inevitable
and in fact in this case somewhat overdue.  The problem with the ISC/AC
is that there is not any reasonable migration path from the Web Admin
GUI to the ISC/AC.  They just flipped a switch and now you use ISC/AC
and oh by the way it doesn't support any of your older TSM servers.  Not
a good plan and I think they recognize it as well.  However, I will
defend the developers to the point that there were very good reasons for
the decisions that they made and how we wound up where we are today.
Given similar situation I would have made similar choices with the
exception I would have spent the time and resources to have a better
migration path.  As you all have probably guessed by now the ISC/AC
isn't going away any time soon, nor should it.  We have been long
overdue for an improved GUI admin interface.  The ISC/AC isn't perfect by
any stretch of the imagination, but I have every confidence that IBM
will develop it into a very mature tool as quickly as possible.  I will
mention some of the "POSSIBLE" enhancements that are upcoming later in
this note.
The focus of my discussion with the IBM powers that be was around how do
we give the TSM community a better migration path to the ISC/AC
environment.  The key issue we focused on for creating a better
migration path was the re-release of the Web Admin GUI.  Obviously the
the best thing would be to re-release it and have it support all of the
5.3 enhancements, but that comes at a cost.  The trade off would be to
take resources away from the ISC/AC development in order to uplift the
Web Admin GUI.  I don't think that is in the best interests of the TSM
community as a whole.  I suspect that what will happen is the Web Admin
GUI will be re-released but frozen at the 5.2 level with no further
development being done to it.  My guess is it will be around through the
5.3 release and maybe a little longer, like until all supported versions of
the TSM server are supported on the ISC/AC.  I understand there was some
talk of making the ISC/AC backwards compatible with version 5.2 but that
did not go anywhere.  Re-releasing the Web Admin

Re: tape mount problem

2005-03-10 Thread Steven Harris
Mut,
The correct procedure for a 3494 is to place any tape moved in this way into
the recovery slot, which is the top left slot of the library manager frame.
If you watch the robot, this slot is always the first one checked when the
library goes online.  The library recognizes that this slot contains a tape
that needs to be put away and it will do so.
Regards
Steve
- Original Message -
From: "Muthyam Reddy" <[EMAIL PROTECTED]>
To: 
Sent: Friday, March 11, 2005 1:01 PM
Subject: Re: [ADSM-L] tape mount problem

Steve, I could fix the problem and what you said is right.
The lesson I learned is: whenever you unload a tape from a tape drive, do
not place it in some empty slot in the library. Close the doors, place it
in the drawer, and check it in as a private tape (checklabel=yes); then it
goes back where it was originally.
I could restore the database successfully.
Thank you.
mut
[EMAIL PROTECTED] 3/10/2005 7:17:28 PM >>>
Muthy,
When you moved the tape in the library, did you do it manually or by
the
library's control panel or web interface?  For a 3494, TSM audit only
asks
the library for a download of its own information. So, if the library
doesn't know, TSM can't know.
You may have to physically remove the tape from the library and then
run a
TSM checkin process.
Regards
Steve
Steve Harris
"resting" as the actors say
Brisbane, Australia
- Original Message -
From: "Muthyam Reddy" <[EMAIL PROTECTED]>
To: 
Sent: Friday, March 11, 2005 5:30 AM
Subject: Re: [ADSM-L] tape mount problem

Just before, I finished an audit process in the library to fix the missing
tape slot problem. Before the audit process I could see that tape when I
listed tapes with 'q libvol'. After auditing the library I could not find
the tape in the library list (q libvol), but it is there in the library.
Even when I tried "checkout libv libname volser rem=no checkl=no" it says
the tape does not exist in the library. I don't know what went wrong.
I used 'audit libr 3494lib1 checklabel=yes'.
Please provide some inputs to move this forward.
Thanks
muthyam
[EMAIL PROTECTED] 3/10/2005 2:07:54 PM >>>
Do a checkout libv libname volser rem=no checkl=no.
This will checkout the tape and it will no longer be in the q libv
output.
After that, do a checkin libv libname search=yes checkin=priv
checkl=barcode vollist=volser.
That should bring back your tape. I am not sure of the syntax of the
checkin command, it is always a pain to figure out what options fit
with
one another. I don't have a TSM server handy right now but this
should
work.
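For the record, the sequence Guillaume describes would look something like this from a dsmadmc prompt (a sketch only - the library name and volser are placeholders, the exact SEARCH/CHECKLABEL combination varies by library type, so check HELP CHECKIN LIBVOLUME on your server; note the option is STATUS=, not CHECKIN=):

```
checkout libvolume 3494lib1 VOL001 remove=no checklabel=no
checkin  libvolume 3494lib1 VOL001 status=private checklabel=yes
```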
Guillaume Gilbert
Systems Specialist
514.866.8876 Office
514.866.0901 Fax
514.290.6526 BlackBerry
[EMAIL PROTECTED]
StorageTek Canada
-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On
Behalf
Of
Muthyam Reddy
Sent: March 10, 2005 12:45 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: tape mount problem
Thanks for your quick response.
Is there any way we can find the original place where it was? Yes, I know
auditing is going to fix all these problems, but it's time consuming.
Karl, when I use 'q libvol' it does not show the element address, and it's
also not in the volhist file.
When I used the robot's backside panel to check the volume location, it
shows nothing.
Tape is in the library.
thanks again.
muthyam

[EMAIL PROTECTED] 3/10/2005 12:29:44 PM >>>
Yes, that is not surprising.
Whenever you move a tape or place it in a slot yourself, you should
run
a TSM AUDIT LIBRARY command.
That tells TSM to rescan the library and update the info about which
tape is in each slot.
Run the AUDIT LIBRARY from the admin command line, or from the web
GUI.
The syntax may be slightly different depending on the type of library
you have.
(When posting info to this list, PLEASE INCLUDE your TSM server
platform, version #, and type of tape library.  You'll get better
answers that way!)
AUDIT LIBRARY libname CHECKLABEL=BARCODE
The AUDIT LIBRARY will not run if there are other tape operations in
progress (like migration or reclaim).
Wanda Prather
"I/O, I/O, It's all about I/O"  -(me)
-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On
Behalf
Of
Muthyam Reddy
Sent: Thursday, March 10, 2005 12:07 PM
To: ADSM-L@VM.MARIST.EDU
Subject: tape mount problem
Hi,
While I was trying to restore the database, the tape got stuck in the tape
drive, and later I unloaded the tape and put it in some slot in the
library. Now when I try to restore, it updates the tape to unavailable and
the restore fails.
I think TSM lost the address of the tape after I placed it manually in
the slot.
Is there any way I can find the tape's original slot in the library? Can
someone help with how I am going to fix this problem?
thanks
muthyam





Re: Volume sequence

2005-03-10 Thread Steven Harris
Norita,
It sounds like your management is attempting to impose a standard derived
for a less sophisticated product onto TSM.
What they want, as I read it, is an assurance that they know what is on any
given tape, in case of a disaster recovery - if that is the case, then they
need an offsite database backup, and a recovery plan file: nothing else will
do.  That way they know which offsite tape has the latest database backup
and can perform a restore if need be.
Ask them for the result that they wish to achieve, not the means to provide
it.  I realize that as a technician it is often difficult to resist the
unreasonable and unthinking demands of management, but you are the one who
is charged with running this, not them.  Be firm.
Regards
Steve.
Steve Harris
Brisbane, Australia
- Original Message -
From: "Norita binti Hassan" <[EMAIL PROTECTED]>
To: 
Sent: Friday, March 11, 2005 1:13 PM
Subject: Re: [ADSM-L] Volume sequence

Thanks.. where can we get documentation or a manual on the 'Show' commands?
But what my management wants is a list of tapes, their nodes, the file
sequence number and the backup date. From which tables can I retrieve all
this info?
e.g.

Tape Number  Node Name  Backup Date  Seq. No
-----------  ---------  -----------  -------
TEST001      SERVER01   2005-01-01   1
TEST001      SERVER01   2005-01-01   2
TEST002      SERVER03   2005-01-05   1
TEST003      SERVER05   2005-01-05   1
So, from this report, we will know that SERVER01  used 2 tapes and so on.
Can anybody help me??
Thank you

~Norita Hasan~
-Original Message-
From: fred johanson [mailto:[EMAIL PROTECTED]
Sent: Thursday, March 10, 2005 10:43 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: Volume sequence
The undocumented, but well known, "show volumeusage node_name" lists the
volumes in the order they were assigned to the client.
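For the tabular report asked for above, something like the following select is a hedged starting point. As far as I know TSM exposes no per-volume file sequence number column, so VOLUMES.LAST_WRITE_DATE is used here as a stand-in for the backup date, and the ordering only approximates the fill sequence that "show volumeusage" reveals:

```sql
select vu.node_name, vu.volume_name, vo.last_write_date
from volumeusage vu, volumes vo
where vu.volume_name = vo.volume_name
order by vu.node_name, vo.last_write_date
```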
At 10:51 AM 3/10/2005 +0800, you wrote:
I've tried that script, but I need to know the file sequence.
E.g. I have a large database of 170GB. It should occupy more than 1 tape.
How do I know which tape is the first one, and which is next?
~Norita Hasan~
-Original Message-
From: goc [mailto:[EMAIL PROTECTED]
Sent: Wednesday, March 09, 2005 8:53 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: Volume sequence
you can try this ... it gives a lot more and it's useful, I think
select vu.node_name, vo.status, vu.copy_type, vo.est_capacity_mb, -
       vo.pct_utilized, vo.volume_name -
from volumeusage vu, volumes vo -
where vo.volume_name=vu.volume_name -
group by vu.node_name, vo.status, vu.copy_type, vo.est_capacity_mb, -
       vo.pct_utilized, vo.volume_name -
order by vo.volume_name, vu.node_name, vu.copy_type, vo.pct_utilized
- Original Message -
From: "Norita binti Hassan" <[EMAIL PROTECTED]>
To: 
Sent: Wednesday, March 09, 2005 11:51 AM
Subject: Volume sequence
> Hi,
>
> Is there any select statement that can display the volume sequence
number
> so
> that I can list all the volumes in sequence that has been used by
specific
> node.
>
> NORITA BINTI HASAN
> Senior Programmer
> Enterprise Systems Services
> Information Communication Tech.Div
> 6th Floor, Pos Malaysia Berhad
> 50670 Kuala Lumpur
>
> Tel : 03 - 22756638
>
>
>
> Pos Malaysia Berhad is Malaysia's national postal company
> Visit us online at www.pos.com.my
>
> NOTICE
> This message may contain privileged and/or confidential
> information. If  you are  not the addressee  or authorised
> to  receive this email, you must not use, copy,  disclose
> or take any  action based  on this email. If you  have
> received this  email in  error, please advise  the sender
> immediately by  reply e-mail and delete  this message.
> Pos Malaysia  Berhad takes  no responsibility  for  the
> contents of this email.
>
Fred Johanson
ITSM Administrator
University of Chicago
773-702-8464



Re: TSM/Veritas on the same library

2005-05-06 Thread Steven Harris
I've gotta ask Geoff,
Why, with a 3494 and LTO library would you move TSM to LTO? My first option
would be to move veritas to the 3494.
3494 is industrial strength and plays well in multi host and multi
application environments.  Use 3592 drives and you'll have capacity to burn.
Also,  second-hand frames to provide additional storage are quite
economical.  Second-hand is viable here - what can go wrong with a dumb
storage rack? and you'll have no trouble getting IBM maintenance coverage
for it if that is important to you.
Regards
Steve.
Steve Harris
"Resting" in Brisbane, Australia
- Original Message -
From: "Gill, Geoffrey L." <[EMAIL PROTECTED]>
To: 
Sent: Saturday, May 07, 2005 11:29 AM
Subject: [ADSM-L] TSM/Veritas on the same library

Since there was only one response I think I need to give a bit of info to
help with why I'm asking the question.

We have TSM and Veritas here. Not something I wanted, but it's here.
Currently each resides on its own library, mainly because one is LTO2
(Veritas) and the other a 3494 (TSM). I'd like to start moving TSM to
LTO this year. We are in
the midst of standing up a DR site and I'm trying to see how we can bundle
this together. So instead of standing up 2 libraries I'd like to stand up
one large enough to handle both the Veritas and TSM backups. I'd like to
add
on to what we have here if they could live together nicely.

The 3494 we have is currently shared by the mainframe and 2 TSM servers.
We
already know we have to bring up a 3494 at the DR site for the
mainframe/TSM
and another for Veritas. I would like however to send part of the 3494 we
have now to handle portions of the TSM backups along with getting hardware
for the IBM. If we stood up an LTO library here to start migrating off the
3494 that is doable. I would then need LTO at the DR site for TSM but
would
like to see if a single large enough unit would suffice.

So my original question is still, can a LTO library be shared by both
systems provided they each have their own dedicated drives? Has anyone
done
this or think it can be done?

Any information, be it in the form of links to articles or direct
knowledge
would be greatly appreciated.

Thanks,
Geoff



Re: Novell Groupwise Online backup

2005-05-09 Thread Steven Harris
Hi Mario
You don't say what environment GroupWise is running on - I'll assume
NetWare 6.
The issue is that the underlying GroupWise files are mostly a hashed file
structure, so every file changes every day; also, there are internal
links between the components. In my last position we were working on this,
and this is what I had come up with.
Use the groupwise gwbackup utility to backup the "database" to a file system
that is a mapped windows share.  Then, use the windows adaptive subfile
backup from the Windows server to TSM.
The gwbackup utility is available from Novell [see tid 2929217] and is
designed to create consistent backups.  Using the adaptive subfile backup
means that it is possible to do meaningful incremental backups.
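As a hedged illustration, the adaptive subfile piece of this plan boils down to a few client options in dsm.opt on the Windows server (the cache path and the cache size in MB shown here are illustrative placeholders, not recommendations):

```
* dsm.opt on the Windows server that maps the gwbackup output share
subfilebackup    yes
subfilecachepath c:\tsmcache
subfilecachesize 100
```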
Test carefully - this was a planned implementation but my contract ran out
and I left that role before we had a chance to properly test.
Regards
Steve
Steve Harris
"Resting" in Brisbane, Australia
- Original Message -
From: "Mario Behring" <[EMAIL PROTECTED]>
To: 
Sent: Tuesday, May 10, 2005 6:41 AM
Subject: [ADSM-L] Novell Groupwise ONline backup

Hi list,
I need to perform an online backup of a Novell GroupWise environment using
TSM ... does anybody know how I can accomplish this?
Thanks.
Mario

Yahoo! Mail
Stay connected, organized, and protected. Take the tour:
http://tour.mail.yahoo.com/mailtour.html



Re: Keeping Disk Copies of Files - Storage Pool Cache or ????

2005-05-10 Thread Steven Harris
Hi Christopher,
As far as I am aware there is no way to do what you are asking other than
keeping *all* versions of a file on disk.
The only tool that you have to hold data in a primary storage pool is the
migdelay parameter, and this does not differentiate between active and
inactive data.  It would be nice if it did.
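For reference, the two knobs in question are set per pool - a sketch with an invented pool name (CACHE applies to random-access disk pools; MIGDELAY is in days and holds everything back from migration, active and inactive files alike):

```
update stgpool diskpool cache=yes
update stgpool diskpool migdelay=7
```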
Regards
Steve.
Steve Harris
"Resting" in Brisbane, Australia
- Original Message -
From: "PEEK, CHRISTOPHER" <[EMAIL PROTECTED]>
To: 
Sent: Wednesday, May 11, 2005 1:38 AM
Subject: [ADSM-L] Keeping Disk Copies of Files - Storage Pool Cache or 
I am trying to implement a backup process where a complete set of file
copies are kept on disk in addition to tape.  Our current process is a
very traditional basic setup with backups going to a single disk storage
pool that then migrates to tape during the day.  After the migration
completes, the process then makes multiple tape copies for offsite and
onsite vault storage.
I have the following questions:
1) What is the best process to keep a disk based version of the most
recent file copy?
2) Is it enabling the cache for the disk storage pool and increasing the
storage pool size to hold the copies; or is it better to go with a disk
to tape (d2t)/disk to disk to tape (dd2t) dedicated device?
3) Is anyone using the storage pool cache?  What is the size of the
storage pool and how has performance been impacted?


Just for reference, here are the details of our TSM environment.
Current TSM Server OS: AIX 5.1 ML 07
Current TSM Server Ver: 5.2.2.14
Disk Storage: DS4400 (FastT 700) SAN - Fibre Channel
Tape Storage: 3583-L72 with 3 LTO2 fibre drives


Re: Export Node Question: Where will it go?

2005-05-25 Thread Steven Harris

Joni,

predefine your node on the destination server in the domain where it should
reside.
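In commands, hedged (node and domain names taken from Joni's mail quoted below; the password is a placeholder, and TOSERVER assumes the 5.3-style server-to-server export already in use):

```
/* on TSMPROD (target), before the export */
register node pgsp011 secretpw domain=retired

/* then on ADSM (source) */
export node pgsp011 filedata=all toserver=tsmprod
```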

Regards

Steve

Steve Harris

AIX and TSM Administrator
Still! "Resting" in Brisbane Australia


- Original Message -
From: "Joni Moyer" <[EMAIL PROTECTED]>
To: 
Sent: Thursday, May 26, 2005 5:06 AM
Subject: [ADSM-L] Export Node Question: Where will it go?



Hello,

I am trying to move our data from the mainframe tsm server to our aix tsm
server.  Now that communications between adsm(mainframe) and tsmprod(aix)
tsm servers has been established and I have run the export with
previewimport=yes I am ready to actually move the data.  I noticed that I
receive  a message that domain SP2 doesn't exist so it's going to try to
import it to the STANDARD domain.  TSMPROD doesn't have a standard domain.
I created a domain called "retired" where I have registered the node
pgsp011, but when I ran the export it doesn't seem to recognize this.  Do
I
have to become pgsp011 and connect to tsmprod as pgsp011 so that
communications between them have been established and then run the export
node command?  I do not want the SP2 domain on TSMPROD and I don't know
how
else to do this...  Thanks in advance!

05/25/05 14:14:33 ANR0615I IMPORT (from Server ADSM): Reading EXPORT NODE data from server ADSM exported 05/25/05 14:14:27. (SESSION: 505113, PROCESS: 10731)
05/25/05 14:14:33 ANR0649I IMPORT (from Server ADSM): Domain SP2 does not exist - the system will attempt to import node PGSP011 to domain STANDARD. (SESSION: 505113, PROCESS: 10731)
05/25/05 14:14:33 ANR0635I IMPORT (from Server ADSM): Processing node PGSP011 in domain STANDARD. (SESSION: 505113, PROCESS: 10731)
05/25/05 14:14:33 ANR0636I IMPORT (from Server ADSM): Processing file space /usr for node PGSP011 as file space /usr. (SESSION: 505113, PROCESS: 10731)
05/25/05 14:14:33 ANR0636I IMPORT (from Server ADSM): Processing file space /m2 for node PGSP011 as file space /m2. (SESSION: 505113, PROCESS: 10731)
05/25/05 14:14:33 ANR0636I IMPORT (from Server ADSM): Processing file space /m3 for node PGSP011 as file space /m3. (SESSION: 505113, PROCESS: 10731)
05/25/05 14:14:33 ANR0636I IMPORT (from Server ADSM): Processing file space /m4 for node PGSP011 as file space /m4. (SESSION: 505113, PROCESS: 10731)
05/25/05 14:14:33 ANR0636I IMPORT (from Server ADSM): Processing file space /m5 for node PGSP011 as file space /m5. (SESSION: 505113, PROCESS: 10731)
05/25/05 14:14:33 ANR0636I IMPORT (from Server ADSM): Processing file space /m6 for node PGSP011 as file space /m6. (SESSION: 505113, PROCESS: 10731)
05/25/05 14:14:33 ANR0636I IMPORT (from Server ADSM): Processing file space /m9 for node PGSP011 as file space /m9. (SESSION: 505113, PROCESS: 10731)
05/25/05 14:36:19 ANR0616I IMPORT (from Server ADSM): Preview processing completed successfully. (SESSION: 505113, PROCESS: 10731)
05/25/05 14:36:19 ANR0620I IMPORT (from Server ADSM): Copied 0 domain(s). (SESSION: 505113, PROCESS: 10731)
05/25/05 14:36:19 ANR0621I IMPORT (from Server ADSM): Copied 0 policy sets. (SESSION: 505113, PROCESS: 10731)
05/25/05 14:36:19 ANR0622I IMPORT (from Server ADSM): Copied 0 management classes. (SESSION: 505113, PROCESS: 10731)
05/25/05 14:36:19 ANR0623I IMPORT (from Server ADSM): Copied 0 copy groups. (SESSION: 505113, PROCESS: 10731)
05/25/05 14:36:19 ANR0624I IMPORT (from Server ADSM): Copied 0 schedules. (SESSION: 505113, PROCESS: 10731)
05/25/05 14:36:19 ANR0625I IMPORT (from Server ADSM): Copied 0 administrators. (SESSION: 505113, PROCESS: 10731)
05/25/05 14:36:19 ANR0891I IMPORT (from Server ADSM): Copied 0 optionset definitions. (SESSION: 505113, PROCESS: 10731)
05/25/05 14:36:19 ANR0626I IMPORT (from Server ADSM): Copied 1 node definitions. (SESSION: 505113, PROCESS: 10731)
05/25/05 14:36:19 ANR0627I IMPORT (from Server ADSM): Copied 7 file spaces, 14755 archive files, 0 backup files, and 0 space managed files. (SESSION: 505113, PROCESS: 10731)



Joni Moyer
Highmark
Storage Systems
Work:(717)302-6603
Fax:(717)302-5974
[EMAIL PROTECTED]






Re: Migration of TSM server

2005-06-02 Thread Steven Harris

Neil

Some in the past have reported success in moving a database between
platforms by using a database backup on an NFS mounted volume.  I
daresay that an SMBFS transfer would accomplish the same thing.  There
may also be a big/little endian conversion issue between Sparc and
Intel hardware.  If there isn't,
a network based backup and restore may be worth a try at least.
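A hedged sketch of the NFS approach (device class name and path invented; whether the restored database actually comes up cross-platform is exactly what needs testing first):

```
/* on the Solaris source, with /nfs/tsmdb mounted from the new box */
define devclass nfsfile devtype=file directory=/nfs/tsmdb maxcapacity=2g
backup db devclass=nfsfile type=full

/* on the Windows target, with the same files and volume history visible */
dsmserv restore db todate=today
```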

Regards

Steve

Steven Harris

AIX and TSM Administrator
About to be employed again at last!


On 02/06/2005, at 1:56 AM, Neil Bowen wrote:


Dear all,

Normally, I'd Google and read up but I need some feedback pdq. Our TSM
server is a Sun 280R with 1 x 750MHz CPU, 1GB RAM running Solaris 2.8
but, to be honest, this system is underspecified and sometimes
can't cut
it. The TSM is 5.2.1. The tape library is a fibre-attached Adic Scalar
1000 with 6xLTO1 drives and 188 tapes and it mostly backs up Windows
stuff.

The powers-that-be wish to migrate this to a Windows 2003 box using
the
same tape library - and soon. It's a political decision and not really
up for debate at the moment. The first TSM consultants that I've
approached have said that we cannot retain the legacy data and that if
we install TSM onto a Windows server, all of the data will be lost
unless we dual-access the library and retain the Solaris TSM. As the
library is full and then some, that isn't really an option even if the
politics wouldn't get in the way.

My question - Is this the case?

My skills are Windows and we do have some good Solaris people.

Regards,

Neil Bowen


**

Ofcom is the independent regulator and competition authority for
the UK communications industries, with responsibilities across
television, radio, telecommunications and wireless communications
services.

For further details and to register for automatic updates from
Ofcom on key publications and other developments, please visit
www.ofcom.org.uk

This email and any files transmitted with it are confidential and
intended solely for the use of the individual or entity to whom
they are addressed.

If you have received this email in error please notify the
originator of the message. This footer also confirms that this
email message has been scanned for the presence of computer viruses.

Any views expressed in this message are those of the individual
sender, except where the sender specifies and with authority,
states them to be the views of Ofcom.
**






Re: Storage Pool Backup Issue

2005-07-01 Thread Steven Harris

Joni,

In the past I have run two scripts, one for disk to tape and one for
tape to tape, and I just threw everything at TSM  at once and let it
handle the tape allocation.

I used a schedule that checked for a backup stg process and either
scheduled itself for five minutes into the future or kicked off phase
two, and a similar one for phase three (db backup).  Of course in TSM
5.3 you would use the new parallel processing commands instead.

The result is not quite as optimal as it might be, but it is nearly
so, and it lets the smarts in TSM do what they are there for.
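In 5.3 script syntax that might look something like this (pool, device class and script names are invented; PARALLEL and SERIAL are the new 5.3 script commands that fan processes out and then wait for them):

```
define script cascade "parallel" line=1
update script cascade "backup stgpool diskpool1 copypool wait=yes" line=5
update script cascade "backup stgpool diskpool2 copypool wait=yes" line=10
update script cascade "serial" line=15
update script cascade "backup stgpool tapepool copypool wait=yes" line=20
update script cascade "serial" line=25
update script cascade "backup db devclass=ltoclass type=full" line=30
```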

HTH

Steve.

Steven Harris

AIX and TSM Administrator
Sydney Australia

On 30/06/2005, at 12:58 AM, Joni Moyer wrote:


Hello Everyone!

I have an issue of drive shortage and I would like to cascade my
backup
jobs of my storage pools.  I know that you first want to make the
disk to
offsite tape copy and then the onsite tape to offsite tape copy for
each
storage pool.  My question is, how do I combine all 14 jobs in a
cascading
order so that they are only using a maximum of 6 drives at a time?  I
thought about putting them all within a script and running them with
wait=yes but I don't think that will give me exactly what I want to
do.  If
anyone has any suggestions I would appreciate the input!  Thanks!


Joni Moyer
Highmark
Storage Systems
Work:(717)302-6603
Fax:(717)302-5974
[EMAIL PROTECTED]






Re: Domino best practice

2005-07-12 Thread Steven Harris

Hi Gang,

I've recently taken over AIX 5.2 and TSM 5.2.0.0 on a couple of
servers running a domino mail environment.

We're currently running a TDP incremental and log archive every day
and a selective every weekend.  I've now got a situation requiring
multiple restores of the same .nsf file over an extended period, and
find the rollforward of the database to be excruciatingly slow.

What is the usual backup pattern for some of you larger domino sites
out there?


Steven Harris

AIX and TSM Administrator
Sydney, Australia


Re: Domino logfiles on disk

2005-07-12 Thread Steven Harris

Hi all,

AIX 5.2 client and server, TSM 5.2.0.0 client and server, domino
client "5.2" but actually 5.1.5

I'm trying to restore and roll forward the same domino database
repeatedly to different points in time as an important user
incrementally deleted all his mail over a two-week period.

The domino client keeps restoring the same logfiles for each
rollforward, despite the fact that they are already on disk.  Is
there any way to avoid doing this?

Alternatively, is there any way to incrementally roll foward a domino
database,
i.e. restore at t0 roll forward to t1 and take a copy to elsewhere,
then roll forward to t2 and repeat ad nauseam.   The standard process
seems to want to restore at t0 and roll forward from there every time.

Regards

Steve.

Steven Harris

AIX and TSM Administrator
Sydney, Australia


Re: Announcing IBM Support Assistant for TSM

2005-07-13 Thread Steven Harris

Not trying to shoot the messenger here, but in my world, things are
getting increasingly restricted.

My AIX boxes are not permitted to connect to the internet, and I have
a SOE'd windows PC on my desk and a userid with extremely limited
rights.  Being able to download something onto my pc and install it,
even if it is neccessary, requires lots of paperwork and convincing
people who would rather not allow exceptions to their SOE.

Unfortunately with Windows XP, security can be such that you can't
even drop an icon onto the desktop, let alone install anything.

So my vote is for web based tools requiring nothing at the client
end.  KISS principle applies here.


Steven Harris

AIX and TSM Administrator



On 13/07/2005, at 12:27 AM, Darrius Plantz wrote:


The TSM development staff has just finished the initial
implementation of a tool that will assist Tivoli Storage Manager
admins with their support issues.   It is called IBM Support
Assistant ( ISA) and is available via the download section at the
TSM Support page:  http://www-306.ibm.com/software/sysmgmt/products/
support/IBMTivoliStorageManager.html

ISA is a cross-product extensible client application that increases
your capacity for self-help by making it easier to access support
resources.  ISA allows you to add plug-ins for the IBM products you
use, creating a customized support experience for your configuration.

The Tivoli Storage Manager Server product plug-in enhances the
Search, Education, Support Links and Service component of IBM
Support Assistant.  As more plugins become available for TSM we
will announce them here.


__
Do You Yahoo!?
Tired of spam?  Yahoo! Mail has the best spam protection around
http://mail.yahoo.com





Re: Which TDP is used for backing up SAP B/W and IBM x-world/WBI

2005-07-21 Thread Steven Harris

SK

It may be that SAP is running the standard BACKINT process, which at
bottom calls the DB2 backup command, and that in turn uses the
built-in DB2-to-TSM interface.

There is a TDP for SAP R3, but it is quite expensive and not many use
it.

Regards

Steve.

Steven Harris

AIX and TSM Administrator


On 21/07/2005, at 1:59 PM, Tsm Lover wrote:


Hi Tsm Experts,

Can anybody suggest what TSM product (TDP agent) is used for backing
up the following applications:

1. SAP B/W (Business Information Warehouse) on a DB2 8.2 database.
2. IBM x-world or WBI (WebSphere Business Integration).

Hoping to hear from somebody.

Thanks and Regards
S.k


__
Do You Yahoo!?
Tired of spam?  Yahoo! Mail has the best spam protection around
http://mail.yahoo.com





Re: Backup strategy for Lotus Domino

2005-07-29 Thread Steven Harris

Hi Nicholas,

It depends on the requirement.  Is it to be able to reproduce a database at
any point in the last five years, or to be able to produce a point-in-time
copy at some point near to the desired time?  If its "near to" they want,
what is the granularity needed?

To be able to do the point in time you will need to keep all of the weekly
backups and all of the incrementals and all of the transaction logs.  That's
190GB * 52 * 5 = roughly 49 TB in weeklies alone, let alone all the rest, and
with no allowance for growth.  The alternative is to run a separate periodic cycle
(possibly monthly or bi-monthly).  This could be to another TSM node as a
domino backup with all the extra baggage that entails, or it could be an
archive of the domino data from the OS client - this has implications for
open files and may require you to shut down domino while the backup happens
to ensure a clean one - however it is more flexible in that you could keep,
say, weekly backups for six months and the first one of every month for the
full five years or some similar scheme.

Realistically though I doubt that you will ever do a restore.  This sounds
like either a dumb legal requirement or someone intent on *ss-covering
without thought to cost or practicality.  Cost it in full, don't skimp or
economise in your calculations, and don't forget the administrative
overhead, then let management decide if it is worth it to them. Oh, and
build in some fat so that when they come back and ask you to reconsider
because it's too expensive, you can give them a 10% reduction fairly easily.
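The arithmetic above can be checked quickly in a shell, using the figures quoted in the post:

```shell
# Back-of-envelope retention cost using the numbers quoted above.
weekly_gb=190; weeks_per_year=52; years=5
total_gb=$((weekly_gb * weeks_per_year * years))
echo "${total_gb} GB of weekly fulls alone"   # 49400 GB, i.e. roughly 50 TB
```

That is before incrementals, transaction logs, or any growth allowance.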

Regards

Steve.

Steve Harris
AIX and TSM Admin.

- Original Message -
From: "Nicolas Savva" <[EMAIL PROTECTED]>
To: 
Sent: Friday, July 29, 2005 6:54 PM
Subject: [ADSM-L] Backup strategy for Lotus Domino



Hi to all

I am using TDP for Lotus Notes 5.1.5 for backing up Lotus Domino (size
190GB) onto 3590 cartridges. I use the following backup strategy:

Lotus Domino is using Archive Logging.

  Perform weekly selective backups (size 190GB) of all logged databases,
  every Saturday
  Archive the transaction log frequently each day (every 4 hours)
  Run incremental backups every night to back up logged databases whose
  DBIID has changed
  Run the inactivatelogs command in order to expire non-essential
  transaction log files. I am running this command after the full backup.

How can I set up the backup copy group in order to be able to restore a
database 5 years old (keep backups for 5 years)?


Thanks in advance

Nicolas Savva





Domino, scheduler and root id

2005-08-10 Thread Steven Harris
Hi All,

AIX 5.2, TSM Server 5.2.0.0 (working on an upgrade), Client 5.2.0.0, domino
client 5.1.5.0, domino 6.5.4

My predecessor set up this environment using the AIX srcmgr facility to run a dsmc
sched process for each domino instance - there are 8 on this machine, and they
run under the root id. The domino instances themselves have a different unix id
for each instance.  Each instance is logically separate with its own file
systems, domino binaries and a tsm directory that contains domdsm.cfg, dsm.opt,
logs and a security directory containing the TSM.PWD file for the instance.  We
can literally export a couple of volume groups and import them elsewhere to
move a domino instance to another AIX lpar.

Domino backups are scheduled using a command schedule, and in the script the
backup is run under the unix ID for the instance.

The problem is that when PASSWD GENERATE does its thing, the TSM.PWD file is
deleted and re-created with the new password.  This is done by the scheduler
process and so the new TSM.PWD file has root ownership.  Thus the backups fail
as they can't access the new encrypted password.

So, I've tried to fix this by running dsmc sched as the domino user, but I get

ANS1817E Schedule function can only be run by a TSM authorized user.

I've set up a TSM admin with node ownership, and I can run dsmc command line as
the domino user, but not the scheduler.  dsmcad won't work either.

Is there any solution other than running everything as root or resorting to
cron?  I'd like the domino admins to be able to check logs and don't want them
to have root access, but using separate users also has nice safeguards when it
comes to restoring in the right environment.
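One stop-gap, if the scheduler must stay under root, is to hand the regenerated password file back to the instance owner after each password cycle. This is only a sketch: the path and user name below are assumptions, not taken from the environment described above.

```shell
# Hypothetical cleanup: re-own TSM.PWD after the root scheduler recreates it.
pwd_file=/domino/dom/mp01/tsm/security/TSM.PWD   # assumed per-instance path
owner=dommp01                                    # assumed instance unix id
if [ -f "$pwd_file" ]; then
    chown "$owner" "$pwd_file"
    chmod 600 "$pwd_file"
fi
```

It could be hung off the backup script itself so ownership is corrected before each run.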

TIA

Steve

Steve Harris
AIX and TSM Admin
Sydney, Australia


Re: only scratch/new tapes ???

2005-08-22 Thread Steven Harris

Dierk,

There is no need to reclaim tapes every day.  If each day's tape is
only partly used, it may be more efficient to only reclaim once a week.

Steve


Steven Harris

AIX and TSM Administrator
[EMAIL PROTECTED]


On 22/08/2005, at 7:51 PM, Dierk Harbort wrote:


Hello TSMers !

Is there a way to force TSM to use tapes with the status 'filling'
instead
of scratch/new tapes, so it would reduce reclamation processes ?
Why I ask this:
The copy pool for offsite location (drm) needs to be reclaimed,
like other
stgp do; but by using always new/scratch tapes, there is more
reclamation than necessary, I think.

Any idea is welcome, thanks in advance

Dierk





Re: [api] Admin API commands

2005-09-04 Thread Steven Harris
Maurice,

As I understand it, the new web admin tools in 5.3 employ a new
administrative API.  Now, if we could just get that published so we could
use it.


Regards

Steve

Steven Harris

AIX and TSM Administrator
Sydney, Australia



> Hi,
>
> Because the current TSM Admin tools are pretty complex, I want to write an
> admin client like the good old 3.1 version for Windows; as simple as
> possible, so even the part-time TSM'ers in small environments can do simple
> management.
>
> Of course I can use dsmadmc as an "interface" between the tool and the
> server, but is there also an API set anywhere that I can use for admin
> commands, so I can keep all the code in one file?
>
> And for the people who are interested: I will keep it open-source freeware
>
> Regards,
> Maurice van 't Loo
>
> PS. If Tivoli decides to make the 3.1 Windows admin client open source, I
> will be even happier of course :-)
>
>

--


Re: backup stgpool -- everything offsite

2005-09-08 Thread Steven Harris

Allen,

Excuse the obvious question, but what happens if there is a disaster
at the *remote* site?  Whilst you'll still have your current disk
copies of everything, you'll lose all your historical stuff, and
those "never to be deleted" backups that managers are so attached
to.  You'll also have no capacity to perform day to day backups
without the remote site, should it be out for an extended period.

Finally, you are also making your whole strategy dependent on
something over which you have no control, ie, the reliability and
redundancy of the network carrier.  I wouldn't trust any of our local
ones in the long term, as I'm sure some bean-counter will pare their
maintenance budget past the minimum, and network failures *will* result.

Regards

Steve.

AIX and TSM Admin
Sydney Australia

On 09/09/2005, at 12:47 PM, Allen S. Rout wrote:


==> On Thu, 8 Sep 2005 12:40:41 -0700, Andrew Raibeck
<[EMAIL PROTECTED]> said:




[...I]nstead of having one onsite copy pool, why not just have two
offsite
copy pools. Again, assess the risks: what are the odds that you will
actually need your copy storage pools in the event that a non-offsite
recovery is needed? If the odds are very low, then consider just
having your
primary storage pools onsite, and if you need copy storage pool
volumes,
retrieve them from the offsite location if and when that becomes
necessary.



I'd like to underscore this notion.  We're planning along precisely
these
lines: Remote TSM server infrastructure, and "sufficiently"
reliable network
between (1Gb, with expansion available up to 10Gb), with the remote
copy pools
eventually being the only copy pools.

I was initially thinking that I would still maintain a local copy
pool, but my
boss asked me, "If the network is good enough for DR, why isn't it
good enough
for a dropped tape?". ...

Heh, felt dumb at that one.

I'll probably still retain an extra, onsite copy of the TSM DB
backups, but
that's just normal paranoia. :)




The command line is your friend.




Preach it, brother!


- Allen S. Rout





TDP for Domino weirdness

2005-09-26 Thread Steven Harris
Hi All.

I'm attempting to wrap some perl around TDP for Domino commands, and I want
the result to be domino-instance agnostic as it will run on multiple Domino
instances on multiple AIX partitions (TSM Server 5.3.1.4, API 5.3.0, TDP for
Domino 5.1.5.1, all on AIX 5.2 RML6. Domino is 654+hotfixes).

The weirdness is as follows: these two commands were run one after the other
- no editing - as the domino user

dommp02:/domino/dom/mp02/data > domdsmc q dbb | head -20

IBM Tivoli Storage Manager for Mail:
Data Protection for Lotus Domino
Version 5, Release 1, Level 5.01
(C) Copyright IBM Corporation 1999, 2002. All rights reserved.

 Database Backup List
 

Domino Server: dommp02
--

   DB Backup Date         Size     A/I  Logged  Database Title   Database File
-------------------  ----------    ---  ------  ---------------  ---------------
23-09-2005 20:05:58    648.00KB     A    Yes    Java AgentRunne  AgentRunner.nsf
23-09-2005 20:07:46  3,000.25MB     A    Yes    Mail Journaling  MJ06.nsf
09-09-2005 20:17:15  3,584.00KB     A    Yes    Activity Trends  activity.nsf
19-09-2005 20:17:57    254.53MB     A    Yes    Administration   admin4.nsf
02-09-2005 20:17:30    126.00MB     A    Yes    Catalog (6)      catalog.nsf
13-07-2005 09:31:07  1,128.00KB     A    No     Catalog (6)      catalog.ntf
dommp02:/domino/dom/mp02/data >  perl -e 'print qx/domdsmc q dbb/' | head -20

IBM Tivoli Storage Manager for Mail:
Data Protection for Lotus Domino
Version 5, Release 1, Level 5.01
(C) Copyright IBM Corporation 1999, 2002. All rights reserved.

ACD5811I There are no database backups matching the filespec * and the server
name dommp02.


i.e., the shell finds the backups, and perl running the same command in a
spawned shell doesn't.

If I use domdsmc_instancename it works in both cases, but I don't want to
include the instance name in my script.

Ideas as to why it's broken and how to fix it will be gladly accepted.
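One diagnostic sketch, on the assumption that the spawned shell is missing part of the Domino/TDP environment (for example, a variable pointing at domdsm.cfg or notes.ini): diff what the interactive shell exports against what a subshell inherits.

```shell
# Any variable present in one listing but not the other is a suspect.
env | sort > /tmp/env.parent
sh -c 'env' | sort > /tmp/env.child
diff /tmp/env.parent /tmp/env.child || true   # no output means they match
```

perl's qx// spawns /bin/sh, which does not source the login profile, so anything exported only from .profile would show up in this diff.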


Re: Notes selective incremental management class

2005-09-26 Thread Steven Harris
Kurt,

I think you might be missing this: domino backups, even incremental ones,
always back up the whole nsf file.  The only "incremental" thing you can do
is turn on archive logging and save the logs.  The inactivatelogs command
then will mark the logs as inactive when there is no active database backup
that needs them.  You can make a separate class for these logs as they have
a different name.

Hope this helps

Steve.

> Thanks for the answer, but then how would you include the selective backups
> of e.g. mail6/*.nsf to the mgmt class MGMT_SEL and the incremental backups
> of them to MGMT_INCR instead of both backup types to DOMINO in your example?
>
> I've found already the document that states that two different stanzas
> should be created to bind the backups to another management class. I've
> followed the setup on a test server and ran first a selective backup using
> the mgmt class MGMT_SEL as default for the stanza server_NOTES_FULL.
> However, when I ran immediately afterwards the incremental backup with the
> mgmt class MGMT_INCR now as default using the stanza server_Notes_INCR, it
> was again a full backup. Is this because the first full backup was created
> on another TSM node name (NOTES_FULL) and thus the incremental backup of
> the TSM node (NOTES_INCR) did not have a full backup yet?
>
> The dsm.opt of server_NOTES_FULL is:
>
> COMMMethodTCPip
> TCPPort   1500
> TCPServeraddress  TSMSERVER
> TCPWindowsize 63
> TCPBuffSize   32
> NODename  server_notes_full
> PASSWORDAccessGenerate
>
> SCHEDMODE Prompted
> TCPCLIENTADDRESS Server_Notes
> TCPCLIENTPORT 1502
>
> COMPRESSIon   yes
> COMPRESSAlwaysNO
>
> INCLUDE *  MGMT_Notes_FULL
>
> and the dsm_incr.opt of server_NOTES_INCR is:
>
> COMMMethodTCPip
> TCPPort   1500
> TCPServeraddress  TSMSERVER
> TCPWindowsize 63
> TCPBuffSize   32
> NODename  server_notes_incr
> PASSWORDAccessGenerate
>
> SCHEDMODE Prompted
> TCPCLIENTADDRESS SERVER_Notes
> TCPCLIENTPORT 1502
>
> COMPRESSIon   yes
> COMPRESSAlwaysNO
>
> INCLUDE *  MGMT_Notes_Incr
>
> I will have to take a look at the transaction log backups too, since the
> incremental backup would be almost a daily full backup based on the
> modification time of the nsf files.
>
> best regards,
> Kurt Beyers
>
>
>
>
>
>
> 
>
> From: ADSM: Dist Stor Manager on behalf of Zoltan Forray/AC/VCU
> Sent: Mon 26/09/2005 16:32
> To: ADSM-L@VM.MARIST.EDU
> Subject: Re: [ADSM-L] Notes selective incremental management class
>
>
>
> Sure do. I just set these up.
>
> From the TDP Book:
>
> When assigning a group of databases to a management class, you must assign
> both objects. For example, to assign all databases that match *.nsf in the
> mail6 subdirectory to the DOMINO management class, code the following
> statement:
>
> INCLUDE mail6/*.nsf* DOMINO
>
> Just put the INCLUDE statements in the proper stanza in the DSM.SYS (*nix)
> OPT(Windoze) file !
>
>
>
>
> Kurt Beyers <[EMAIL PROTECTED]>
> Sent by: "ADSM: Dist Stor Manager" 
> 09/23/2005 06:49 AM
> Please respond to
> "ADSM: Dist Stor Manager" 
>
>
> To
> ADSM-L@VM.MARIST.EDU
> cc
>
> Subject
> [ADSM-L] Notes selective incremental management class
>
>
>
>
>
>
> Hello,
>
> Is it possible to bind the selective and incremental backups of Notes to
> different mgmt classes?
>
> For the TDP Informix, you can use a statement like
>
> include /inf_shm/inf_shm/*/0  MGMT_DB_FULL
> include /inf_shm/inf_Shm/*/1  MGMT_DB_INCR
>
> to bind the full backups (.../*/0) to the mgmt class MGMT_DB_FULL and the
> incremental backups (.../*/1) to the mgmt class MGMT_DB_INCR.
>
> Is there an equivalent statement for the Notes backup (tried already a
> look-alike statement, but it did not work)?
>
> thanks,
> Kurt
>
>

--


Re: How to set up an automated Archive job

2005-10-26 Thread Steven Harris
Troy,

There are exclude.archive and include.archive directives that can be used to
define the default archive MC for groups of files, and you can use DOMAIN=
in the schedule to include and exclude filesystems.  How do these not meet
your needs?
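As an illustrative fragment (the management class and path names below are invented, not from this thread), those directives might look like this in the client option file:

```
* dsm.sys stanza / dsm.opt -- names below are placeholders
INCLUDE.ARCHIVE /data/reports/.../*  MONTHLY_ARC
EXCLUDE.ARCHIVE /data/scratch/.../*
```

With the archive management class bound per path this way, the same server-side archive schedule can cover many nodes.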


Regards

Steve

Steve Harris, AIX and TSM Admin


> The other problem with automated archives is that there's apparently no way
> to get them to use your clopsets.  Because of that, you have to define what
> gets archived in the archive schedule itself.  That means making separate
> schedules for every server.  If anyone else has found a way around that
> nightmare, I'd be excited to hear it.
>
>
> Troy Frank
> Network Services
> University of Wisconsin Medical Foundation
> 608.829.5384
>
> >>> [EMAIL PROTECTED] 10/26/2005 3:36 PM >>>
>
> It's been so long since we installed TSM that I've forgotten the process
> used by the TSM outside contractor to set this up.
>
>
>
> I have to setup a monthly archive of a file server. I have the archive
> management class but can't figure out how to get the management class
> associated with the node. I tried setting it up once but my data expired
> after the standard 31 days.
>
>
>
> Mike LaFrance
> Network Analyst
> Prospera Credit Union
> Tel 604-864-6638 Fax 604-850-3473
> [EMAIL PROTECTED]
> mailto:[EMAIL PROTECTED]>
>
>
>
>
>
> This email and any files transmitted with it are confidential and are
> intended solely for the use of the individual or entity to whom they are
> addressed.
> If you are not the original recipient or the person responsible for
> delivering the email to the intended recipient, be advised that you have
> received this email in error, and that any use, dissemination, forwarding,
> printing, or copying of this email is strictly prohibited. If you receive
> this email in error, please immediately notify the sender.
>
> Please note that this financial institution neither accepts nor discloses
> confidential member account information via email. This includes password
> related inquiries, financial transaction instructions and address changes.
>
>
>
> Confidentiality Notice follows:
>
> The information in this message (and the documents attached to it, if any)
> is confidential and may be legally privileged. It is intended solely for
> the addressee. Access to this message by anyone else is unauthorized. If
> you are not the intended recipient, any disclosure, copying, distribution
> or any action taken, or omitted to be taken in reliance on it is
> prohibited and may be unlawful. If you have received this message in
> error, please delete all electronic copies of this message (and the
> documents attached to it, if any), destroy any hard copies you may have
> created and notify me immediately by replying to this email. Thank you.
>
>

--


Domino log rollforward recovery

2005-11-10 Thread Steven Harris
Hi all

I'm running Domino 654 on AIX 5.2 backing up using shared memory to TSM 5.3.1.4.
Domino Client is 5.1.5.1.  Backend is a 3584 with 6xLTO2 drives.

We've been cutting over our email users from their old product, and so email
volumes have increased dramatically.

Currently we run a selective once per week, incrementals daily (about 5% of
databases get compressed on any given day) and archivelog backups every four
hours.  Our busiest Domino instance produced over 2600 x 64MB transaction
logs last week.

Now, when we had to do some serious roll-forward recoveries a while back, it
took about 2 to 3 minutes per log to do the restore and roll forward.  Thus
we are looking at a worst case of something like 18 hours to roll forward a
full day's logs for one database, or a bit over 5 days for a week's worth.
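Those worst-case figures follow directly from the numbers above (2600 logs a week at 2-3 minutes per log):

```shell
# Roll-forward time estimate from the figures in the post.
logs_per_week=2600
logs_per_day=$((logs_per_week / 7))                  # ~371 logs
low_h=$((logs_per_day * 2 / 60))
high_h=$((logs_per_day * 3 / 60))
echo "one day of logs: ${low_h}-${high_h} hours"     # roughly 12-18 hours
```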

So, yes, we will have to do selectives more frequently, probably half of the
farm every night, but we also need to speed up the log apply process.  I'm
thinking that increasing the size of the Domino transaction logs will
probably help with this, and I've got my Domino admins researching their
end, but would appreciate input from users on this list.

Thanks

Steve Harris
AIX and TSM Admin
Currently in Sydney, Soon back in Brisbane, Australia


ISC install woes

2005-11-14 Thread Steven Harris
Hi All

I'm trying to install the ISC and Admin Center on AIX 5.2 64 bit.
ISC and Admin Center are the 5320 versions hot off the presses :)

I've had several false starts, but managed to get back to a clean slate for
everything to try a fresh install of ISC, but I keep getting, in
.../Tivoli/dsm/logs/ac_install.log

After the install process tries to run .../fixup/ISCCheck.jar

CWLAA9084 Service ID isc6 is already in use. It must be unique.

Net searches have been inconclusive, but possibly point to a JNI service.  I
can see no processes running that could be registering this in memory, so it
must be in a file somewhere.  Any guesses as to where this might be and/or
how I can undo the registration using simple unix tools?

Thanks

Steve.


Re: ISC install woes - Solved

2005-11-17 Thread Steven Harris
Indeed it was an inittab entry.  Thanks to RClark and some kind folks at TSM 
development.

Steve


==
Date: 11/15/2005 18:04:47 -0800
From: RClark@
To: [EMAIL PROTECTED]
Subject: Re: [ADSM-L] ISC install woes  

isc6 looks suspiciously like something that would be added to inittab to
start the ISC at reboot.

I suggest you check the output of "lsitab -a" to see if there is an old
entry from a previous install try in inittab.

Rainer Tammer also posted a note to the adsm-l list about manual uninstall
docs being included with ISC. If you search for "stuck ISC install" at
"http://my.adsm.org/adsm-l.php" the info should pop right up.
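On AIX, the check and cleanup suggested above would look something like this; the isc6 identifier comes from the error message, and the removal should be treated as an assumption to verify first:

```
lsitab -a | grep -i isc    # list all inittab entries; look for a stale isc6
rmitab isc6                # remove the stale entry once confirmed
```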


Re: TDP for Oracle and Backup Set Creation

2005-11-28 Thread Steven Harris
Muthu

You can generate a backupset for TDP data, but you can't restore from it.

Use export instead.

Regards

Steve

Steve Harris
AIX and TSM Admin
Brisbane Australia


> -Original Message-
> From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]
> On Behalf Of Muthukumar Kannaiyan
> Sent: Tuesday, 29 November 2005 8:30 AM
> To: ADSM-L@VM.MARIST.EDU
> Subject: [ADSM-L] TDP for Oracle and Backup Set Creation
>
>
> We are planning to create a backup set for a node which uses TDP
> for Oracle. How will TSM generate a backup set for this
> node? Is it possible to create one? If so, is there a procedure
> to recover from this backup?
>
> TIA
> Muthu
>
>
>


Re: Split Storage Pool across Devclass

2005-12-09 Thread Steven Harris

Hi Pretorius.

no, you can't directly spread a tape pool across multiple libraries,
if that is what you are asking.

You have two choices,

The first is to split the hierarchy into multiple node -> diskpool ->
tapepool streams.  Data is directed into one or the other streams
using different managementclasses, which are assigned via client
options (options file or client option sets), or you can set up more
than one policy domain and have the classes in one domain point to
one hierarchy and in the other point to the second, then assign nodes
to domains to split the load.

The easier option may be to set up your second tape device as the
next pool in the hierarchy

e.g.  diskpool->tape1pool->tape2pool

and just as you would from disk to tape you can set tape1 to tape2
migration thresholds, force migrations and so on.

A couple of caveats though: you can, if I recall correctly, only use
one migration process per sequential storage pool, so the tape-to-tape
migration may be slow and tie up drives for a long time.  Also,
when migrating, TSM moves the node that has the most data in this
storage pool first, and this may be undesirable depending on the
particular circumstances on your server.

Of course, combinations of these two approaches are also possible.
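An illustrative pair of server commands for the second approach (the pool names are invented):

```
UPDATE STGPOOL tape1pool NEXTSTGPOOL=tape2pool HIGHMIG=70 LOWMIG=40
MIGRATE STGPOOL tape1pool LOWMIG=0
```

The first chains the pools and sets migration thresholds; the second forces a one-off migration down to the second pool.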


HTH
Steve


Steven Harris

AIX and TSM Administrator
[EMAIL PROTECTED]


On 10/12/2005, at 7:27 AM, Pretorius Louw <[EMAIL PROTECTED]> wrote:


Hi guys, I don't know if this is a stupid question but I can't find this
anywhere.

I'm running low on space and want to split my tape pool across more than one
tape device; is this possible?

Cheers

Louw Pretorius

___

Informasie Tegnologie

Stellenbosch Universiteit



There are only 10 kinds of people in the world: Those who understand
binary and those who don't






TDP for R3 Slow restore performance.

2005-12-11 Thread Steven Harris
Hi all.

Recently our SAP team cloned the PRD Sap environment to test.  The best
restore throughput they could achieve was 8GB per hour.  Accordingly I
started investigating.

The SAP TST system is a 4 way IBM X Series 350 running Windows 2000 SP4 to
FastT 900 disk.  Database is Oracle 9.
TSM Client is 5.3.0.15 and the TDP for ERP is 3.3.12.

TSM Server is 4 way X series 335 running Win 2003 SP1 and TSM 5.3.1.2.  Tape
is a couple of 3584s with LTO1 and 3 drives, all fibre connected.

Running the native backint command to backup a single oracle file of 2GB
size, I got 60GB/hour throughput.  However restoring this file to a
different directory on the same disk gave me about 8.5 GB/hour.

I backed up the file using a standard gui archive, and got 60GB/hour on both
the backup *and* the restore.  The DSM.OPT parameters for BA client and SAP
client are the same.  I've played with TCPBUFFSIZE and TCPWINDOWSIZE to no
avail. 

I've read the manual forwards and backwards, and am at a loss as to how to
proceed.  Any ideas will be gladly accepted.

Many Thanks

Steve.

Steve Harris
AIX and TSM Admin
Brisbane, Australia



Here is the relevant qnitsv-bkp01.opt file 
COMMmethod TCPIP
SLOWINCR   NO
COMPressionOFF
NODEname   TSVSAP03_SAP
TCPPort1500
TCPServeraddress   qnitsv-bkp01
PASSWORDACCESS PROMPT
TCPBUFFSIZE32
TCPWINDOWSIZE  63
TCPNODELAY  YES
 
And here is the relevant initTST.utl file

#--
#
# Data Protection for mySAP.com(R) technology interface for ORACLE
#
# Sample profile for Data Protection for mySAP.com(R) technology
# Version 3.2 for Windows NT/2000/XP
#
#--
#
# See the 'Data Protection for mySAP.com(R) technology Installation & 
# User's Guide' for a full description.
#
# For comment symbol the character '#' can be used.
# Everything following this character will be interpreted as comment.
#
# Data Protection for mySAP.com(R) technology3 V3R3 accesses its profile 
# in "read only" mode only. All variable parameters like passwords, date of 
# last password change, current version number will be written into the file
# specified with the CONFIG_FILE parameter. The passwords will be encrypted.

#--
# Prefix of the 'Backup ID' which will be used for communication with SAPDBA
# and stored in the description field of the Tivoli Storage Manager archive
# function.
# Maximum 6 characters.
# Default: none.
#--
BACKUPIDPREFIX  TST___


#--
# Number of parallel sessions to be established.
# Note: this number should correspond with the number of simultaneously 
# available tape drives specified for the Tivoli Storage Manager server.
# Default: none.
#--
MAX_SESSIONS1 # 1 Tivoli Storage Manager client session


#--
# Number of parallel sessions to be established for the database backup.
# Note: this number should correspond with the number of simultaneously
# available tape drives specified for the Tivoli Storage Manager server.
# Default: none.
#--
#MAX_BACK_SESSIONS  1 # 1 Tivoli Storage Manager client session for
backup


#--
# Number of parallel sessions to be established for the archive log backup.
# Note: this number should correspond with the number of simultaneously
# available tape drives specified for the Tivoli Storage Manager server.
# Default: none.
#--
#MAX_ARCH_SESSIONS  1 # 1 Tivoli Storage Manager client session for
archive


#--
# Number of parallel sessions to be established for the restore of files.
# Note: this number should correspond with the number of simultaneously
# available tape drives specified for the Tivoli Storage Manager server.
# Default: none.
#--
#MAX_RESTORE_SESSIONS   1 # 1 Tivoli Storage Manager client session for
restore


#--
# Number of backup copies of the archived redo logs.
# Default: 1.
#--
#REDOLOG_COPIES 2   # 1 is default


#--
# Specifies whether a null block compression of the data is to 

Re: Retrieve with TSM WEB interface

2005-12-12 Thread Steven Harris
Chantal,

This is an FAQ

The TSM processes run under the local SYSTEM account by default and so don't
have access to network shares.
Change them to run under an account that has access.
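On Windows that change can be made in the Services control panel or, as a sketch, from the command line; the service names and account below are placeholders:

```
sc config "TSM Client Acceptor" obj= "DOMAIN\backupsvc" password= secret
sc config "TSM Client Scheduler" obj= "DOMAIN\backupsvc" password= secret
```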

Steve

Steve Harris
AIX and TSM Admin
Brisbane Australia

> -Original Message-
> From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] 
> On Behalf Of Boileau, Chantal
> Sent: Tuesday, 13 December 2005 6:53 AM
> To: ADSM-L@VM.MARIST.EDU
> Subject: [ADSM-L] Retrieve with TSM WEB interface
> 
> 
> Hello everybody,
> 
>   I have implemented the TSM client acceptor on a Windows 2003 SP1
> cluster, with the TSM 5.3 client. The TSM client acceptor functions
> properly.
> 
>   But when I try to restore a file to a network place, the GUI can't
> see any of the network drives.
> 
>   When I installed the TSM client acceptor on a non-cluster server,
> restoring to a network drive functions properly.
> 
>   Does somebody have any idea?
> 
> Thanks, best regards
> 
> Chantal
> 
> 
> 
> > Chantal Boileau
> > Analyste Informatique Stockage
> > Infrastucture des Serveurs
> > SAQ
> > * : 514-253-6232
> > *  : 514-864-8471
> > *  mailto:[EMAIL PROTECTED]
> > *  :http://www.saq.com
> > 
> > 
> > 
> 
> 
> 


Adaptive Subfile Backup and File Servers

2005-12-27 Thread Steven Harris

Hi All

Compliments of the season to all our regulars!

I have a client with a largish file server  on a Windows 2000
Cluster.  Analysis of a recent schedlog shows that on this particular
day, 87% of the data backed up was .pst files, something that I'm
sure is rather common where Outlook is the mail client.

Now the obvious suggestion is to use Adaptive Subfile Backup for
these and only back up the bits that have changed, but my boss
pointed out that the Adaptive subfile whitepaper suggests that its
use is not indicated for file servers.  The reason given for this is
that ASB cannot restore windows acls.  Now on this file server all of
the .pst files that I looked at inherited their permissions from
their parents, so I'm thinking that for the average Joe user this
limitation would not be an issue. All restores on this system would
normally be done by the administrator anyway.

So, is the recommendation in the white paper correct?  Do any of you
use Adaptive Subfile with .pst files on file servers and if so have
there been any issues?

Also two supplementary questions.  The next largest category  of
backups on this server is access databases.  Again the whitepaper
recommends against using adaptive subfile on these.  Might I ask the
same questions for .mdb files?

Finally, I was looking at a Dave Canaan slide show from a 2003 Share
conference where he explained how ASB works.  There is one line in
the powerpoint that states that Adaptive Subfile is also available
for Exchange.  I've searched through the TDP for Exchange
documentation and can find no reference.  Was this just a typo, a
deleted feature or what?

Regards and felicitations

Steve.

Steven Harris

AIX and TSM Administrator,
Brisbane Australia
[EMAIL PROTECTED]


Missed admin schedule

2005-12-27 Thread Steven Harris

Hi again,

I have an Windows 2003 TSM server at 5.3.0.2 with a single admin
schedule.  Its the new type of schedule, created by the ISC to run
housekeeping.

The schedule refuses to run, showing as "missed" every day.  There is
nothing in the actlog for this schedule either at the scheduled time
or at the time that it is marked "missed".  There is no other admin
schedule that could conflict with this, although there is a client
schedule that kicks off at the same time each day.

I can run a one time, old style schedule (running the same script) to
recover from the problem - this has no issues whatsoever.

Has anyone seen this before?


Regards

Steve.

Steven Harris

AIX and TSM Administrator
Brisbane, Australia
[EMAIL PROTECTED]


Re: Adaptive Subfile Backup and File Servers

2005-12-28 Thread Steven Harris


Re: TSM Reporting tools

2005-12-28 Thread Steven Harris
Hi Allen

I was going to put in a plug for R too, but you beat me to the punch.

I like your graph, would it be too much to ask for you to explain your
charging scheme?  That is always a vexed issue particularly with those
managers who make unreasonable volume or retention demands, and giving them
what they want, but charging them what it really costs for the service is a
nice stick to be able to beat them with.

Regards

Steve

Steven Harris
AIX and TSM Admin
Brisbane, Australia

> -Original Message-
> From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]
> On Behalf Of Allen S. Rout
> Sent: Thursday, 29 December 2005 8:14 AM
> To: ADSM-L@VM.MARIST.EDU
> Subject: Re: [ADSM-L] TSM Reporting tools
>
>
> >> On Wed, 28 Dec 2005 12:43:38 -0600, Roger Deschner
> <[EMAIL PROTECTED]>
> >> said:
>
> R> A general-purpose statistical package is priceless, either
> instead of
> R> or in addition to the TSM-specific reporting tools. SPSS,
> SAS, or one
> R> of their competitors.
>
>
> From the peanut gallery, I'll put in a plug for 'R'
>
> http://www.r-project.org/
>
> which is like S, except that it's free, and open-source.
>
>
> I've found it to be fantastic for graphing my TSM clients' behavior.
>
> http://open-systems.ufl.edu/reports/tsm/2004/erp/FIPRD-all.png
>
>
> - Allen S. Rout
>
>
>


Re: TSM Reporting tools

2005-12-28 Thread Steven Harris

Astounding,

Amusing,

and Educational.

I'll ask more often!

Many thanks


Steve.


On 29/12/2005, at 1:57 PM, Allen S. Rout wrote:


On Thu, 29 Dec 2005 10:53:49 +1000, Steven Harris
<[EMAIL PROTECTED]> said:




I like your graph, would it be too much to ask for you to explain
your
charging scheme?  That is always a vexed issue particularly with
those
managers who make unreasonable volume or retention demands, and
giving them
what they want, but charging them what it really costs for the
service is a
nice stick to be able to beat them with.



Not at all; I can hold forth on this at some length. (shocked, I
know you are)


Billing is always an exercise in shared fantasy.  Pretending that
this is not
so will only frustrate you.  You have to start with a set of
principles which
are politically acceptable, and then do rigorous math from those
principles,
ignoring that they may be insane from an engineering standpoint.

My organization is (was) a so-called "Auxiliary of the State of
Florida".  We
were mostly a State agency, but we had special accounting rules
that permitted
us to have bank accounts and retain money across years.  This was
because, as
a 'data center', we were expected to regularly make purchases which
were far
in excess of a given year's budget.  We were also permitted to bill
other
state agencies for our services, in real dollars, which we put into
banks, and
periodically bought new mainframes.

My politically-acceptable fantasy principles were as follows:

+ Recover costs
+ Recover enough that the service can be prepared for growth
+ Don't recover more than that

What are the costs?  We had a director-level employee whose entire
writ was
answering that question; the cost evaluation experience was
fascinating and
educational, and not a little painful.  Your organization will have
its own
standards for things like amortization, measuring benefits overhead
for staff
time, redistribution of front-office costs, cubage in the machine
room, power
and air handling, etc. etc.  But we have ours, and at the end of
the day
(koff-months-koff) it came down to a bottom line of our annualized
costs.

These principles are insane.  Most of our computer equipment is
amortized at 4
years.  For the library chassis, we're at 8 and counting.  For the
CPUs, we're
lucky if we make it to 3.  Disk is all over the map, and how do you
note that
my TSM disk is often cast-offs from another project?  Insane.  But
it matches
the principles, so soldier on.


That was the number I was supposed to recover.  Now, how should I
split it up?
I started out taking every measurement I could get TSM to cough up
(and we all
know that we can measure it MANY ways) and trying to assign numbers
to them
all.  This was hideously complex, and incomprehensible even to the
architect.

I went through several iterations of simplification, and finally
realized that
I had the wrong end of it: The metrics for charging must be easy to
measure,
but that's only necessary, not sufficient.  The important measures
are ones
over which the clients, the end-users, have direct control.

They don't care about tapes, they don't need to know when I move
from 3590-J
to K, or B to E, or to 3592s.  And if I bill them for fractional
tapes, then they will be aware when I change underlying system
structure.  Ew.

I ended up with basically two numbers: Transfer, and storage.
Transfer is
upload (backup, archive) and download (restore, retrieve), as
measured by the
acctlogs.  Storage is whatever Q occ comes up with (or actually a
select, but
you get the idea).


Pleasantly, I'm a pack rat, and retain logs and accounting logs.
Every time I
came up with a new set of rates, I could re-apply them to the last
[period] of
data.  This let me start twiddling rates, seeing how I could weight
billing
towards one or the other.

I decided that I would start off aiming for total storage charges
to be about
equal to total transfer charges; that is, when I add up a year's
bills, about
half of it should be in each category.  As time has passed and
costs have gone
down, I've nudged one or the other rate, mostly focusing on how I'd be
changing user behavior.  If people are keeping too much stuff
around (I had
one customer seriously claim they wanted 40 copies) then the
transfer cost
goes down instead of the storage.
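Allen's two-metric scheme can be sketched in a few lines of Python. Everything below is invented for illustration (node names, GB figures, the recovery target); the calibration naively splits the target 50/50 between transfer and storage, which is the starting point he describes before nudging the rates:

```python
# Chargeback sketch: bill each node on the two metrics the end-user
# actually controls -- GB transferred and GB stored -- and pick rates
# so the annual total splits roughly half-and-half between the two.

def bill(nodes, transfer_rate, storage_rate):
    """Per-node charge from (GB transferred, GB stored) tuples."""
    return {name: xfer * transfer_rate + stored * storage_rate
            for name, (xfer, stored) in nodes.items()}

def calibrate(nodes, target_total):
    """Rates such that transfer and storage each recover half the target."""
    total_xfer = sum(x for x, _ in nodes.values())
    total_stor = sum(s for _, s in nodes.values())
    return (target_total / 2) / total_xfer, (target_total / 2) / total_stor

# Invented sample data: (GB transferred, GB stored) per node.
nodes = {"erp": (4000.0, 12000.0), "mail": (1000.0, 3000.0)}
t_rate, s_rate = calibrate(nodes, 30000.0)
charges = bill(nodes, t_rate, s_rate)
print(charges)  # every dollar of the target is recovered
```

Replaying historical accounting-log data through `bill()` with candidate rates is exactly the "twiddling" step: shift weight onto storage to discourage the 40-copies customer, or onto transfer to discourage churn.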


I presented the completed charge algorithm to management, and it
passed
muster.  I presented it to customers, and they screamed.  We
changed the
inputs.  Less fantasy FTE, different fantasy purchasing behavior, less
theoretical total cost.  This is where I realized I was dealing
with a matter
of shared fantasy instead of engineering.


We went through several adjustments along these lines, and ended up
with a
rate that my customers absolutely love, at least those of them who
have a
grasp of the differences between TSM and running dump against a
local tape
drive.  From the 

Re: Missed admin schedule

2005-12-29 Thread Steven Harris
Thanks for the replies Andy and Duane

A restart of the TSM Server has cleared this problem.  D*mn Gremlins again.

Regards

Steve.

> -Original Message-
> From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]
> On Behalf Of Ochs, Duane
> Sent: Thursday, 29 December 2005 2:20 AM
> To: ADSM-L@VM.MARIST.EDU
> Subject: Re: [ADSM-L] Missed admin schedule
>
>
> Steve,
> Sounds like you may have created a client schedule instead of
> an Admin schedule.
>
> Log on to ISC, select the Server Maintenance link, Select the
> TSM server you wish to modify, Select Server Properties from
> the drop down box then select GO. Select the Administrative
> Schedules link. Then add a new schedule.
>
> -Original Message-
> From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]
> On Behalf Of Andrew Raibeck
> Sent: Wednesday, December 28, 2005 10:15 AM
> To: ADSM-L@VM.MARIST.EDU
> Subject: Re: Missed admin schedule
>
>
> Steve,
>
> Not sure I can provide a definitive answer, but seeing the
> actlog for the schedule window and the schedule definition
> itself (format=detailed) would certainly provide more insight.
>
> Regards,
>
> Andy
>
> Andy Raibeck
> IBM Software Group
> Tivoli Storage Manager Client Development
> Internal Notes e-mail: Andrew Raibeck/Tucson/[EMAIL PROTECTED]
> Internet e-mail: [EMAIL PROTECTED]
>
> IBM Tivoli Storage Manager support web page:
> http://www-306.ibm.com/software/sysmgmt/products/support/IBMTivoliStorageManager.html


The only dumb question is the one that goes unasked.
The command line is your friend.
"Good enough" is the enemy of excellence.

"ADSM: Dist Stor Manager"  wrote on 2005-12-27
19:05:18:

> Hi again,
>
> I have a Windows 2003 TSM server at 5.3.0.2 with a single admin
> schedule.  It's the new type of schedule, created by the ISC to run
> housekeeping.
>
> The schedule refuses to run, showing as "missed" every day.  There is
> nothing in the actlog for this schedule either at the scheduled time
> or at the time that it is marked "missed".  There is no other admin
> schedule that could conflict with this, although there is a client
> schedule that kicks off at the same time each day.
>
> I can run a one time, old style schedule (running the same script) to
> recover from the problem - this has no issues whatsoever.
>
> Has anyone seen this before?
>
>
> Regards
>
> Steve.
>
> Steven Harris
>
> AIX and TSM Administrator
> Brisbane, Australia
> [EMAIL PROTECTED]


Re: Running tapes off-site multiple times in a day?

2006-01-05 Thread Steven Harris
If it's mostly the SAP data that you are interested in... Here are a few
off-the-wall suggestions

Suggestion 1.

The SAP TDP can back up to two TSM Servers at once.  Set up a second server
and run its housekeeping 8 or 12 hours out of phase with the first one.

Suggestion 2.

Use simultaneous write so your copypools are ready as soon as the backup is
done.

Suggestion 3.

Backup your storagepool to two copypools.  Sync one for the first offsite
run and the second for the other offsite run.
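Suggestion 3 in command form might look something like this; pool and device-class names are invented, and which copy pool each run synchronises would simply alternate:

```
/* Two copy pools for alternating offsite runs */
define stgpool offsite1 ltoclass pooltype=copy maxscratch=50
define stgpool offsite2 ltoclass pooltype=copy maxscratch=50

/* Morning run feeds the first pool, afternoon run the second */
backup stgpool sapdata offsite1 maxprocess=2 wait=yes
```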

As I say, off-the-wall (well 1 and 3 anyway) but maybe there's something you
can use there.

Regards

Steve

Steve Harris
AIX and TSM Admin
Queensland Australia

> -Original Message-
> From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] 
> On Behalf Of Nicholas Cassimatis
> Sent: Friday, 6 January 2006 3:14 AM
> To: ADSM-L@VM.MARIST.EDU
> Subject: Re: [ADSM-L] Running tapes off-site multiple times in a day?
> 
> 
> How well segregated is your data?  I'd advise having the SAP 
> data and logs in their own pools.  Since this type of data 
> tends to expire in groups, do you need to reclaim the tapes 
> in these pools at all?
> 
> The hardest part, with a system like this, is getting the 
> tapes idle to be able to run "backup stg" against.  You may 
> want to look at having the data written to both the primary 
> and copypools at the same time, or doing something to idle 
> the tapes you need to copy (mark them ReadOnly to force new 
> tapes to be used, something like that).  How busy your system 
> is makes this harder.
> 
> Nick Cassimatis
> 
> - Forwarded by Nicholas Cassimatis/Raleigh/IBM on 
> 01/05/2006 12:04 PM
> -
> 
> "ADSM: Dist Stor Manager"  wrote on 
> 01/05/2006 11:31:11 AM:
> 
> > We'll be adding 6 more drives in a few weeks -- to a total of 16. I
> > have 6 off-site copy pools, one of which never goes through reclaim as
> > it's a 21-day archive pool. I'm most concerned about the process for
> > wrapping up an in-process reclaim and getting the tape out of the
> > drive by the time the afternoon database backup finishes (I do full
> > backups to LTO -- currently runs about 20 minutes).
> >
> > I figure I'll need to open a reclaim window in the 2 AM to 6 AM time
> > frame in order to get everything done.
> >
> > And I'd REALLY like to spend the money and put in the telecom circuit
> > instead.
> >
> > Tom
> >
> > -Original Message-
> > From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]
> > On Behalf Of John Monahan
> > Sent: Thursday, January 05, 2006 11:10 AM
> > To: ADSM-L@VM.MARIST.EDU
> > Subject: Re: Running tapes off-site multiple times in a day?
> >
> > I'm thinking you're going to need to add tape drives before thinking
> > of doing something like this.  You are going to shorten your current
> > daily maintenance window because you have to squeeze in a portion of
> > it to do a second run.  Could be done if you can shorten your current
> > window enough, but I would think that would mean adding tape drives.
> > Getting your reclaims done is going to be a problem as well with only
> > single-threaded reclamation processes - if you currently run until 8pm
> > sometimes and you are going to need to shorten that - will be tough
> > unless you break your data up into more pools.
> >
> > Sounds awfully ugly to me.  I think the correct answer is yes it is 
> > technically possible, but do we REALLY need to do it this way.
> >
> >
> > __
> > John Monahan
> > Senior Consultant Enterprise Solutions
> > Computech Resources, Inc.
> > Office: 952-833-0930 ext 109
> > Cell: 952-221-6938
> > http://www.computechresources.com
> >
> >
> >
> >
> > From: "Kauffman, Tom" <[EMAIL PROTECTED]>
> > Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>
> > To: ADSM-L@VM.MARIST.EDU
> > Date: 01/05/2006 08:54 AM
> > Subject: Running tapes off-site multiple times in a day?
> >
> > Like I said -- the questions are crawling out of the wood-work.
> >
> > Our management wants to reduce the possible data loss in the event of
> > a disaster by taking copies off-site both in the early morning and
> > again at the end of first shift.
> >
> > Is anyone else doing this? Will this be as painful as I expect it to
> > be?
> >
> > I know I'll have issues with reclaims -- I currently run expire 
> > inventory at noon, with reclaims kicking in shortly thereafter and 
> > sometimes running to 8:00 PM, so I'll have to implement a reclaim 
> > window.
> >
> > Electronic off-siting isn't an option -- I'

Re: SQL Logs and Backup conflict

2006-01-10 Thread Steven Harris
Can you not just increase the size of the backup window on the hourly
backups?

They will just wait behind the daily backup and run when it finishes.  They
won't show as missed because they ran, eventually, within their window.
When there is no conflict, they will run as soon as they normally would,
assuming schedmode=prompted.
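As a sketch (domain and schedule names here are hypothetical), widening the window is a one-liner:

```
/* Give the hourly log-backup schedule a window long enough to
   outlast the daily full, e.g. two hours */
update schedule proddom hourly_logs duration=2 durunits=hours
```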

Regards

Steve

Steve Harris
AIX and TSM Admin
Brisbane Australia

> -Original Message-
> From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]
> On Behalf Of Zoltan Forray/AC/VCU
> Sent: Wednesday, 11 January 2006 6:57 AM
> To: ADSM-L@VM.MARIST.EDU
> Subject: Re: [ADSM-L] SQL Logs and Backup conflict
>
>
> Same node name for both.  Multiple different schedules (for
> logs, diff-backups, os-backups, etc).
>
>
>
> Sung Y Lee <[EMAIL PROTECTED]>
> Sent by: "ADSM: Dist Stor Manager" 
> 01/10/2006 03:43 PM Please respond to
> "ADSM: Dist Stor Manager" 
>
>
> To
> ADSM-L@VM.MARIST.EDU
> cc
>
> Subject
> Re: [ADSM-L] SQL Logs and Backup conflict
>
>
>
>
>
>
> clarification question.
>
> Are you using the same node name for both the TDP agent and the regular
> client, or no TDP at all and just a straightforward regular node name?
>
> Sung Y. Lee
>
> "ADSM: Dist Stor Manager"  wrote on
> 01/09/2006 09:15:08 AM:
>
> > We are looking for a creative solution (or answer) to a long
> > outstanding "issue" when it comes to SQL TDP backups.
> >
> > We have a major M$-SQL shared server.
> >
> > There is an HOURLY schedule to dump the transaction logs.
> >
> > When the daily backups kick off, the hourly transaction log backups
> > then proceed to fail, causing constant annoying "schedule has failed"
> > messages, until the DB backups finish.
> >
> > How do other folks handle this kind of conflict?  Are we missing some
> > TDP backup option?
> >
> > My preferred solution would be to have TSM scheduling be more flexible,
> > e.g. only run this schedule between 10am and 10pm.
> >
> > Your thoughts ?
>
>
>


Virtual Volumes on loopback.

2006-01-15 Thread Steven Harris
Hi All,

I have a client that needs monthly backupsets for his BA clients and exports
for his TDP clients.  The device for this is a stand alone LTO2 drive that
is separate from the main tape library (I'm eliding the detail here, trust
me, there are valid reasons why he wants it done this way).  Accordingly I'd
like to try to stack multiple backupsets/exports onto a single tape.

Now, there used to be an idea that you could do this with server-to-server
communications that used the loopback interface, so that the source and
the destination servers are the same actual TSM instance.  There was also
some information that this didn't work particularly well at TSM 5.1 or 5.2.

Has anyone gotten this scheme to work at TSM for Windows 5.3?  OS is Windows
2003, but that _shouldn't_ make any difference.
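For anyone attempting it, the classic virtual-volume recipe looks roughly like this. Server name, password and port are invented, and I'm reciting the scheme from memory rather than from a verified working config:

```
/* Point the instance at itself over the loopback interface */
set servername tsmsrv
define server tsmloop password=secret hladdress=127.0.0.1 lladdress=1500

/* Virtual volumes: the source signs on to the target as a type=server node */
register node tsmsrv secret type=server

/* Device class that writes volumes "remotely" */
define devclass loopclass devtype=server servername=tsmloop mountlimit=1
```

Backupsets and exports pointed at the server device class should then stack onto the same physical tape via the target's storage pool.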

Regards

Steve

Steve Harris
AIX and TSM Admin
Brisbane Australia 


Re: Upgrade mystery

2006-01-17 Thread Steven Harris
FWIW Del

I applied this change to my SAP nodes and lo-and-behold restore throughput
just about quadrupled.

But I applied it to my Exchange nodes and there was no discernible
difference. YMMV

Steve.



> -Original Message-
> From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]
> On Behalf Of Del Hoobler
> Sent: Tuesday, 17 January 2006 11:47 PM
> To: ADSM-L@VM.MARIST.EDU
> Subject: Re: [ADSM-L] Upgrade mystery
>
>
> Patricia,
>
> See if this append I made last month helps at all:
>
> http://msgs.adsm.org/cgi-bin/get/adsm0512/204.html
>
> Thanks,
>
> Del
>
> 
>
> "ADSM: Dist Stor Manager"  wrote on
> 01/17/2006 08:38:42 AM:
>
> > I just upgraded my windows2k and w2k3 servers from tsm
> backup client
> > 5.2 to 5.3.0.8. These servers also run the tdp for exchange
> v5.2.1.0.
> >
> > Since then, my exchange backups take an hour longer to complete.
> >
> > Does anyone know of a conflict with the software?  Or a
> setting I need
> > to set??
>
>
>


Parallel execution woes

2006-01-17 Thread Steven Harris
Hi All

New function always seems to add new bugs, particularly with a product as
complex as TSM.

I have a client with TSM 5.3.1.2 on a win2003 cluster.  There is one
maintenance_plan script that has been running well for over three months and
has evolved over time. After 10 days of normal TSM Server operation since
the last restart, suddenly we've started getting

01/15/2006 07:00:25  ANR0116W The server script MAINTENANCE_PLAN attempted
                     to start more parallel commands than are allowed for
                     the server.  The server is limited to 256 parallel
                     commands. (SESSION: 8402)
01/15/2006 07:00:25  ANR1447W MAINTENANCE_PLAN: The server is not able to
                     begin a new thread to execute the command on line 5
                     in parallel. The command will run serially.
                     (SESSION: 8402)

And everything runs sequentially instead of parallel.  I've seen IC46488,
but in this case every PARALLEL command has a corresponding SERIAL. In no
case is there an attempt to start more than 16 commands at once, and there
are no outstanding processes, sessions or whatever visible.
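For context, the script in question is structured along these lines (a hypothetical fragment only; pool and device-class names are invented):

```
/* Server script fragment: run the copy-pool backups side by side,
   then drop back to serial for the DB backup and expiration */
parallel
backup stgpool diskpool copypool maxprocess=2 wait=yes
backup stgpool tapepool copypool maxprocess=2 wait=yes
serial
backup db devclass=ltoclass type=full wait=yes
expire inventory wait=yes
```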

A server restart has fixed the problem for now, but I don't want it
recurring - I have reported this to IBM but would like to know if anyone
else has an insight they'd like to share.

Steve

Steve Harris
AIX and TSM Admin
Brisbane Australia


Re: Full backup direct from client server

2006-02-05 Thread Steven Harris

Paul,

The licensing is by CPU, so you can run as many nodes on the same
machine as you want and still be within your licence terms.
The client licences that you register on the server haven't caught up
with the actual licensing scheme since it was changed a couple of
years back.  Just run a reg lic command to register as many licences
as you need. It's an honour system.


Regards

Steve

Steven Harris

AIX and TSM Administrator
Brisbane, Australia

On 06/02/2006, at 11:22 AM, Paul Dudley wrote:

If I understand you correctly you are suggesting creating an extra node
client on the TSM server. We are already at the limit of client licenses,
so this is not possible at the moment.

Regards
Paul



-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On
Behalf Of Francisco Molero
Sent: Sunday, 5 February 2006 9:55 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Full backup direct from client server

Hi Paul,

you can define another node client in your TSM client
and define a copygroup with backup mode absolute. Then
you need to define a scheduler against this node. So,
you have a full backup.
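In command form, Francisco's approach might be something like the following; domain, policy set, class and pool names are all invented here:

```
/* Backup copy group with mode=absolute: every file is backed up
   regardless of whether it changed, i.e. a full each time */
define copygroup fulldom standard fullmc standard type=backup destination=tapepool mode=absolute
activate policyset fulldom standard
```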


 --- Paul Dudley <[EMAIL PROTECTED]> wrote:


Is it possible to schedule a full client server
backup onto tape where
the data comes direct from the client server? From
my understanding of
the gen backupset command it appears that it grabs
the server data from
existing backups already on tape. I would prefer to
get the data direct
from the client server as I have had a lot of
problems with the gen
backupset command.

Thanks in advance
Paul



Paul Dudley
ANL IT Operations Dept.
ANL Container Line
[EMAIL PROTECTED]





ANL DISCLAIMER

This e-mail and any file attached is confidential,
and intended solely to the named addressees. Any
unauthorised dissemination or use is strictly
prohibited. If you received this e-mail in error,
please immediately notify the sender by return
e-mail from your system. Please do not copy, use or
make reference to it for any purpose, or disclose
its contents to any person.















