Re: What if (NDMP)

2012-02-22 Thread Paul Fielding
I'm not directly familiar with the NDMP backup expiration scenario, but
what you're describing makes sense to me.  If the full is older than what
the copy group is set to, then it seems reasonable that you can only
restore, in its own right, the backups which have not yet expired -
i.e. a diff that hasn't yet expired - and then that diff pulls in the older
expired full because it requires it.  To me that falls within the
realm of working as designed.  If you want to be able to restore a full
backup beyond the age you've set in the copy group, then you should
probably just increase the number in the copy group...

regards,

Paul


On Wed, Feb 22, 2012 at 1:47 PM, Mueller, Ken kmuel...@mcarta.com wrote:

 This matches what I see in our environment - q nasbackup doesn't show the
 full backup that the differentials depend upon, once the full backup ages
 out.  We have successfully restored volumes using a recent differential
 with its older associated full backup, so it does work.  However, your use
 of the word 'expired' to describe the status of the full backup caught my
 eye because it implies that while we can restore full + differential, we
 can't restore the full by itself.  Is that indeed the case?  (Never tried
 it.)  It makes sense in terms of following the rules of retention, but it's
 an interesting scenario: I've got it, but I can't give it to you
 unless you buy the package deal!
 -Ken
 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
 Mark Haye
 Sent: Wednesday, February 22, 2012 2:27 PM
 To: ADSM-L@VM.MARIST.EDU
 Subject: Re: What if (NDMP)


 A full backup that has a dependent differential can be expired, but will
 not be deleted.  This means that a full backup that is beyond the retention
 criteria will not be visible, but will be available for restoring
 differentials.  As others have noted, fulls and differentials are managed
 by the same management class, so each will appear as just another version.

 In Wanda's example,

  The question is, if retextra is 15 and retonly is 15, and you take one
 full NDMP backup followed by 20 diffs, does anything roll off?
  How many fulls and diffs do you have left in the DB?

 The full is expired, but not deleted.  The first five differentials are
 expired and deleted. You will have 15 restorable backup versions.  All
 versions happen to be differentials, but the full is still there, ready to
 go when you want to restore one of the differentials.

 In David's example,

  1)  [management class/copygroup] with retonly=15 and retextra=15
  2)  it received data from a backup node (NDMP) process
  3)  the NDMP runs a full backup once every six months
  4)  the NDMP runs an incremental monthly on the months a full is not
 run

 Again, you will have 15 restorable backup versions.  Each version might be
 a full or might be a differential.  The oldest versions might be
 differentials with no visible full, but the full is still available.
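The expiration rule described above can be sketched as a toy simulation. This is a simplified model of the stated behavior ("an expired full with dependent differentials is hidden but not deleted"), not TSM's actual algorithm; the `RETAIN` constant and function names are made up for illustration:

```python
# Simplified model of NDMP full/differential expiration (illustrative only;
# not TSM's actual implementation). With retextra=retonly=15 treated as
# "keep 15 versions", one full followed by 20 differentials leaves 15
# restorable versions -- all differentials -- while the expired full is
# kept on media because the surviving differentials depend on it.

RETAIN = 15  # versions kept restorable (simplified stand-in for the copy group)

def expire(backups):
    """backups: list of 'full'/'diff' strings in chronological order."""
    restorable = backups[-RETAIN:]   # newest versions stay visible
    expired = backups[:-RETAIN]      # older versions age out
    deleted, hidden = [], []
    for b in expired:
        if b == 'full' and 'diff' in restorable:
            # expired but not deleted: a restorable diff depends on it
            hidden.append(b)
        else:
            # no dependents, so the version is physically removed
            deleted.append(b)
    return restorable, hidden, deleted

backups = ['full'] + ['diff'] * 20
restorable, hidden, deleted = expire(backups)
print(len(restorable), hidden, len(deleted))   # 15 ['full'] 5
```

This reproduces Wanda's example: the first five diffs are expired and deleted, the full is expired but retained, and 15 restorable versions remain.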

 Mark Haye (马克海), IBM TSM Server Development, mark.h...@us.ibm.com,
 8/321-4403, (520)799-4403, 0N6/9062-2, Tucson.
 Professional programmer.  Closed source.  Do not attempt.


 ADSM: Dist Stor Manager ADSM-L@vm.marist.edu wrote on 02/22/2012
 09:59:23 AM:

  From: Prather, Wanda wprat...@icfi.com
  To: ADSM-L@vm.marist.edu
  Date: 02/22/2012 10:10 AM
  Subject: Re: [ADSM-L] What if (NDMP)
  Sent by: ADSM: Dist Stor Manager ADSM-L@vm.marist.edu
 
  I'm glad David asked this question, because I have the same one, as I
  have been digging around in the backups table trying to figure out
  what goes on.
 
  The question is, if retextra is 15 and retonly is 15, and you take one
  full NDMP backup followed by 20 diffs, does anything roll off? How
  many fulls and diffs do you have left in the DB?
 
  Does the retextra/retonly apply just to the fulls, or just to the
  diffs? Both? How?
 
  Wanda
 
 
  -Original Message-
  From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf
  Of Christian Svensson
  Sent: Wednesday, February 22, 2012 10:27 AM
  To: ADSM-L@VM.MARIST.EDU
  Subject: [ADSM-L] SV: What if
 
  Hi,
  The Full Backup and Inc Backup are the same object for TSM.
 
  That means if you back up the Full Backup with Management Class A and then
  back up the Incremental with Management Class B, TSM will rebind the FULL
  backup to Management Class B.
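The rebinding Christian describes can be sketched as a small model. This is illustrative only (the class and method names are invented); it captures the stated rule that the full and its incrementals are one object bound to the management class of the most recent backup:

```python
# Toy model of management-class rebinding (illustrative; not TSM code).
# All versions of an NDMP backup belong to one object, and the object is
# bound to the management class used by the most recent backup, so backing
# up the incremental under class B rebinds the earlier full to B as well.

class BackupObject:
    def __init__(self):
        self.versions = []        # (kind, class in effect at backup time)
        self.mgmt_class = None    # single binding shared by all versions

    def back_up(self, kind, mgmt_class):
        self.versions.append((kind, mgmt_class))
        self.mgmt_class = mgmt_class   # rebind: the last backup's class wins

obj = BackupObject()
obj.back_up('full', 'MC_A')
obj.back_up('incremental', 'MC_B')
print(obj.mgmt_class)   # MC_B -- the full is now managed by class B
```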
 
  Best Regards
  Christian Svensson
 
  Cell: +46-70-325 1577
  E-mail: christian.svens...@cristie.se
  CPU2TSM Support: http://www.cristie.se/cpu2tsm-supported-platforms
 
  Join us at Pulse 2012: http://www.ibm.com/pulse
 
  
  Från: Ehresman,David E. [deehr...@louisville.edu]
  Skickat: den 22 februari 2012 14:20
  Till: ADSM-L@VM.MARIST.EDU
  Ämne: What if
 
  What if there were a
 
  1)  storage pool with retonly=15 and retextra=15
 
  2)  it received data from a backup node (NDMP) process
 
  3)  the NDMP runs a full backup once every six months
 
  

Re: TSM for VE

2012-02-14 Thread Paul Fielding
Well, at least not unless the guest has an FC adapter directly attached to
it.  :)  But then that would probably defeat part of the advantage of using a
VM... :)


On Tue, Feb 14, 2012 at 1:22 PM, Del Hoobler hoob...@us.ibm.com wrote:

 Robert,

 You can not perform LAN-free data movement from inside a guest.

 Thanks,

 Del

 

 ADSM: Dist Stor Manager ADSM-L@vm.marist.edu wrote on 02/13/2012
 06:59:01 AM:

  From: Robert Ouzen rou...@univ.haifa.ac.il
  To: ADSM-L@vm.marist.edu
  Date: 02/13/2012 07:04 AM
  Subject: TSM for VE
  Sent by: ADSM: Dist Stor Manager ADSM-L@vm.marist.edu
 
  Hi to all
 
  We just implemented TSM for VE for testing, and I have a few questions.
 
  Our environment is:
 
 
  · Tsm server version 6.2.3.0 on Windows2008R2 64Bit
 
  · Proxy server as Virtual machine with O.S Windows2008R2 64bit
 
  o   B/A client V6.3.0.0
 
  o   Tsm for VE V6.3.0.0
 
 
  Made all the required configuration and ran backups and restores
  successfully through the VMCLI tool.

  Now I want to try LAN-free backup even though the proxy server is a
  virtual machine; has anybody already done this?

  Any suggestions or tips?
 
  Regards Robert
 



Re: checkout libvol on 3584 - says slots are full

2011-08-31 Thread Paul Fielding
A completely off-topic side note - when checking in tapes from your I/O door
whose status you're not certain of, I would recommend running your two
checkins in the opposite order - first run a checkin with a status of
'scratch', then run one with a status of 'private'.

TSM will happily check scratch tapes in as private, which will then prevent
you from using them if you just use a common scratch pool.  It will not,
however, let you check in private tapes as scratch.  So by doing the
scratch checkin first, you'll get the tapes that really are scratch
checked in as scratch, and it will barf on the private tapes.  You can then
check in the 'private' tapes, and they'll be checked in appropriately...
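The scratch-first ordering can be illustrated with a toy model. This is a sketch of the rule described above (the volsers and the `checkin` helper are hypothetical), not real library behavior:

```python
# Toy model of the checkin ordering above (illustrative; the real rules
# live in the TSM server). A checkin with status=scratch rejects tapes
# that actually hold data; status=private accepts everything. Running the
# scratch pass first therefore sorts the tapes correctly.

def checkin(tapes, status):
    """tapes: {volser: True if the tape really holds data}.
    Returns (accepted, rejected) volsers for this pass."""
    accepted, rejected = [], []
    for vol, has_data in tapes.items():
        if status == 'scratch' and has_data:
            rejected.append(vol)     # refused: the tape is really private
        else:
            accepted.append(vol)     # checked in under the given status
    return accepted, rejected

io_door = {'A00001': False, 'A00002': True, 'A00003': False}

# Pass 1: scratch -- only the true scratch tapes come in.
scratch_in, leftover = checkin(io_door, 'scratch')
# Pass 2: private -- the remainder come in with their data protected.
private_in, _ = checkin({v: io_door[v] for v in leftover}, 'private')

print(sorted(scratch_in), private_in)   # ['A00001', 'A00003'] ['A00002']
```

Running the private pass first would instead accept all three tapes as private, mislabeling the two scratch volumes.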

Paul


On Wed, Aug 31, 2011 at 7:01 AM, Richard Rhodes rrho...@firstenergycorp.com
 wrote:

 We figured out what was wrong, but no idea why/how.

 The 3584 is running with virtualization, so the logical lib has virtual
 I/O slots.  These virtual I/O slots were full and prevented the checkout
 from working.

 Previously, in trying to figure out this problem, I had found a doc on
 IBM's web site that talked about full virtual I/O slots and said the
 solution was to run a checkin.  At that time we didn't know the virtual
 slots were full, but I ran through the procedure and ran a checkin -
 nothing came in (first a private checkin, then a scratch checkin).  The
 problem persisted, so I figured the virtual slots were empty.  I had tried
 many things: multiple checkout/checkin commands with various parms, running
 an inventory, bouncing the library manager TSM instance . . . nothing worked.

 Another team member took a q libvol and compared it against the volumes
 the Specialist GUI said the library had.  There was a discrepancy - the
 library had volumes that TSM didn't know about.  She saw that the element
 addresses of these volumes were those of the virtual I/O slots (from the
 logical library details).  TSM did not know about these tapes and could not
 check them in.  It's like they were in a limbo/stranded state of some kind.
 She used the Specialist GUI to remove the tapes from the library.  After
 this, the checkout we were trying to perform worked as expected.

 This was very strange.

 Rick



 From:   Baker, Jane jane.ba...@clarks.com
 To: ADSM-L@VM.MARIST.EDU
 Date:   08/26/2011 08:40 AM
 Subject:Re: checkout libvol on 3584 - says slots are full
 Sent by:ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU



 Hiya -

 Is the cover to the I/O station clicked shut?  Maybe it thinks it is
 open.

 Or the library could have gone out of sync with TSM.  We've had this
 recently.

 We fixed it by:

 Shutdown the TSM servers (library manager & client).
 Force inventory on library in question by opening and shutting door,
 rescans barcodes.
 Startup TSM library manager, then library client.
 Run audit library on all virtual libraries in question.

 This then worked ok for us, but might be worth checking the i/o station
 door is clicked shut!

 Hope you get it fixed, sounds like an annoying one!

 Jane.



 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:ADSM-L@vm.marist.edu] On Behalf Of
 Richard Rhodes
 Sent: 25 August 2011 18:57
 To: ADSM-L@vm.marist.edu
 Subject: Re: [ADSM-L] checkout libvol on 3584 - says slots are full

 tried that a bunch of times.  This is so frustrating!

 oh well . . . .

 Rick




 From:   Ben Bullock bbull...@bcidaho.com
 To: ADSM-L@VM.MARIST.EDU
 Date:   08/25/2011 01:49 PM
 Subject:Re: checkout libvol on 3584 - says slots are full
 Sent by:ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU



 Could be the sensor for the IO slot still thinks the door is open. Might
 try opening it and closing it to see if it clears.

 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
 Richard Rhodes
 Sent: Thursday, August 25, 2011 10:36 AM
 To: ADSM-L@VM.MARIST.EDU
 Subject: [ADSM-L] checkout libvol on 3584 - says slots are full

 I'm trying to checkout of a 3584 a bunch of tapes. THe 3584 has one
 logical library.

 I'm issuing cmd:

  checkout libvol 3584go J04432 remove=bulk checklabel=no
   (originally had vollist=a,b,c,etc.)

 It's failing with q request:

  ANR8352I Requests outstanding:
  ANR8387I 026: All entry/exit ports of library 3584GO are full or
 inaccessible.
  Empty the entry/exit ports, close the entry/exit port door, and make
 the ports accessible.

 Anything I try doesn't change this, other than canceling the request.

 The door cap slots are empty.
 I ran a checkin libvol 3584go search=yes status=scratch label=barcode
 and it finds no volumes to check in, so the virtual slots are empty also.

 I'm stumped . . . cap door slots are empty and there isn't anything in
 the virtual cap slots.

 Any help is appreciated!

 Rick



Re: Unable to Retrieve Data from Tapes achieved using TSM

2011-08-02 Thread Paul Fielding
Further to that (this is of course assuming that the volumes you want are
not in the library, as opposed to something else going on):

Once you've checked in the needed volumes, do a q vol [volumename]
format=detailed for each volume and check that the Access is set to
READWRITE.  Because you tried to access the volumes while they weren't in the
library, it's possible that TSM has changed their Access to
UNAVAILABLE.  If it did, this won't get reset just by checking the volumes
back in; you'll have to manually reset it with update vol [volumename]
access=readwrite.

regards,

Paul


On Tue, Aug 2, 2011 at 9:27 AM, Huebner,Andy,FORT WORTH,IT 
andy.hueb...@alconlabs.com wrote:

 Your activity log should provide the name of the volumes that are needed
 for the retrieve.  Once you have the volume names you will need to check
 them into the library as private.

 Look up Checkin libvolume

 Andy Huebner

 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
 pharthiphan
 Sent: Tuesday, August 02, 2011 4:46 AM
 To: ADSM-L@VM.MARIST.EDU
 Subject: [ADSM-L] Unable to Retrieve Data from Tapes achieved using TSM

 Unable to Retrieve Data from Tapes achieved using TSM

 dsmc retrieve /gpfs2/\* /gpfs2_new/
 -optfile=/usr/tivoli/tsm/client/ba/bin/dsm_gpfs.opt -subdir=yes
 -desc=Archive-250311-Ashish-Monthend backup

 ** Interrupted **
 ANS1114I Waiting for mount of offline media.
 ** Interrupted **
 ANS1114I Waiting for mount of offline media.
 ** Interrupted **
 ANS1114I Waiting for mount of offline media.
 ** Interrupted **
 ANS1114I Waiting for mount of offline media.
 ** Interrupted **
 ANS1114I Waiting for mount of offline media.
 ** Interrupted **
 ANS1114I Waiting for mount of offline media.
 ** Interrupted **
 ANS1114I Waiting for mount of offline media.
 ANS4035W File
 '/gpfs2/TSMArchive/ashish/Apr_IC2/T62/y1969e03/pgbf1969090700' currently
 unavailable on server.

 ANS4035W File
 '/gpfs2/TSMArchive/ashish/Apr_IC2/T62/y1969e03/pgbf1969090800' currently
 unavailable on server.

 ANS4035W File
 '/gpfs2/TSMArchive/ashish/Apr_IC2/T62/y1969e03/pgbf1969091000' currently
 unavailable on server.

 ANS4035W File
 '/gpfs2/TSMArchive/ashish/Apr_IC2/T62/y1969e03/pgbf1969091100' currently
 unavailable on server.

 ANS4035W File '/gpfs2/TSMArchive/ashish/Apr_IC2/T62/y1969e03/
 time_mean.19691027.nc' currently unavailable on server.

 ANS4035W File '/gpfs2/TSMArchive/ashish/Apr_IC2/T62/y1969e03/
 time_mean.19691028.nc' currently unavailable on server.

 ANS4035W File '/gpfs2/TSMArchive/ashish/Apr_IC2/T62/y1969e03/
 time_mean.19691029.nc' currently unavailable o

 Please help; I'm new to TSM.

 +--
 |This was sent by pharthip...@gmail.com via Backup Central.
 |Forward SPAM to ab...@backupcentral.com.
 +--




Re: DB backup expiration

2011-08-01 Thread Paul Fielding
Most places I know simply script the del volhist todate=-3 t=dbb
and/or t=dbs as appropriate, into their regular daily processing.

Indeed, as mentioned above, regular database backup media is not considered
safe to delete until it's known to be out of the library and presumably
offsite, so in your case it'd be easiest to just do the del volhist on
a daily basis...

Sent from my iPhone

On Aug 1, 2011, at 2:22 PM, Erwann SIMON erwann.si...@free.fr wrote:

 Hi Thomas,

 Virtual volumes are considered remote by DRM as soon as they're created, so
 the DB backup series expiration days setting is immediately taken into
 account, whereas real volumes (tapes or files) need to be in a vault
 state for DRM.

 So you'll need to run move drmedia (possibly with remove=no) or delete
 volhist as you already do.


 --
 Best regards / Cordialement / مع تحياتي
 Erwann SIMON

 -Original Message-
 From: Thomas Denier thomas.den...@jeffersonhospital.org
 Sender: ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU
 Date: Mon, 1 Aug 2011 16:01:16
 To: ADSM-L@VM.MARIST.EDU
 Reply-To: ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU
 Subject: [ADSM-L] DB backup expiration

 We have two TSM 6.2.2.0 servers running under mainframe Linux. Both
 report 'DB Backup Series Expiration Days: 3 Day(s)' when I execute
 'query drmstat' commands.

 One of the systems is configured as a library manager. It performs
 a database snapshot and a full database backup daily. These are
 written to virtual volumes on two different servers. Both types
 of database backups are aging off in somewhere between 3 and 4
 days.

 The other system is a library manager client of the first one. It
 also performs a database snapshot and a full database backup daily.
 The snapshot is written to tape, with the library manager above
 managing the tape mounts. The tape drives are in a different building,
 so we never run 'move drmedia' commands to control movement of the
 database snapshots or storage pool volumes for this server. The full
 backup is written to a file device class. As far as I can tell,
 neither type of database backup ever ages off on its own. I have to
 run 'delete volhist' commands occasionally to recover the media
 occupied by old database backups. In some cases these commands have
 removed database backups that were as much as several weeks old.

 Where should I be looking for the reason for the difference in
 behavior between the two servers?


Re: Active logs taking 4 days to delete in 6.2

2011-07-27 Thread Paul Fielding
Well, I don't think you're showing any ignorance at all; in fact I believe
you nailed it on the head.  I should have thought of that.  (BTW, this server
is on AIX.)  For some reason I seemed to think that my active log filesystem
was originally less full, but in fact, now that you mention it, I may have
been thinking about the archive log fs rather than the active log fs.
 That's a dumb one on my part.

When I go back and look, indeed ACTIVELOGSIZE is set to 40GB, which
does line up with how much space is used in the active log filesystem.  So,
all that being said, it appears it's all working as designed.  Apparently I
was asking a dummy question myself.  ;)

So, this brings me to a tangent on the active vs. archive logs.  From what I
can see, it looks like the current Active log in that filesystem is almost
always right at the top of the list.   Given that the active logs are
supposed to be (I thought) for uncommitted transactions, and once an active
log fills it gets dumped to the archive logs, why on earth do we need such a
large active log filesystem?   Does anyone actually have a TSM server that
manages to write out anywhere from 40-100GB of active logs prior to them
getting dumped out to the archive log fs?

regards,

Paul


On Wed, Jul 27, 2011 at 8:07 AM, Prather, Wanda wprat...@icfi.com wrote:

 Well, I'm going to weigh in here and show my ignorance.

 Somewhere I missed what platform you're on, but on my TSM 6.2 server on
 Win2K3, I don't think the active logs files ever get deleted.  I think there
 are always enough log files to account for the ACTIVELOGSIZE specified in
 dsmserv.opt.

 My understanding is that when all the transactions in an active log file
 are committed, that active log file is eligible to be copied to the
 archivelog directory, but the active log file doesn't go away.  It just sits
 there, and when DB2 has used all the other log files in a round-robin sort
 of way, it will cycle back and rename the oldest file and reuse it.

 In my case ACTIVELOGSIZE is 90G, and depending on what's going on with the
 TSM server I may have activelog files with timestamps going back 2 days or 2
 weeks, but it's still 90G.

 The archive log files DO get deleted.
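The round-robin reuse Wanda describes can be sketched as a small model. This is a simplified illustration of the idea (the pool size and file names are invented), not DB2 internals:

```python
# Simplified sketch of round-robin active log reuse (a model of the idea,
# not DB2 internals). The pool of active log files is a fixed size; once
# every file exists, advancing to a "new" log reuses the oldest committed
# file instead of growing the directory -- so the space used stays at
# ACTIVELOGSIZE even though timestamps keep moving forward.

from collections import deque

POOL_SIZE = 4   # stands in for ACTIVELOGSIZE / per-file size

pool = deque()  # oldest file on the left
next_num = 1

def write_log():
    """Advance to a new current log, reusing the oldest file when full."""
    global next_num
    name = f"S{next_num:07d}.LOG"
    next_num += 1
    if len(pool) == POOL_SIZE:
        pool.popleft()      # oldest committed file is renamed/reused
    pool.append(name)

for _ in range(7):          # write more logs than the pool holds
    write_log()

# The pool never exceeds POOL_SIZE; only the names rotate forward.
print(list(pool))   # ['S0000004.LOG', 'S0000005.LOG', 'S0000006.LOG', 'S0000007.LOG']
```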

 W

 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
 Paul Fielding
 Sent: Tuesday, July 26, 2011 8:50 AM
 To: ADSM-L@VM.MARIST.EDU
 Subject: Re: [ADSM-L] Active logs taking 4 days to delete in 6.2

 Hi Kurt,

 Ok, this is interesting stuff.  When I do db2pd -db tsmdb1 -logs, I see the
 following:

 Database Partition 0 -- Database TSMDB1 -- Active -- Up 77 days 15:59:39 --
 Date 07/26/2011 06:41:01

 Logs:
 Current Log Number1327
 Pages Written 124336
 Cur Commit Disk Log Reads 0
 Cur Commit Total Log Reads8
 Method 1 Archive Status   Success
 Method 1 Next Log to Archive  1327
 Method 1 First Failuren/a
 Method 2 Archive Status   n/a
 Method 2 Next Log to Archive  n/a
 Method 2 First Failuren/a
 Log Chain ID  0
 Current LSN   0x00A6009B090A

 AddressStartLSN State  Size   Pages
  Filename
 0x0A00010021E4C4F0 00A5C2400010 0x 131072 131072
 S0001326.LOG
 0x0A00010021E46C70 00A5E2400010 0x 131072 131072
 S0001327.LOG
 .
 [cut out a whole bunch of log files for brevity] .
 0x0A00010021E55B10 00AF82400010 0x 131072 131072
 S0001404.LOG
 0x0A00010021E5D0D0 00AFA2400010 0x 131072 131072
 S0001405.LOG

 As you can see, the current log, according to DB2, is 1326, but it has
 created logfiles all the way up to 1405 already.  There are 81 logfiles,
 taking up 40GB of space.  1326 is roughly 4 days old.  It appears that TSM
 is currently creating its logfiles roughly 4 days in advance of actually
 writing to them.

 Anyone got ideas as to why this behavior occurs, whether I should leave it
 this way, or whether there is anything I can do to change it?

 regards,

 Paul







 On Tue, Jul 26, 2011 at 4:20 AM, BEYERS Kurt kurt.bey...@vrt.be wrote:

  The active log space is preallocated at the file system level, you can
  check the current log file (1504)  in use as follows:
 
  $ db2pd -db tsmdb1 -logs
 
  Database Partition 0 -- Database TSMDB1 -- Active -- Up 6 days
  01:53:15 -- Date 07/26/2011 12:16:27
 
  Logs:
  Current Log Number1504
  Pages Written 984
  Cur Commit Disk Log Reads 0
  Cur Commit Total Log Reads0
  Method 1 Archive Status   Success
  Method 1 Next Log to Archive  1504
  Method 1 First Failuren/a
  Method 2 Archive Status   n/a
  Method 2 Next Log to Archive  n/a
  Method 2 First Failuren/a
  Log Chain ID  6
  Current LSN   0x00BC027D8E23
 
  $ ls
  S0001503.LOG  S0001508.LOG  S0001513.LOG  S0001518.LOG  S0001523.LOG
  S0001528.LOG  S0001533.LOG  S0001538.LOG  S0001543.LOG  S0001548.LOG
  S0001553.LOG  S0001558

Re: Active logs taking 4 days to delete in 6.2

2011-07-26 Thread Paul Fielding
Hi Kurt,

Ok, this is interesting stuff.  When I do db2pd -db tsmdb1 -logs, I see the
following:

Database Partition 0 -- Database TSMDB1 -- Active -- Up 77 days 15:59:39 --
Date 07/26/2011 06:41:01

Logs:
Current Log Number1327
Pages Written 124336
Cur Commit Disk Log Reads 0
Cur Commit Total Log Reads8
Method 1 Archive Status   Success
Method 1 Next Log to Archive  1327
Method 1 First Failuren/a
Method 2 Archive Status   n/a
Method 2 Next Log to Archive  n/a
Method 2 First Failuren/a
Log Chain ID  0
Current LSN   0x00A6009B090A

AddressStartLSN State  Size   Pages
 Filename
0x0A00010021E4C4F0 00A5C2400010 0x 131072 131072
S0001326.LOG
0x0A00010021E46C70 00A5E2400010 0x 131072 131072
S0001327.LOG
.
[cut out a whole bunch of log files for brevity]
.
0x0A00010021E55B10 00AF82400010 0x 131072 131072
S0001404.LOG
0x0A00010021E5D0D0 00AFA2400010 0x 131072 131072
S0001405.LOG

As you can see, the current log, according to DB2, is 1326, but it has
created logfiles all the way up to 1405 already.  There are 81 logfiles,
taking up 40GB of space.  1326 is roughly 4 days old.  It appears that TSM
is currently creating its logfiles roughly 4 days in advance of actually
writing to them.

Anyone got ideas as to why this behavior occurs, whether I should leave it
this way, or whether there is anything I can do to change it?

regards,

Paul







On Tue, Jul 26, 2011 at 4:20 AM, BEYERS Kurt kurt.bey...@vrt.be wrote:

 The active log space is preallocated at the file system level, you can
 check the current log file (1504)  in use as follows:

 $ db2pd -db tsmdb1 -logs

 Database Partition 0 -- Database TSMDB1 -- Active -- Up 6 days 01:53:15 --
 Date 07/26/2011 12:16:27

 Logs:
 Current Log Number1504
 Pages Written 984
 Cur Commit Disk Log Reads 0
 Cur Commit Total Log Reads0
 Method 1 Archive Status   Success
 Method 1 Next Log to Archive  1504
 Method 1 First Failuren/a
 Method 2 Archive Status   n/a
 Method 2 Next Log to Archive  n/a
 Method 2 First Failuren/a
 Log Chain ID  6
 Current LSN   0x00BC027D8E23

 $ ls
 S0001503.LOG  S0001508.LOG  S0001513.LOG  S0001518.LOG  S0001523.LOG
  S0001528.LOG  S0001533.LOG  S0001538.LOG  S0001543.LOG  S0001548.LOG
  S0001553.LOG  S0001558.LOG
 S0001504.LOG  S0001509.LOG  S0001514.LOG  S0001519.LOG  S0001524.LOG
  S0001529.LOG  S0001534.LOG  S0001539.LOG  S0001544.LOG  S0001549.LOG
  S0001554.LOG  S0001559.LOG
 S0001505.LOG  S0001510.LOG  S0001515.LOG  S0001520.LOG  S0001525.LOG
  S0001530.LOG  S0001535.LOG  S0001540.LOG  S0001545.LOG  S0001550.LOG
  S0001555.LOG  S0001560.LOG
 S0001506.LOG  S0001511.LOG  S0001516.LOG  S0001521.LOG  S0001526.LOG
  S0001531.LOG  S0001536.LOG  S0001541.LOG  S0001546.LOG  S0001551.LOG
  S0001556.LOG  S0001561.LOG
 S0001507.LOG  S0001512.LOG  S0001517.LOG  S0001522.LOG  S0001527.LOG
  S0001532.LOG  S0001537.LOG  S0001542.LOG  S0001547.LOG  S0001552.LOG
  S0001557.LOG  SQLLPATH.TAG

 If it is full, a new log file is used. When a log file is no longer active
 (all SQL statements are committed), it is archived:

 $ db2 get db cfg for tsmdb1 | grep LOGARCH
  First log archive method (LOGARCHMETH1) =
 DISK:/tsm1/archlog/archmeth1/

 So it works as DB2 is supposed to do.

 Best regards,
 Kurt


 -Oorspronkelijk bericht-
 Van: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] Namens Paul
 Fielding
 Verzonden: maandag 25 juli 2011 15:20
 Aan: ADSM-L@VM.MARIST.EDU
 Onderwerp: Re: [ADSM-L] Active logs taking 4 days to delete in 6.2

Yeah, I am.  The thing that's weird is the 4-day delay.  Active logs are
getting deleted, but they're waiting 4 days to do so.  And the behavior
hasn't always been like this since going to 6.2; I just noticed the change
one day.  Very bizarre...


 On Mon, Jul 25, 2011 at 6:59 AM, Zoltan Forray/AC/VCU zfor...@vcu.edu
 wrote:

  Are you doing backup volhist as well?  IIRC, there was a discussion
  that you needed to do that as well to purge activity logs.  Plus it is
  a requirement to perform DB restores on 6.x servers.
 
 
  Zoltan Forray
  TSM Software & Hardware Administrator
  Virginia Commonwealth University
  UCC/Office of Technology Services
  zfor...@vcu.edu - 804-828-4807
  Don't be a phishing victim - VCU and other reputable organizations
  will never use email to request that you reply with your password,
  social security number or confidential personal information. For more
  details visit http://infosecurity.vcu.edu/phishing.html
 
 
 
  From:
  Paul Fielding p...@fielding.ca
  To:
  ADSM-L@VM.MARIST.EDU
  Date:
  07/25/2011 08:50 AM
  Subject:
  [ADSM-L] Active logs taking 4 days to delete in 6.2 Sent by:
  ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU
 
 
 
  So, at one of my client sites I noticed that the Active log filesystem

Active logs taking 4 days to delete in 6.2

2011-07-25 Thread Paul Fielding
So, at one of my client sites I noticed that the Active log filesystem is
sitting at 82% full.  This is not normal for this TSM server.  Looking in
the filesystem I saw active logs going back four days.   Checking the actlog
shows that TSM db backups are still running properly every day, but just to
be safe I ran two db backups in succession.  No logs were removed.

I decided to keep an eye on it.  What I see happening is that each morning
when I look at it, there are still four days' worth of logs, but the oldest
logs are moving forward by a day, i.e. when I looked on July 22, the oldest
log was July 18.  When I looked on July 23, the oldest log was July 19.
 Today, July 25, I see the oldest log is July 21.

This strikes me as a bit bizarre.  Anyone have any ideas?

regards,

Paul


Re: Active logs taking 4 days to delete in 6.2

2011-07-25 Thread Paul Fielding
Yeah, I am.  The thing that's weird is the 4-day delay.  Active logs are
getting deleted, but they're waiting 4 days to do so.  And the behavior
hasn't always been like this since going to 6.2; I just noticed the change
one day.  Very bizarre...


On Mon, Jul 25, 2011 at 6:59 AM, Zoltan Forray/AC/VCU zfor...@vcu.eduwrote:

 Are you doing backup volhist as well?  IIRC, there was a discussion that
 you needed to do that as well to purge activity logs.  Plus it is a
 requirement to perform DB restores on 6.x servers.


 Zoltan Forray
 TSM Software & Hardware Administrator
 Virginia Commonwealth University
 UCC/Office of Technology Services
 zfor...@vcu.edu - 804-828-4807
 Don't be a phishing victim - VCU and other reputable organizations will
 never use email to request that you reply with your password, social
 security number or confidential personal information. For more details
 visit http://infosecurity.vcu.edu/phishing.html



 From:
 Paul Fielding p...@fielding.ca
 To:
 ADSM-L@VM.MARIST.EDU
 Date:
 07/25/2011 08:50 AM
 Subject:
 [ADSM-L] Active logs taking 4 days to delete in 6.2
 Sent by:
 ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU



 So, at one of my client sites I noticed that the Active log filesystem is
 sitting at 82% full.  This is not normal for this TSM server.  Looking in
 the filesystem I saw active logs going back four days.   Checking the
 actlog
 shows that TSM db backups are still running properly every day, but just
 to
 be safe I ran two db backups in succession.  No logs were removed.

 I decided to keep an eye on it.  What I see happening is that each morning
 when I look at it, there are still four days' worth of logs, but the oldest
 logs are moving forward by a day, i.e. when I looked on July 22, the
 oldest
 log was July 18.  When I looked on July 23, the oldest log was July 19.
  Today, July 25, I see the oldest log is July 21.

 This strikes me as a bit bizarre.  Anyone have any ideas?

 regards,

 Paul



Re: TS3200 tape library checkin question

2011-07-20 Thread Paul Fielding
checkin libvol [libraryname] search=bulk status=scratch checklabel=barcode

unless the tapes are brand new and have never been electronically labeled
before, in which case try:

label libvol [libraryname] search=bulk checkin=scratch labelsource=barcode

regards,

Paul


On Wed, Jul 20, 2011 at 8:23 PM, Paul_Dudley pdud...@anl.com.au wrote:

 We have an IBM TS3200 tape library, which takes both LTO4 and LTO3 tapes. The
 I/O cartridge has capacity to slot in 3 tapes at a time. However, I am not
 sure of the correct command to check in 3 scratch tapes at one time
 and get the tape library to search the I/O cartridge for each of the 3
 scratch tapes. Can someone advise me on the correct syntax of the checkin
 command for this?





 Thanks & Regards

 Paul



 Paul Dudley

 Senior IT Systems Administrator

 ANL Container Line Pty Limited

 Email:  mailto:pdud...@anl.com.au pdud...@anl.com.au











Re: Backup specific folder only

2011-07-18 Thread Paul Fielding
Indeed, just using the EXCLUDE as I specified will still grab the directory
tree. It's a longer story, but the Reader's Digest version is that doing so
ensures the TSM BA client GUI can drill down to any given directory to find
files you may have backed up.

You can't use EXCLUDE.DIR on the E:\ directory, otherwise you won't get
the Oracle directory either, for the reasons I specified previously.

You could use EXCLUDE.DIR on all the other directories in E: as
suggested by Gary, but then (also as he suggested) you'll have more
administrative overhead, as you need to check periodically to ensure that all
the directories in the E: drive are still being excluded - if someone adds a
dir to the E: drive afterwards without telling you, it'll get backed up.

Andy's suggestion for doing a separate schedule that targets the directory
is actually pretty good, you'd get only the directory you want and don't
risk someone doing something on the server to inadvertently cause you to
start backing up unwanted data.  However then you're maintaining a separate
schedule (not really a big deal).
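The distinction between a plain EXCLUDE and EXCLUDE.DIR can be sketched as a toy model. This is illustrative only (the pattern is simplified from `E:\Oracle\...\*` to a single wildcard, and the `backs_up` helper is invented); it shows why a file-level EXCLUDE still records the directory tree while a drive-level EXCLUDE.DIR would prune everything:

```python
# Toy model of the include/exclude behavior discussed above (illustrative,
# not the TSM pattern engine). A blanket EXCLUDE of E:\ files plus an
# INCLUDE of the Oracle tree skips other files but still records directory
# entries, so the GUI can drill down; EXCLUDE.DIR on E:\ would instead
# prune the whole drive, Oracle included.

import fnmatch

INCLUDE = r"E:\Oracle\*"   # simplified stand-in for E:\Oracle\...\*
EXCLUDE = r"E:\*"

def backs_up(path, is_dir):
    if is_dir:
        return True                      # directory entries still captured
    if fnmatch.fnmatch(path, INCLUDE):
        return True                      # the INCLUDE overrides the EXCLUDE
    return not fnmatch.fnmatch(path, EXCLUDE)

objects = [
    (r"E:\Oracle\data\file1.dbf", False),  # wanted file: backed up
    (r"E:\Other\report.txt",      False),  # other file: skipped
    (r"E:\Other",                 True),   # directory: tree still recorded
]
print([backs_up(p, d) for p, d in objects])   # [True, False, True]
```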

As far as fear of the DB growing, I would ask you this - do you really have
that many subdirectories in the Oracle dir that you're going to see any kind
of noticeable db growth as a result?  Even if you have a hundred or a few
hundred other directories there, it is not going to impact your db in any
way that you could ever notice.  If you had thousands of subdirs in there,
then yes, I might be a bit concerned, though I'd be less concerned about the
size of the db growth and more concerned about how long it takes to
traverse the directories unnecessarily.  In general, though, I've found
that it hasn't had any kind of a super negative impact.

I would also re-ask the question - are you really, really sure you don't
want to back up anything else on this server, especially the OS?  As I
mentioned previously, in most TSM installations, the size difference that
backing up the rest of the data/OS will make to the total storage is small,
and having the ability to restore the odd file here or there if someone
mucks up is worth its weight in gold.

Paul


On Mon, Jul 18, 2011 at 6:04 AM, Andrew Raibeck stor...@us.ibm.com wrote:

 Another solution is to use a customized schedule for this node that targets
 the desired directory:

 DEFINE SCHEDULE domain schedname OBJECTS=E:\Oracle\ OPTIONS=-SUBDIR=YES

 And keep the INCLUDE statement in dsm.opt:

 Include E:\Oracle\...\* 3mth_grp2_MC

 Best regards,

 Andy Raibeck
 IBM Software Group
 Tivoli Storage Manager Client Product Development
 Level 3 Team Lead
 Internal Notes e-mail: Andrew Raibeck/Hartford/IBM@IBMUS
 Internet e-mail: stor...@us.ibm.com

 IBM Tivoli Storage Manager support web page:

 http://www.ibm.com/support/entry/portal/Overview/Software/Tivoli/Tivoli_Storage_Manager

 ADSM: Dist Stor Manager ADSM-L@vm.marist.edu wrote on 2011-07-18
 07:25:48:

  From: Lee, Gary D. g...@bsu.edu
  To: ADSM-L@vm.marist.edu
  Date: 2011-07-18 07:33
  Subject: Re: Backup specific folder only
  Sent by: ADSM: Dist Stor Manager ADSM-L@vm.marist.edu
 
  Try the following.
 
  Add
 
  Exclude.dir e:\dir1
  Exclude.dir e:\dir2
  . . .
 
  For each major tree on the e: drive.  This will keep tsm from
  traversing the trees at all.
  More administration, but the only thing I can think of.
  Exclude.dir is processed before standard excludes, therefore you
  cannot exclude.dir e:\ then include what you wanted.
 
  Hope this helps.
 
 
 
  Gary Lee
  Senior System Programmer
  Ball State University
  phone: 765-285-1310
 
 
  -Original Message-
  From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
 Gibin
  Sent: Monday, July 18, 2011 2:26 AM
  To: ADSM-L@VM.MARIST.EDU
  Subject: [ADSM-L] Backup specific folder only
 
  Thanks Paul, made the changes in the server dsm.opt
 
 
  DOMAIN E:
  Exclude E:\...\*
  Include E:\Oracle\...\* 3mth_grp2_MC
 
 
  Now only files/folders from E:\Oracle get backed up. But I noticed
  a problem when using the mentioned include/exclude: all the sub-
  folders from all the other excluded folders get backed up as
  well. This, I feel, can cause unnecessary TSM DB growth, as each file/
  folder backed up consumes 4 KB of TSM database space.
 
 
  Is there some way to avoid TSM backing up the subfolder structure
  from all the other excluded directories?
 
  +--
  |This was sent by gibi...@gmail.com via Backup Central.
  |Forward SPAM to ab...@backupcentral.com.
  +--



Re: Backup specific folder only

2011-07-18 Thread Paul Fielding
Correction to my previous message - I asked if you really have that many
directories in the Oracle dir, when I meant to say do you really have that
many other directories on the E: drive?

Paul


On Mon, Jul 18, 2011 at 8:54 AM, Paul Fielding p...@fielding.ca wrote:

 Indeed, just using the EXCLUDE as I specified will still grab the directory
 tree. It's a longer story, but the Reader's Digest version is that doing so
 ensures the TSM BA client GUI can drill down to any given directory to find
 files you may have backed up.

 You can't use EXCLUDE.DIR on the E:\ directory, otherwise you won't get
 the Oracle directory either for the reasons I specified previously.

 You could use the EXCLUDE.DIR on all the other directories in E: as
 suggested by Gary, but then (also as he suggested) you'll have more
 administrative overhead as you need to check periodically to ensure that all
 the directories in the E: drive are still being excluded - if someone adds a
 dir to the E: drive afterwards without telling you, it'll get backed up.

 Andy's suggestion of a separate schedule that targets the directory
 is actually pretty good: you'd get only the directory you want and don't
 risk someone doing something on the server that inadvertently causes you to
 start backing up unwanted data.  However, then you're maintaining a separate
 schedule (not really a big deal).

 As far as fear of the DB growing, I would ask you this - do you really have
 that many subdirectories in the Oracle dir that you're going to see any kind
 of noticeable db growth as a result?  Even if you have a hundred or a few
 hundred other directories there, it is not going to impact your db in any
 way that you could ever notice.  If you had thousands of subdirs in there,
 then yes, I might be a bit concerned, though I'd be less concerned about the
 size of the db growth and more concerned about how long it takes to
 traverse the directories unnecessarily.  In general, though, I've found
 that it hasn't been any kind of a super negative impact.

 I would also re-ask the question - are you really, really sure you don't
 want to back up anything else on this server, especially the OS?  As I
 mentioned previously, in most TSM installations, the size difference that
 backing up the rest of the data/OS will make to the total storage is small,
 and having the ability to restore the odd file here or there if someone
 mucks up is worth its weight in gold.

 Paul


 On Mon, Jul 18, 2011 at 6:04 AM, Andrew Raibeck stor...@us.ibm.comwrote:

 Another solution is to use a customized schedule for this node that
 targets
 the desired directory:

 DEFINE SCHEDULE domain schedname OBJECTS=E:\Oracle\ OPTIONS=-SUBDIR=YES

 And keep the INCLUDE statement in dsm.opt:

 Include E:\Oracle\...\* 3mth_grp2_MC

 Best regards,

 Andy Raibeck
 IBM Software Group
 Tivoli Storage Manager Client Product Development
 Level 3 Team Lead
 Internal Notes e-mail: Andrew Raibeck/Hartford/IBM@IBMUS
 Internet e-mail: stor...@us.ibm.com

 IBM Tivoli Storage Manager support web page:

 http://www.ibm.com/support/entry/portal/Overview/Software/Tivoli/Tivoli_Storage_Manager

 ADSM: Dist Stor Manager ADSM-L@vm.marist.edu wrote on 2011-07-18
 07:25:48:

  From: Lee, Gary D. g...@bsu.edu
  To: ADSM-L@vm.marist.edu
  Date: 2011-07-18 07:33
  Subject: Re: Backup specific folder only
  Sent by: ADSM: Dist Stor Manager ADSM-L@vm.marist.edu
 
  Try the following.
 
  Add
 
  Exclude.dir e:\dir1
  Exclude.dir e:\dir2
  . . .
 
  For each major tree on the e: drive.  This will keep tsm from
  traversing the trees at all.
  More administration, but the only thing I can think of.
  Exclude.dir is processed before standard excludes, therefore you
  cannot exclude.dir e:\ then include what you wanted.
 
  Hope this helps.
 
 
 
  Gary Lee
  Senior System Programmer
  Ball State University
  phone: 765-285-1310
 
 
  -Original Message-
  From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf
 Of
 Gibin
  Sent: Monday, July 18, 2011 2:26 AM
  To: ADSM-L@VM.MARIST.EDU
  Subject: [ADSM-L] Backup specific folder only
 
  Thanks Paul, made the changes in the server dsm.opt
 
 
  DOMAIN E:
  Exclude E:\...\*
  Include E:\Oracle\...\* 3mth_grp2_MC
 
 
  Now only files/folders from E:\Oracle get backed up. But I noticed
  a problem when using the mentioned include/exclude: all the sub-
  folders from all the other excluded folders get backed up as
  well. This, I feel, can cause unnecessary TSM DB growth, as each file/
  folder backed up consumes 4 KB of TSM database space.
 
 
  Is there some way to avoid TSM backing up the subfolder structure
  from all the other excluded directories?
 
  +--
  |This was sent by gibi...@gmail.com via Backup Central.
  |Forward SPAM to ab...@backupcentral.com.
  +--





Re: Scalar 50 support TSM 6.2?

2011-07-18 Thread Paul Fielding
http://www-01.ibm.com/software/sysmgmt/products/support/IBM_TSM_Supported_Devices_for_AIXHPSUNWIN.html

and more specifically:

https://www-304.ibm.com/support/docview.wss?rs=663&uid=swg21273206



On Mon, Jul 18, 2011 at 9:59 PM, lavan tsm-fo...@backupcentral.com wrote:

 Hi all

  Are Scalar 50 tape drives supported by TSM 6.2?

 thanks in advance,
 velavan

 +--
 |This was sent by rvela...@gmail.com via Backup Central.
 |Forward SPAM to ab...@backupcentral.com.
 +--



Re: Backup specific folder only

2011-07-17 Thread Paul Fielding
I would suggest approaching it a different way.  You have to keep in mind
that include/exclude lists are processed from the bottom up.  If you put an
exclude in the list, and put an include below it, the include will override
the exclude.  Additionally, since you're only interested in the E: drive,
you can use the DOMAIN statement to ensure that only that drive gets
touched.  The way I would accomplish what you want is:

DOMAIN E:
Exclude E:\...\*
Include  E:\Oracle\...\*  3mth_grp2_MC

Because the Domain statement only has the E: drive in it, that's the only
drive that will be touched.  Because the Include line is below the Exclude,
it gets processed first, so the Oracle directory will be backed up even
though the exclude line says to exclude everything.  This should accomplish
what you want.

Note that I didn't use Exclude.dir; this was intentional.  Exclude.dir tells
TSM not to traverse the directory at all.  Because of this, if I had used
Exclude.dir E:\...\*, then the E: drive would never get traversed, and the
TSM client would never figure out that there was an Oracle directory there
to back up, even though you have the Include statement in place.  In
general I only like to use exclude.dir in a situation where you're
traversing a directory that has so many millions of files that it puts too
big a load on the client to finish in any reasonable amount of time.
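A hedged sketch of that narrow use case (the directory name is a made-up placeholder):

```
* dsm.opt: never descend into this multi-million-file tree
EXCLUDE.DIR E:\HugeScratchArea
```

Because EXCLUDE.DIR stops traversal entirely, nothing beneath the directory is even examined, which is what saves the scan time - and also why it must not be applied to a tree you still want to include from.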

All that being said, are you sure you want to back up only the E:\Oracle
directory?  I know lots of people who say "I don't care about the operating
system; if the box craters, I'd rather rebuild it."  This is fine, but what
if you don't crater the whole box, but rather accidentally delete or corrupt
just a few files on the C: drive?  It would really suck to have to rebuild
the box because you can't restore those couple of files.  Some people argue
that backing up the C: drive adds unnecessary bulk to your tape storage.  I
say that the amount of space taken up by the OS in TSM is trivial compared
to the amount of data that generally gets backed up and will not make
any noticeable difference to the bottom line in most cases; therefore I'd
rather have it backed up and not restore it, than not have it backed up when
I need it.

regards,

Paul


On Sun, Jul 17, 2011 at 7:59 AM, Gibin tsm-fo...@backupcentral.com wrote:

 On one of our servers with C and E drives, I want to back up only the
 Oracle folder on the E drive. The E drive has the directory layout as follows:

 07/17/2011  04:37 PM    <DIR>          54354365430O46est
 06/29/2010  02:31 PM         6,131,936 iis60rkt.exe
 07/17/2011  04:37 PM    <DIR>          Old Oracle home
 07/17/2011  03:20 PM    <DIR>          Oracle
 07/17/2011  04:37 PM    <DIR>          Oracle1234
 06/29/2010  11:16 AM    <DIR>          Sder
 06/29/2010  03:17 PM    <DIR>          SW_DVD5

 I have put include-exclude entries in my dsm.opt file as:

 Include  E:\Oracle\...\*  3mth_grp2_MC
 EXCLUDE.DIR [a-df-z]:\*
 EXCLUDE.DIR E:\[a-np-z]*


 When I ran the backup, it backed up all directories (+files) beginning
 with O, like Old Oracle home, Oracle, 54354365430O46est, and Oracle1234.

 Please advise me how to make sure only the Oracle directory and its files
 are backed up and all other folders/files get skipped.

 +--
 |This was sent by gibi...@gmail.com via Backup Central.
 |Forward SPAM to ab...@backupcentral.com.
 +--



Re: NetApp NDMP backups to TSM server?

2011-07-15 Thread Paul Fielding
Yup.  Look in the TSM Admin Guide, Chapter 8 - Using NDMP

Go to "Backing up and restoring NAS file servers using NDMP" - "Performing
NDMP filer to Tivoli Storage Manager server backups".

The short story - if you point the copygroup for the NDMP mgmt class to a
regular stgpool rather than one configured for NDMP SAN operations, it will
automatically attempt to use IP.  You can basically completely bypass all
the backup-over-SAN bits.
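As a hedged sketch of the server-side setup (the domain, class, pool, filer, and volume names below are placeholders, not from the thread):

```
* Point the NDMP management class at a conventional (non-NDMP) pool:
define copygroup NASDOM STANDARD NDMPCLASS type=backup destination=DISKPOOL
activate policyset NASDOM STANDARD

* A normal NDMP backup then sends the filer data over IP to that pool:
backup node mynasfiler /vol/vol1 mode=full
```

Because the destination pool is not configured for NDMP SAN operations, the server moves the filer data over the IP network instead.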

Paul


On Fri, Jul 15, 2011 at 7:14 AM, Paul Zarnowski p...@cornell.edu wrote:

 I thought I read somewhere that you could configure NetApp servers to
 direct their NDMP backup data stream to a TSM server (over IP network)
 instead of to an attached tape drive.  I am having trouble finding this
 documented anywhere, however, and am beginning to doubt my memory.  Can
 anyone tell me definitively whether this is possible or not?
 Thanks.


 --
 Paul ZarnowskiPh: 607-255-4757
 Manager, Storage Services Fx: 607-255-8521
 719 Rhodes Hall, Ithaca, NY 14853-3801Em: p...@cornell.edu



Re: Nodes with different drive list under One Scheduler.

2011-07-13 Thread Paul Fielding
So, I apologize if I missed you saying this, but is there a particular
reason you want to schedule Selective full backups for all these nodes,
rather than just doing incrementals?

Also, is there a reason you can't set up 10 schedules that are all scheduled
for the same time?

regards,

Paul


On Wed, Jul 13, 2011 at 3:36 AM, Lakshminarayanan, Rajesh 
rajesh.lakshminaraya...@dfs.com wrote:

 Hi Richard,

  I tried the following options:

 Scenario 1:

Policy Domain Name: WORKFLOW
 Schedule Name: TEST_FULL_BKP
   Description:
Action: Selective
   Options: -subdir=yes
   Objects: *
  Priority: 5

  Result: only files under the c$\Program Files\Tivoli\TSM\baclient\
  directory got backed up.  DOMAIN ALL-LOCAL was specified in the dsm.opt of
  the client node.

 Scenario 2:

Policy Domain Name: WORKFLOW
 Schedule Name: TEST_FULL_BKP
   Description:  Incremental Backup Of WorkFlow Servers
Action: Selective
   Options: -subdir=yes
   Objects:

  Result: I am getting an error message saying ANS1079E No file
  specification entered.



  Am I missing anything?



 Regards,

 Rajesh Lakshminarayanan


 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
 Richard Sims
 Sent: Tuesday, July 12, 2011 7:01 PM
 To: ADSM-L@VM.MARIST.EDU
 Subject: Re: [ADSM-L] Nodes with different drive list under One
 Scheduler.

 On Jul 12, 2011, at 2:04 AM, Lakshminarayanan, Rajesh wrote:

  Hi Richard,
 
   Thanks for your suggestion... the Domain option is specified in my client
   dsm.opt file. But still my schedule ends with a failure message (RC=12).

 
   Should I remove the drive names specified in the Objects
   parameter of the schedule when the domain option is specified in the
   dsm.opt file?

 Yes; the Objects list will cause the DOMain statement specs to be
 disregarded, much like performing 'dsmc incremental a: b: c:'.  The
 point of the DOMain option is to allow each client to have a drives list
 which is specific to its environment, given that computer disk
 complements can vary so much.
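A hedged sketch of the combination Richard describes (the domain name, schedule name, and drive letters are placeholders):

```
* On the server -- one schedule shared by all ten nodes, with no Objects list:
define schedule MYDOMAIN GRP1_INCR action=incremental options="-subdir=yes"

* In each node's dsm.opt -- list only the drives that node actually has:
DOMAIN c: d: f:
```

With Objects left empty, each client falls back to its own DOMAIN statement, so no node is asked to back up a drive it doesn't have.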

 And: In your backups, be certain that you really want a Selective backup
 rather than Incremental, to avoid bloating your TSM server storage
 pools.

Richard Sims



Re: Nodes with different drive list under One Scheduler.

2011-07-11 Thread Paul Fielding
If your only constraint is that you need to run them at the same day and
time, is there any reason you can't set up 10 separate schedules but set all
10 schedules for the same day and time?



On Mon, Jul 11, 2011 at 4:41 AM, Lakshminarayanan, Rajesh 
rajesh.lakshminaraya...@dfs.com wrote:

 Hi Guys,

    I have a scenario where I need to schedule a full backup for 10
  client nodes (Windows servers) under one schedule in the TSM server.  All
  the client nodes have different drive lists.  If I group them under
  one schedule with the drive list specified in the object parameter of the
  schedule, I get backup failure messages from the client nodes, as not all
  the drives specified in the schedule are available on each client
  node.  I don't want to schedule 10 separate schedules, one for each client
  node, in the TSM server.  I have constraints to run the backup on the
  same day and time.

    Can someone help me with a workaround for this?


  Example scenario:

  Client Node 1 has C: D: drives
  Client Node 2 has C: D: F: drives
  Client Node 3 has C: D: E: drives
  Client Node 4 has C: D: S: drives
  :
  :
  :
  :
  Client node 10 has C: F: S: drives

 Scheduler details

  Policy Domain Name: Domain
 Schedule Name: GRP1_FULL
   Description: Schedule To Backup windows Servers
Action: Selective
   Options: -subdir=yes -domain=All-local
    Objects: c: d: e: f: g: h: j:



 Regards,

 Rajesh



Re: Linux Server upgrade - 6.1.4.3 to 6.1.5.0

2011-07-03 Thread Paul Fielding
I've performed a couple of 6.1 to 6.1 upgrades successfully.  Perhaps
there was just some oddity with that particular installation?

Sent from my iPhone

On Jul 3, 2011, at 8:39 AM, Zoltan Forray zfor...@vcu.edu wrote:

 Back a while ago I did a virgin install of 6.1.0, then upgraded to 6.1.3.0.
 Started the server... did a few things... shut down... tried to install
 6.2... failed somehow (don't remember what... it has been a year).

 Wouldn't start back up... couldn't uninstall... ended up nuking the box to
 completely remove DB2.

 Remco Post r.p...@plcs.nl wrote:

 On 3 jul 2011, at 14:32, Zoltan Forray/AC/VCU wrote:


 I wonder if/when there will be a 6.1-6.2 upgrade path.


 I wasn't aware that this was impossible. I've performed a 6.1 to 6.2 server 
 upgrade a few times now on an empty database without any problems. What are 
 the issues?

 --
 Met vriendelijke groeten/Kind Regards,

 Remco Post
 r.p...@plcs.nl
 +31 6 248 21 622


Re: volumes associated with a node

2011-06-29 Thread Paul Fielding
Hmmm... When I do show volumeusage NODENAME, I get no result, regardless
of the node...

Paul


On Wed, Jun 29, 2011 at 2:49 PM, Shawn Drew 
shawn.d...@americas.bnpparibas.com wrote:

 I would go with the volumeusage table instead of contents if you just want
 the tapes for a node.
 select distinct(volume_name) from volumeusage where node_name='NODENAME'
 (that will show copypool volumes also)

 Alternatively, there is a show command:
 show volumeusage NODENAME


 Regards,
 Shawn
 
 Shawn Drew





 Internet
 evergreen.sa...@gmail.com

 Sent by: ADSM-L@VM.MARIST.EDU
 06/29/2011 09:09 AM
 Please respond to
 ADSM-L@VM.MARIST.EDU


 To
 ADSM-L
 cc

 Subject
 Re: [ADSM-L] volumes associated with a node






 select volume_name from contents where node_name like '%nodename%' and
 FILESPACE_NAME like '%Filespacename%' and FILE_NAME like '%Filename%'

 On Wed, Jun 29, 2011 at 3:48 PM, molin gregory
 gregory.mo...@afnor.orgwrote:

  Hello,
 
  Try : q nodedata NODENAME stg=
 
  Cordialement,
  Grégory Molin
  Tel : 0141628162
  gregory.mo...@afnor.org
 
  -Message d'origine-
  De : ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] De la part de
  Tim Brown
  Envoyé : mercredi 29 juin 2011 14:40
  À : ADSM-L@VM.MARIST.EDU
  Objet : [ADSM-L] volumes associated with a node
 
  Is there a select statement that will list all tape volumes that have
 files
  for
 
  a given TSM node.
 
 
 
  Thanks,
 
 
 
  Tim Brown
  Systems Specialist - Project Leader
   Central Hudson Gas & Electric
  284 South Ave
  Poughkeepsie, NY 12601
   Email: tbr...@cenhud.com
  Phone: 845-486-5643
  Fax: 845-486-5921
  Cell: 845-235-4255
 
 
 
 
  This message contains confidential information and is only for the
 intended
  recipient. If the reader of this message is not the intended recipient,
 or
  an employee or agent responsible for delivering this message to the
 intended
  recipient, please notify the sender immediately by replying to this note
 and
  deleting all copies and attachments.
 
 
  This message and any attachments are confidential and intended to be
  received only by the addressee. If you are not the intended recipient,
  please notify immediately the sender by reply and delete the message and
 any
  attachments from your system. 
 



 --
 Thanks & Regards,
 Sarav
 +974-3344-1538

 There are no secrets to success. It is the result of preparation, hard
 work
 and learning from failure - Colin Powell



 This message and any attachments (the message) is intended solely for
 the addressees and is confidential. If you receive this message in error,
 please delete it and immediately notify the sender. Any use not in accord
 with its purpose, any dissemination or disclosure, either whole or partial,
 is prohibited except formal approval. The internet can not guarantee the
 integrity of this message. BNP PARIBAS (and its subsidiaries) shall (will)
 not therefore be liable for the message if modified. Please note that
 certain
 functions and services for BNP Paribas may be performed by BNP Paribas RCC,
 Inc.



Re: Two HSM quesions

2011-06-23 Thread Paul Fielding
On Thu, Jun 23, 2011 at 7:07 AM, Stefan Folkerts stefan.folke...@gmail.com
 wrote:

 The trick is that the backup in TSM IS the migrated version of the file, so
 when you migrate you backup.


It's been awhile since I've used HSM on Unix, but (unless something has
changed) I disagree with this statement.

In Unix HSM (which is the only one I've used), migrated files and backed up
files are not the same thing.  A migrated file is still at risk of loss.
Migrated files use different rules than backed up files, and as such need a
separate backup in order to ensure you're protected.

The most prudent way to do this safely is to ensure that you have TSM set to
require that a file be backed up prior to being migrated (set
MIGREQUIRESBACKUP in the mgmt class that you're using for HSM).  This defines
that a file will not be migrated until it has previously been backed up.  HSM
data is independent of backup data, can be kept in differing storage pools if
desired, and does indeed take up its own space.
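A hedged sketch of that setting (the domain, policy set, and class names are placeholders; check the syntax for your server level):

```
* Require a current backup before a file is eligible for migration:
update mgmtclass MYDOM STANDARD HSMCLASS migrequiresbkup=yes
activate policyset MYDOM STANDARD
```

The parameter is abbreviated MIGREQUIRESBkup in the server command reference; the policy set must be re-activated for the change to take effect.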

As long as you have this set, and run regular incremental backups, you'll
have a good backup of the data in the event the migrated file gets
deleted/corrupted/lost.   You won't need to worry about the migrated state
of your files - TSM will keep the full file backed up, regardless if it is
resident or migrated.

regards,

Paul

This is different on Windows, where a backup and a migrated file are two
 different things, because HSM for Windows is archive based and therefore not
 as nice.
 HSM for Unix rocks!

 On Thu, Jun 23, 2011 at 2:05 PM, Mehdi Salehi ezzo...@gmail.com wrote:

  Hi,
  I have two HSM questions:
  - Is HSM for Unix included in TSM b/a client?
  - What is the way to backup an HSM-managed filesystem? The illusion for
 me
  is that HSM data is not fixed, some of it might be on tape today, but
 based
  on the configurations and actually the need for data, at another time the
  contents of the filesystem would be totally different. How to
  protect/backup
  the data?
 
  Thank you,
  Mehdi
 



Re: Performance in 6.2.2.x with Win2k8 64bit?

2011-06-16 Thread Paul Fielding
I have a client who went from v5.5 32b to 6.2.2.2 64b on new hardware.
 So far they're running noticeably faster

Paul

Sent from my iPhone

On Jun 15, 2011, at 10:36 PM, Prather, Wanda wprat...@icfi.com wrote:

 Can anyone who has converted from V5 to V6.2.2.0 or V6.2.2.2 on Win2K8 64b 
 confirm for me that you are running OK?

 Have a customer who did the conversion from V5 on Win2K3 32b  to V6.2.2.2 on 
 Win2K8 64b and new server hardware.  Performance is measurably degraded.  
 Background processes are slower by 10-18%; client backups are showing 
 commwait increased by an order of magnitude.

 I'm suspecting it's in the hardware config somewhere, but can't find it.  If 
 anybody can confirm upgrade success on Win2K8 64b I would appreciate it 
 before I go upgrade anyone to this level.   :)

 Thanks!

 Wanda Prather  |  Senior Technical Specialist  |  wprat...@icfi.com  |  www.jasi.com
 ICF Jacob & Sundstrom  |  401 E. Pratt St, Suite 2214, Baltimore, MD 21202  |
 410.539.1135


Re: TSM Suite for Unified Recovery

2011-06-10 Thread Paul Fielding
I haven't seen the pricing structure, but I suspect that containing costs
completely depends on your site.   Since they appear to be basing it on a
capacity model rather than server/product model, I would suggest that:

- if you have a large number/types of servers to backup, but aren't really
huge from a TB capacity perspective, you'll save money
- if you have a relatively small number of servers to back up and/or you
store a large amount of data on TSM, you'll spend money

I suspect that you'd have to model it out both ways for your site and see
what comes out of it...

Paul


On Fri, Jun 10, 2011 at 9:59 AM, David E Ehresman
deehr...@louisville.eduwrote:

 I haven't heard anyone talking about this recently announced offering.
 Is IBM finally responding to complaints about their pricing structure?
 Is it really likely to contain costs?

 http://www-01.ibm.com/common/ssi/rep_ca/1/897/ENUS211-201/ENUS211-201.PDF



Re: Tsm server possibly limiting backup performance

2011-05-27 Thread Paul Fielding
I think clarification is needed on mb/s vs. MB/s.

a 1gb (gigabit) (note lowercase) connection will certainly do 800 mb/s
(megabit/s).  it won't, however, do 800 MB/s (megabyte/s)
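A quick worked check of the distinction (theoretical line rate only; real-world TCP throughput lands lower, close to the 110 MB/s figure mentioned in the quoted message):

```python
# 1 Gb Ethernet line rate, expressed in bits per second
link_bits_per_s = 1_000_000_000

# 8 bits per byte -> theoretical ceiling in bytes per second
link_bytes_per_s = link_bits_per_s / 8

print(link_bytes_per_s / 1_000_000)  # 125.0 MB/s -- so 800 MB/s is impossible
                                     # on 1 Gb, while 800 mb/s (megabit) is fine
```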

Paul


On Fri, May 27, 2011 at 8:09 AM, Richard Rhodes rrho...@firstenergycorp.com
 wrote:

 A little more explanation is going to be needed.  A 1GB ethernet is not
 going to give 800mb/s, let alone 200mb/s.  1GB ethernet is only going to
 provide one direction throughput of 110mb/s.  Do you mean a 10gb ethernet
 connection?





 From:   Lee, Gary D. g...@bsu.edu
 To: ADSM-L@VM.MARIST.EDU
 Date:   05/27/2011 09:35 AM
 Subject:Tsm server possibly limiting backup performance
 Sent by:ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU



 Tsm server 6.2.2
 OS redhat enterprise linux 6.0
 Client tdp for exchange, sqlserver, and backup archive client

 Client connected via 1 gb ethernet connection.

 In my experience, I have never been able to see throughput of much more
 than 200 mb/s during a tsm backup, whether tdp or regular client.
 I wonder, is TSM limiting a single session's bandwidth so as to ensure
 availability for other client sessions which might begin; i.e. prevent one
 backup stream from monopolizing the server's network connection?

 I have run ftp  and other tests to the server, and achieved throughput of
 around 800 mb/s over the same connection.



 Gary Lee
 Senior System Programmer
 Ball State University
 phone: 765-285-1310





 -
 The information contained in this message is intended only for the
 personal and confidential use of the recipient(s) named above. If
 the reader of this message is not the intended recipient or an
 agent responsible for delivering it to the intended recipient, you
 are hereby notified that you have received this document in error
 and that any review, dissemination, distribution, or copying of
 this message is strictly prohibited. If you have received this
 communication in error, please notify us immediately, and delete
 the original message.



Re: TSM Recovery log is pinning since upgrade to 5.5.5.0 code

2011-05-13 Thread Paul Fielding
Hi Eric,

Indeed there may be something broken, but you have to understand that from
IBM's perspective, they have fixed it - you go to TSM 6.2.  5.5 will
eventually be removed from support (whether we like it or not) and then
you'll be really stuck.

The reason I asked about the DB sizes is on topic.  I would suggest that the
published formulas for how long the TSM upgrade takes are inaccurate.  I
have performed a number of TSM 5.5 to 6.2 upgrades, some on Windows, some on
AIX.

The smallest one had a 140GB database, it took 8.5h from start to finish.
The biggest one had a 350GB database, it took 26h from start to finish.

There are several factors that will impact how long your upgrade will take,
not just db size.  The horsepower of the TSM 6.2 box (whether upgrading in
place or upgrading to a new server), and TSM 6.2 disk layout are two big
ones.

The single best thing you can do, rather than assuming your upgrade will
take 3 days, is to test it.  Setup a test server, restore the 5.5 db, then
upgrade it.   If doing a network upgrade to new box (ideal situation), then
setup your new TSM 6.2 server, setup a test 5.5 server, restore the 5.5 db,
then do the network upgrade from the test box to your 6.2 server.  My
clients (understandably) want to have a realistic idea of how long the
upgrade will take, so I've insisted on doing a test upgrade on every upgrade
I've done so far.  In each case my production upgrade came to within 1-2h of
the timing that the test took.

I guess what I'm saying is that you're going to have some pain no matter
what you do, but the pain to upgrade may not be as bad as you think, and the
potential payoff is big

regards,

Paul


On Thu, May 12, 2011 at 9:05 AM, Loon, EJ van - SPLXO eric-van.l...@klm.com
 wrote:

 Hi Paul!
 I have 3 servers with 86, 100 and 129 gb databases. I haven't tried
 migrating, but I read a document somewhere which contained some figures
 about what performance one can expect from the migration process. At
 that time I roughly calculated 3 and a half days.
 Anyway, let's stay on the subject here: there is something broken in TSM
 (remember, I'm not the only one struggling with the recovery log size)
 and IBM should seriously look into it and fix it. I never had any
 problems with the log when we were using 5.3!
 Kind regards,
 Eric van Loon
 KLM Royal Dutch Airlines

 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
 Paul Fielding
 Sent: donderdag 12 mei 2011 15:53
 To: ADSM-L@VM.MARIST.EDU
 Subject: Re: TSM Recovery log is pinning since upgrade to 5.5.5.0 code

 A couple of questions, Eric

 - how big is your DB
 - have you tried a test upgrade to an alternate box to see how long it
 will
 take?

 Indeed, I agree that you're going to have to do something, whether it's
 take
 the outage to go to V6 or start to migrate to another product.  However,
 if
 you are in a position to move to another product, then (worst case
 scenario), you're probably in a position to move to a clean V6 server
 without migrating.   The one thing you really can't do is stay where you
 are
 forever, at least not with support

 Paul

 On Thu, May 12, 2011 at 7:38 AM, David E Ehresman
 deehr...@louisville.eduwrote:

  Eric,
 
  Pointing out the obvious here, but if you really can't afford to move
  to TSM v6 then its time for you to start planning your move to
 something
  else.  Support for TSM 5 will be dropped sometime and I suspect the
 move
  from TSM to something else will take some planning.
 
  David
 
   Loon, EJ van - SPLXO eric-van.l...@klm.com 5/12/2011 8:56 AM
  
  Hi Steve!
  If it would be possible, I would have already done that, but migrating
  our servers from 5.5 to 6.2 would require multiple days of downtime.
   Simply not acceptable in our shop...
  Kind regards,
  Eric van Loon
  KLM Royal Dutch Airlines
 
  -Original Message-
  From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf
  Of
  Paul Fielding
  Sent: donderdag 12 mei 2011 14:33
  To: ADSM-L@VM.MARIST.EDU
  Subject: Re: TSM Recovery log is pinning since upgrade to 5.5.5.0 code
 
  You could also go to TSM 6.2, which changes to DB2 and changes the way
  recovery logs are dealt with, including allowing for much, much more
  log
 
 
  On Thu, May 12, 2011 at 5:53 AM, Steve Roder s...@buffalo.edu wrote:
 
   What's pinning your log?  Do a show logpinned, and then look at what
   that client is doing.  If it is a TDP, they are known for pinning
  the
   log and causing these kinds of issues.
  
   We see this issue also, and it is nearly always caused by an Oracle
  TDP
   or an Exchange TDP backup.
  
   Thanks,
   Steve
  
  
Hi Paul!
   We are already running 5.5.5.2 and the log is still filling up, even
   after switching from rollforward to normal mode.
   Management currently is questioning whether TSM is the right product for
   the future. Although I'm a big fan of TSM for 15 years, I'm really in
   doubt too

Re: TSM Recovery log is pinning since upgrade to 5.5.5.0 code

2011-05-12 Thread Paul Fielding
You could also go to TSM 6.2, which changes to DB2 and changes the way
recovery logs are dealt with, including allowing for much, much more log


On Thu, May 12, 2011 at 5:53 AM, Steve Roder s...@buffalo.edu wrote:

 What's pinning your log?  Do a show logpinned, and then look at what
 that client is doing.  If it is a TDP, they are known for pinning the
 log and causing these kinds of issues.

 We see this issue also, and it is nearly always caused by an Oracle TDP
 or an Exchange TDP backup.

 Thanks,
 Steve


  Hi Paul!
 We are already running 5.5.5.2 and the log is still filling up, even
 after switching from rollforward to normal mode.
 Management currently is questioning whether TSM is the right product for
 the future. Although I'm a big fan of TSM for 15 years, I'm really in
 doubt too...
 Kind regards,
 Eric van Loon
 KLM Royal Dutch Airlines

 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
 Paul Fielding
 Sent: woensdag 11 mei 2011 20:05
 To: ADSM-L@VM.MARIST.EDU
 Subject: Re: TSM Recovery log is pinning since upgrade to 5.5.5.0 code

 Hi folks, if this has already been commented on, my apologies, I haven't
 been following closely but just noticed this thread.

 We were experiencing log pinned issues after upgrading to 5.5.0.0 code at
 one of my client sites.  What we were finding was that, occasionally, I'd
 get up in the morning and look at the server to see it completely bogged
 down with a huge number of backups still attempting to run, but nothing
 was moving - the log was at 84% (over its trigger), and it had been firing
 off db backups for the last 4h to no avail; the log wasn't getting low
 enough.

 A key symptom was that in the actlog we were seeing messages to the effect
 of Recovery log is at 84%, transactions will be delayed by 3ms (or
 something like that).

 In opening a ticket, it was found there was an issue in the 5.5.0.0 code
 where, when the recovery log gets too full, it would start delaying
 transactions in order to prevent the log from filling too quickly.
 However, the side effect was that the pinning transaction would also get
 delayed, causing a bit of a never-ending loop.  Transactions keep getting
 delayed, the pinned transaction never gets to finish, and everything would
 just grind to a halt.  I would have to halt the server and restart in
 order to get things back to normal.

 IBM recognized this as an issue and recommended going to any 5.5.5.0 level
 of code, where the problem was supposed to be fixed.  I installed 5.5.5.2,
 and the problem has indeed gone away.   It was supposed to be fixed at
 5.5.5.0, though perhaps they didn't quite get it done at that code level
 as they hoped?  I'd try installing 5.5.5.2 and see what happens

 regards,

 Paul


 On Wed, May 11, 2011 at 9:03 AM, Loon, EJ van - SPLXO
 eric-van.l...@klm.com

 wrote:
 Hi Robert!
 Thank you very much for your reply! Several others on this list
 reported this behavior and (as far as I know) three other users opened a
 PMR too. I hope they have more luck, because I'm stuck. Level 2 keeps on
 saying that the log keeps on growing because of slow running client
 sessions. Indeed I see slow running client sessions, but they are slowed
 down by the fact that TSM is delaying all transactions because the log
 is used for more than 80% during a large part of the backup window! Now
 they refuse to help me, unless I buy a Passport Advantage contract!!

 Kind regards,
 Eric van Loon
 KLM Royal Dutch Airlines

 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf

 Of

 Robert Clark
 Sent: woensdag 11 mei 2011 16:05
 To: ADSM-L@VM.MARIST.EDU
 Subject: Re: TSM Recovery log is pinning since upgrade to 5.5.5.0 code

 I believe we've been seeing this problem as well.

 One night in the busiest backup period, I issued a q actlog
 begint=now-00:30, and got no results back but an error.

 I started dsmadmc -console on that server, and could see that the
 console output was most of an hour behind. (And the console output was
 scrolling so fast that it could barely be read.)

 In that case, I think we determined that SQL LiteSpeed was set to some
 ridiculously small transaction size, and this was causing way too many
 actlog entries.

 I think I also noted that the session number incremented by something
 like 100,000 in an hour.

 Asking the users of SQL LiteSpeed to make some changes was enough to
 remedy this problem, although we continue to fight with the logs getting
 full.

 Thanks,
 [RC]



 From:   Loon, EJ van - SPLXOeric-van.l...@klm.com
 To: ADSM-L@VM.MARIST.EDU
 Date:   05/11/2011 02:05 AM
 Subject:Re: [ADSM-L] TSM Recovery log is pinning since upgrade
 to
 5.5.5.0 code
 Sent by:ADSM: Dist Stor ManagerADSM-L@VM.MARIST.EDU



 Hi TSM-ers!
 Here is a small follow up on my PMR about the recovery log utilization.
 I'm at TSM level 2, still trying

Re: TSM Recovery log is pinning since upgrade to 5.5.5.0 code

2011-05-12 Thread Paul Fielding
A couple of questions, Eric

- how big is your DB
- have you tried a test upgrade to an alternate box to see how long it will
take?

Indeed, I agree that you're going to have to do something, whether it's take
the outage to go to V6 or start to migrate to another product.  However, if
you are in a position to move to another product, then (worst case
scenario), you're probably in a position to move to a clean V6 server
without migrating.   The one thing you really can't do is stay where you are
forever, at least not with support

Paul

On Thu, May 12, 2011 at 7:38 AM, David E Ehresman
deehr...@louisville.edu wrote:

 Eric,

 Pointing out the obvious here, but if you really can't afford to move
 to TSM v6 then it's time for you to start planning your move to something
 else.  Support for TSM 5 will be dropped sometime and I suspect the move
 from TSM to something else will take some planning.

 David

  Loon, EJ van - SPLXO eric-van.l...@klm.com 5/12/2011 8:56 AM
 
 Hi Steve!
 If it would be possible, I would have already done that, but migrating
 our servers from 5.5 to 6.2 would require multiple days of downtime.
 Simply not acceptable in our shop...
 Kind regards,
 Eric van Loon
 KLM Royal Dutch Airlines

 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf
 Of
 Paul Fielding
 Sent: donderdag 12 mei 2011 14:33
 To: ADSM-L@VM.MARIST.EDU
 Subject: Re: TSM Recovery log is pinning since upgrade to 5.5.5.0 code

 You could also go to TSM 6.2, which changes to DB2 and changes the way
 recovery logs are dealt with, including allowing for much, much more
 log


 On Thu, May 12, 2011 at 5:53 AM, Steve Roder s...@buffalo.edu wrote:

  What's pinning your log?  Do a show logpinned, and then look at what
  that client is doing.  If it is a TDP, they are known for pinning
 the
  log and causing these kinds of issues.
 
  We see this issue also, and it is nearly always caused by an Oracle
 TDP
  or an Exchange TDP backup.
 
  Thanks,
  Steve
 
 
   Hi Paul!
  We are already running 5.5.5.2 and the log is still filling up, even
  after switching from rollforward to normal mode.
  Management currently is questioning whether TSM is the right product for
  the future. Although I'm a big fan of TSM for 15 years, I'm really in
  doubt too...
  Kind regards,
  Eric van Loon
  KLM Royal Dutch Airlines
 
  -Original Message-
  From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On
 Behalf
 Of
  Paul Fielding
  Sent: woensdag 11 mei 2011 20:05
  To: ADSM-L@VM.MARIST.EDU
  Subject: Re: TSM Recovery log is pinning since upgrade to 5.5.5.0
 code
 
  Hi folks, if this has already been commented on, my apologies, I haven't
  been following closely but just noticed this thread.

  We were experiencing log pinned issues after upgrading to 5.5.0.0 code at
  one of my client sites.  What we were finding was that, occasionally, I'd
  get up in the morning and look at the server to see it completely bogged
  down with a huge number of backups still attempting to run, but nothing
  was moving - the log was at 84% (over its trigger), and it had been
  firing off db backups for the last 4h to no avail; the log wasn't getting
  low enough.

  A key symptom was that in the actlog we were seeing messages to the
  effect of Recovery log is at 84%, transactions will be delayed by 3ms
  (or something like that).

  In opening a ticket, it was found there was an issue in the 5.5.0.0 code
  where, when the recovery log gets too full, it would start delaying
  transactions in order to prevent the log from filling too quickly.
  However, the side effect was that the pinning transaction would also get
  delayed, causing a bit of a never-ending loop.  Transactions keep getting
  delayed, the pinned transaction never gets to finish, and everything
  would just grind to a halt.  I would have to halt the server and restart
  in order to get things back to normal.

  IBM recognized this as an issue and recommended going to any 5.5.5.0
  level of code, where the problem was supposed to be fixed.  I installed
  5.5.5.2, and the problem has indeed gone away.   It was supposed to be
  fixed at 5.5.5.0, though perhaps they didn't quite get it done at that
  code level as they hoped?  I'd try installing 5.5.5.2 and see what
  happens
 
  regards,
 
  Paul
 
 
  On Wed, May 11, 2011 at 9:03 AM, Loon, EJ van - SPLXO
  eric-van.l...@klm.com
 
  wrote:
  Hi Robert!
  Thank you very much for your reply! Several others on this list
  reported this behavior and (as far as I know) three other users opened a
  PMR too. I hope they have more luck, because I'm stuck. Level 2 keeps on
  saying that the log keeps on growing because of slow running client
  sessions. Indeed I see slow running client sessions, but they are slowed
  down by the fact that TSM is delaying all transactions because the log
  is used for more than 80% during a large

Re: TSM Recovery log is pinning since upgrade to 5.5.5.0 code

2011-05-11 Thread Paul Fielding
Hi folks, if this has already been commented on, my apologies, I haven't
been following closely but just noticed this thread.

We were experiencing log pinned issues after upgrading to 5.5.0.0 code at
one of my client sites.  What we were finding was that, occasionally, I'd
get up in the morning and look at the server to see it completely bogged
down with a huge number of backups still attempting to run, but nothing was
moving - the log was at 84% (over its trigger), and it had been firing off
db backups for the last 4h to no avail; the log wasn't getting low enough.

A key symptom was that in the actlog we were seeing messages to the effect
of Recovery log is at 84%, transactions will be delayed by 3ms (or
something like that).

In opening a ticket, it was found there was an issue in the 5.5.0.0 code
where, when the recovery log gets too full, it would start delaying
transactions in order to prevent the log from filling too quickly.  However,
the side effect was that the pinning transaction would also get delayed,
causing a bit of a never-ending loop.  Transactions keep getting delayed,
the pinned transaction never gets to finish, and everything would just grind to
a halt.  I would have to halt the server and restart in order to get things
back to normal.
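The never-ending loop described above can be sketched as a toy simulation (this is not TSM code; the threshold, tick units, and numbers are invented purely for illustration):

```python
# Toy model of the 5.5.0.0 feedback loop described above (not TSM code):
# once the log passes a threshold the server delays *all* transactions,
# including the long-running one that pins the log, so the log never drains.

def run(log_size=100, pin_work=50, delay_all=True, ticks=500):
    """Return log utilization history; the pinning transaction needs
    `pin_work` units of progress before its log space can be reclaimed."""
    used, pin_done = 60, 0
    history = []
    for _ in range(ticks):
        throttled = used >= 80            # "transactions will be delayed"
        # The pinning transaction progresses unless it, too, is throttled.
        if not (throttled and delay_all):
            pin_done += 1
        if pin_done >= pin_work:
            used = 10                     # pin released: log space reclaimed
        else:
            used = min(log_size, used + (0 if throttled else 1))
        history.append(used)
    return history

# With the bug (delay_all=True) the log sticks at the throttle threshold;
# with the fix (delay_all=False) the pinning txn finishes and the log drains.
buggy = run(delay_all=True)
fixed = run(delay_all=False)
```

In the buggy mode the utilization parks at the threshold forever because the pinning transaction is throttled along with everything else; in the fixed mode the pinning transaction completes and the log empties, which matches the behavior change reported at 5.5.5.x.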

IBM recognized this as an issue and recommended going to any 5.5.5.0 level
of code, where the problem was supposed to be fixed.  I installed 5.5.5.2,
and the problem has indeed gone away.   It was supposed to be fixed at
5.5.5.0, though perhaps they didn't quite get it done at that code level as
they hoped?  I'd try installing 5.5.5.2 and see what happens

regards,

Paul


On Wed, May 11, 2011 at 9:03 AM, Loon, EJ van - SPLXO eric-van.l...@klm.com
 wrote:

 Hi Robert!
 Thank you very much for your reply! Several others on this list
 reported this behavior and (as far as I know) three other users opened a
 PMR too. I hope they have more luck, because I'm stuck. Level 2 keeps on
 saying that the log keeps on growing because of slow running client
 sessions. Indeed I see slow running client sessions, but they are slowed
 down by the fact that TSM is delaying all transactions because the log
 is used for more than 80% during a large part of the backup window! Now
 they refuse to help me, unless I buy a Passport Advantage contract!!
 Kind regards,
 Eric van Loon
 KLM Royal Dutch Airlines

 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
 Robert Clark
 Sent: woensdag 11 mei 2011 16:05
 To: ADSM-L@VM.MARIST.EDU
 Subject: Re: TSM Recovery log is pinning since upgrade to 5.5.5.0 code

 I believe we've been seeing this problem as well.

 One night in the busiest backup period, I issued a q actlog
 begint=now-00:30, and got no results back but an error.

 I started dsmadmc -console on that server, and could see that the
 console
 output was most of an hour behind. (And the console output was scrolling
 so fast that it could barely be read.)

 In that case, I think we determined that SQL LiteSpeed was set to some
 ridiculously small transaction size, and this was causing way too many
 actlog entries.

 I think I also noted that the session number incremented by something
 like
 100,000 in an hour.

 Asking the users of SQL LiteSpeed to make some changes was enough to
 remedy this problem, although we continue to fight with the logs
 getting
 full.

 Thanks,
 [RC]



 From:   Loon, EJ van - SPLXO eric-van.l...@klm.com
 To: ADSM-L@VM.MARIST.EDU
 Date:   05/11/2011 02:05 AM
 Subject:Re: [ADSM-L] TSM Recovery log is pinning since upgrade
 to
 5.5.5.0 code
 Sent by:ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU



 Hi TSM-ers!
 Here is a small follow up on my PMR about the recovery log utilization.
 I'm at TSM level 2, still trying to convince them that there is
 something broken in the TSM server code. To convince them, I have
 changed the logmode to normal on one of my servers. I created a graph
 (through TSMManager) which shows the recovery log utilization during
 last night's client backup window and it doesn't differ much from the
 night before, with logmode rollforward. When running in normal mode, TSM
 should only use the recovery log for uncommitted transactions, so
 utilization should be very low. My log is 12 GB and the backuptrigger
 value (75%) was still hit twice!
 This clearly shows that there is something wrong with TSM; let's hope I
 can convince Level 2 too, so my case gets forwarded to the lab.
 I'll keep you guys posted!
 Kind regards,
 Eric van Loon
 KLM Royal Dutch Airlines
 
 For information, services and offers, please visit our web site:
 http://www.klm.com. This e-mail and any attachment may contain
 confidential and privileged material intended for the addressee only. If
 you are not the addressee, you are notified that no part of the e-mail
 or
 any attachment may be disclosed, copied or distributed, and that 

lbtest sometimes doesn't return barcodes

2011-05-09 Thread Paul Fielding
I have a weird issue I'm hoping someone has seen before.   I have a 3584
tape library attached to an AIX box.  I have a bunch of perl scripts that
deal with tape automation.  Part of this process uses lbtest to grab the
current contents of the IO door.   However, I've had it happen a few times
where things stopped working, and when I investigate I find that lbtest has
suddenly stopped returning barcodes.   All of the elements show FULL or
EMPTY, but no barcodes are listed, and barcode_len = 0.

If I go to the library web gui, I see the barcodes.  And if I just leave
things alone, in 24h or so it seems to go back to normal.  Nothing I do to
the library (including rebooting it) seems to fix the problem immediately.

Very, very weird.  Anyone seen this before?

regards,

Paul
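A script driving lbtest could guard against this symptom by sanity-checking the inventory before acting on it. A small Python sketch follows; note that the element-line format used here is an invented simplification for illustration, not real lbtest output:

```python
# Hypothetical sanity check for library inventory output: before trusting
# an inventory, verify that FULL elements actually carry barcodes.  The
# line format below is an assumption for illustration, not real lbtest
# output.

import re

SAMPLE_OK = """\
slot 1025 FULL barcode=AB0001L2
slot 1026 EMPTY barcode=
slot 1027 FULL barcode=AB0002L2
"""

SAMPLE_BAD = """\
slot 1025 FULL barcode=
slot 1026 EMPTY barcode=
"""

def missing_barcodes(inventory_text):
    """Return slot numbers reported FULL but with no barcode."""
    missing = []
    for line in inventory_text.splitlines():
        m = re.match(r"slot (\d+) (FULL|EMPTY) barcode=(\S*)", line)
        if m and m.group(2) == "FULL" and not m.group(3):
            missing.append(int(m.group(1)))
    return missing

print(missing_barcodes(SAMPLE_OK))   # []
print(missing_barcodes(SAMPLE_BAD))  # [1025]
```

If FULL elements come back without barcodes, the wrapper can retry later or raise an alert instead of concluding the I/O door is empty.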


Re: lbtest sometimes doesn't return barcodes

2011-05-09 Thread Paul Fielding
Well, that's the strange part. I'm confident it's not a conflict - when
there is a conflict lbtest returns an error and I can check for that.  The
library is returning an inventory, providing all info about what is in each
slot *except* for the barcode :(

Paul

On Mon, May 9, 2011 at 4:26 PM, Marcel Anthonijsz mar...@anthonijsz.net wrote:

 Paul,

 No, I have not seen this before: I use IBM Atape and tapeutil -f /dev/smc0
 inventory without any problem.

 Make sure that TSM or another program is not using the library device at
 the
 same time or program around that.
 Maybe a previous perl script that hangs/waits until next timeout?

 Marcel

 2011/5/9 Paul Fielding p...@fielding.ca

  I have a weird issue I'm hoping someone has seen before.   I have a 3584
  tape library attached to an AIX box.  I have a bunch of perl scripts that
  deal with tape automation.  Part of this process uses lbtest to grab the
  current contents of the IO door.   However, I've had it happen a few
 times
  where things stopped working, and when I investigate I find that lbtest
 has
  suddenly stopped returning barcodes.   All of the elements show FULL or
  EMPTY, but no barcodes are listed, and barcode_len = 0.
 
  If I go to the library web gui, I see the barcodes.  And if I just leave
  things alone, in 24h or so it seems to go back to normal.  Nothing I do
 to
  the library (including rebooting it) seems to fix the problem
 immediately.
 
  Very, very weird.  Anyone seen this before?
 
  regards,
 
  Paul
 



 --
 Kind Regards, Groetje,

 Marcel Anthonijsz
 T: +31(0)299-776768
 M:+31(0)6-2423 6522



Re: lbtest sometimes doesn't return barcodes

2011-05-09 Thread Paul Fielding
Side note - I thought I'd give tapeutil a go, but it does not appear to be
installed on my box.  Running Atape 12.2.4.0.   Anyone know if tapeutil should
still be installed with this version of Atape, or if there's somewhere else
I could look for it?

regards,

Paul


On Mon, May 9, 2011 at 4:26 PM, Marcel Anthonijsz mar...@anthonijsz.net wrote:

 Paul,

 No, I have not seen this before: I use IBM Atape and tapeutil -f /dev/smc0
 inventory without any problem.

 Make sure that TSM or another program is not using the library device at
 the
 same time or program around that.
 Maybe a previous perl script that hangs/waits until next timeout?

 Marcel

 2011/5/9 Paul Fielding p...@fielding.ca

  I have a weird issue I'm hoping someone has seen before.   I have a 3584
  tape library attached to an AIX box.  I have a bunch of perl scripts that
  deal with tape automation.  Part of this process uses lbtest to grab the
  current contents of the IO door.   However, I've had it happen a few
 times
  where things stopped working, and when I investigate I find that lbtest
 has
  suddenly stopped returning barcodes.   All of the elements show FULL or
  EMPTY, but no barcodes are listed, and barcode_len = 0.
 
  If I go to the library web gui, I see the barcodes.  And if I just leave
  things alone, in 24h or so it seems to go back to normal.  Nothing I do
 to
  the library (including rebooting it) seems to fix the problem
 immediately.
 
  Very, very weird.  Anyone seen this before?
 
  regards,
 
  Paul
 



 --
 Kind Regards, Groetje,

 Marcel Anthonijsz
 T: +31(0)299-776768
 M:+31(0)6-2423 6522



Re: Calling all IBMtape Packrats

2011-05-02 Thread Paul Fielding
Hi Joerg,

I hope you're doing well.  I don't think I got the archive.  Did you email
me a link?

regards,

Paul


On Fri, Apr 29, 2011 at 6:31 PM, Joerg Pohlmann jpohlm...@shaw.ca wrote:

 Paul, I have sent you an old archive (Jan 2010) from before the change to
 Fix Central.

 Joerg Pohlmann

 - Original Message -
 From: Paul Fielding p...@fielding.ca
 Date: Friday, April 29, 2011 2:48 pm
 Subject: [ADSM-L] Calling all IBMtape Packrats
 To: ADSM-L@VM.MARIST.EDU

  Hi folks,
 
  I have someone who has an (out of maintenance) 3592-J1A (FC)
  Magstar tape
  drive with a need to attach it to either Windows 2000 or Windows
  XP.   The
  currently available IBMtape drivers only support 2003 and 2008.
 
  However, I recall from experience that IBMtape used to be
  supported on 2000,
  and some online digging suggests that at one time it was
  supported on XP.
  Normally I would have just gone to the boulder FTP site and
  downloaded older
  drivers.  However it appears that recently that's all come
  down in favour of
  fix central.  :(
 
  SOOOooo knowing that at one time in my life I used to
  packrat this sort
  of thing, I'm wondering if there's anyone out there who may have
  stowed away
  somewhere older IBMtape drivers that support 2000 or XP?
  :)  If so, any
  chance you're willing to share?
 
  regards,
 
  Paul
 



Calling all IBMtape Packrats

2011-04-29 Thread Paul Fielding
Hi folks,

I have someone who has an (out of maintenance) 3592-J1A (FC) Magstar tape
drive with a need to attach it to either Windows 2000 or Windows XP.   The
currently available IBMtape drivers only support 2003 and 2008.

However, I recall from experience that IBMtape used to be supported on 2000,
and some online digging suggests that at one time it was supported on XP.
Normally I would have just gone to the boulder FTP site and downloaded older
drivers.  However it appears that recently that's all come down in favour of
fix central.  :(

SOOOooo knowing that at one time in my life I used to packrat this sort
of thing, I'm wondering if there's anyone out there who may have stowed away
somewhere older IBMtape drivers that support 2000 or XP?  :)  If so, any
chance you're willing to share?

regards,

Paul


Re: TSM Express - need to change default policy which is 14 days

2007-02-20 Thread Paul Fielding

Hi Andrew,

I know this is a delayed response, just now going through the list to find
info on this question.

Frankly, I think IBM is making a critical mistake in not allowing TSM
Express users to change the number of days that backups are kept.   Out of all
the things that could be chosen to 'leave out' of TSM Express in the name of
simplicity, this is the wrong one to do so with.

While it probably isn't wise to give people the full range of retention
policies available to full TSM users, they should, at a minimum, be able to
say they want to keep their data for 7, 14, 21 days, etc., or something
along those lines.

I have several clients for whom TSM Express would have otherwise been an
excellent choice of backup product who have chosen to go with other products
(such as Backup Exec) for the simple reason that they can't control how long
their backups are being kept for.  The idea that a 'simple' TSM precludes
being able to answer the simple question of "How long do I keep my backups
for?" is extremely shortsighted on the part of the developers (or marketing
folk, if they made the decision).

In my initial testing of TSM Express, it appears to have promise, and with a
few improvements would be an excellent product for small businesses who
can't afford to go to Enterprise level backups.  I cannot, however, give it
a wholesale recommendation to my clients without the ability to define how
long backups are kept for.

Don't quote me on this, but I'd be very surprised if there is another backup
product in existence which doesn't have this feature. (At least not one that
has any foot in the marketplace).

I notice that there seems to be very little discussion on TSM Express going
on.   That's unfortunate as it could fill an important niche.   I'm hoping
that IBM will listen to the few cries for help out there and give small
businesses a backup solution that they can both afford and use
effectively.

regards,

Paul Fielding
Anisoft Group Inc.


-- Original Message --

Subject:  Re: TSM Express - need to change default policy which is 14 days
From:  Andrew Raibeck storman AT US.IBM DOT COM
To:  ADSM-L AT VM.MARIST DOT EDU
Date:  Sun, 18 Jun 2006 08:31:27 -0600

Wira,

The policy settings in TSM Express cannot be changed. This is one of the
trade-offs that we made in creating an easy TSM.

Regards,

Andy

Andy Raibeck
IBM Software Group
Tivoli Storage Manager Client Development
Internal Notes e-mail: Andrew Raibeck/Tucson/[EMAIL PROTECTED]
Internet e-mail: storman AT us.ibm DOT com

IBM Tivoli Storage Manager support web page:
http://www-306.ibm.com/software/sysmgmt/products/support/IBMTivoliStorageManager.html

The only dumb question is the one that goes unasked.
The command line is your friend.
Good enough is the enemy of excellence.


Vista support on TSM Express?

2007-02-20 Thread Paul Fielding
Has anyone heard anything about Vista support on TSM Express?  Coming down the 
pipe?

regards,

Paul


Re: Upgrade 5.1 to 5.3 on new server

2005-08-08 Thread Paul Fielding

There's several ways you could do this migration, but the way I would do it
is:

1. upgrade the old server to TSM 5.3, after backing up the DB of course.  I
wouldn't bother putting the ISC on the old box at this time; it's not needed
to get TSM up to 5.3.  Note - The last upgrade of TSM from 5.1 to 5.3 on AIX
I performed took upwards of 8 hours for a 40GB database - lots of database
changes that it needs to deal with between 5.1 and 5.3.

2. Export the database

3. install 5.3 (and the ISC) on the new server and import the database.
You could do a DB backup/restore, but the export/import will give you a
nicer, cleaner, friendlier database at the end of the day.

You could move to the new server on 5.1 and *then* upgrade TSM to 5.3
afterwards, but then you have the whole ugliness of uninstalling/installing
new code, etc.   Personally, I like doing it as cleanly on the new server as
possible

later,

Paul

- Original Message -
From: Dirk Kastens [EMAIL PROTECTED]
To: ADSM-L@VM.MARIST.EDU
Sent: Monday, August 08, 2005 9:05 AM
Subject: [ADSM-L] Upgrade 5.1 to 5.3 on new server



Hi,

we're running TSM 5.1 on AIX 5.1. We bought a new server
with AIX 5.3 where we want to install TSM 5.3. What is the
best way to migrate the data to the new server?
Would it be possible to import the 5.1 database and
server export files into the new server? Or do we
first have to upgrade the old server to TSM 5.3,
backup the database and then restore it to the new
server?
--
Regards,

Dirk Kastens
Universitaet Osnabrueck, Rechenzentrum (Computer Center)
Albrechtstr. 28, 49069 Osnabrueck, Germany
Tel.: +49-541-969-2347, FAX: -2470



Re: 5.1.7 on NT skipping drives

2005-06-30 Thread Paul Fielding

This was indeed the problem.  The D: drive did not give SYSTEM access to the
drive, so we saw no errors, no comment on success or failure to back up D:,
etc.  As soon as we added SYSTEM to the permissions for the drive,
everything worked fine.

Thanks everyone...

Paul

- Original Message -
From: William Jean [EMAIL PROTECTED]
To: ADSM-L@VM.MARIST.EDU
Sent: Wednesday, June 29, 2005 10:58 AM
Subject: Re: [ADSM-L] 5.1.7 on NT skipping drives



Does the service run as the system account?  If so make sure that it has
access to the drive at the root.  If the service account running the TSM
scheduler service does not have access to the root of the drive it will
not be added to the domain list and you will not get any errors.
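One way to surface this kind of silent skip is to name the drives explicitly rather than relying on the implicit all-local domain: with an explicit DOMAIN statement the client should log an error when a listed drive cannot be accessed, instead of quietly leaving it out. A minimal dsm.opt sketch (the drive letters and options here are illustrative, not taken from this thread):

```
* dsm.opt fragment - list the drives explicitly so an inaccessible drive
* produces an error instead of silently dropping out of the backup
DOMAIN C: D:
PASSWORDACCESS GENERATE
```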



From: ADSM: Dist Stor Manager on behalf of Paul Fielding
Sent: Wed 6/29/2005 12:52 PM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] 5.1.7 on NT skipping drives



Hi everyone,
After doing a few searches and not finding an answer to this I figured I'd
see if anyone else has encountered this.

Running the Windows 5.1.7 client (the last supported client for NT) on NT 4
(sp6 I think).

Two drives, C: and D:

During a scheduled incremental, the C: drive gets backed up, system
objects get backed up, but D: drive doesn't get touched.

No error messages, nuttin.  It's as if the drive isn't there.

There's no Domain line in the dsm.opt file, no excludes.

I can go into the GUI and see the drive, and manually backup the drive.

Same client on other similarly configured NT systems at same site seem to
work just fine.

any thoughts?

Paul

__
This email has been scanned by the MessageLabs Email Security System.
For more information please visit http://www.messagelabs.com/email
__



5.1.7 on NT skipping drives

2005-06-29 Thread Paul Fielding
Hi everyone,
After doing a few searches and not finding an answer to this I figured I'd see 
if anyone else has encountered this.

Running the Windows 5.1.7 client (the last supported client for NT) on NT 4 (sp6 I 
think).

Two drives, C: and D:

During a scheduled incremental, the C: drive gets backed up, system objects get 
backed up, but D: drive doesn't get touched.

No error messages, nuttin.  It's as if the drive isn't there.

There's no Domain line in the dsm.opt file, no excludes.

I can go into the GUI and see the drive, and manually backup the drive.

Same client on other similarly configured NT systems at same site seem to work 
just fine.

any thoughts?

Paul


Re: LTO2 unreliability

2005-06-09 Thread Paul Fielding

I had a client once that was experiencing high enough errors with their
tapes and drives (3494-3590E drives) that Imation actually asked to take a
few tapes for testing.   They came back saying that the tapes had high
amounts of a black powder on the tape that appeared to be printer toner.
Upon looking at the computer floor again, it turns out there was a laser
printer in the room that happened to be sitting partially over a perforated
floor tile and was getting blown on constantly by the AC.   Printer was moved
to another location, and errors were largely eliminated.

I'd take another good look around the room to make sure there's nothing else
that could be causing environmental contamination

regards,

Paul

- Original Message -
From: Zoltan Forray/AC/VCU [EMAIL PROTECTED]
To: ADSM-L@VM.MARIST.EDU
Sent: Wednesday, June 01, 2005 11:12 AM
Subject: Re: [ADSM-L] LTO2 unreliability



Environment is not an issue.  This is in a *REAL* computer room (raised
floors, UPS/conditioned electricity, industrial-strength cooling).



Kauffman, Tom [EMAIL PROTECTED]
Sent by: ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU
06/01/2005 01:00 PM
Please respond to
ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU


To
ADSM-L@VM.MARIST.EDU
cc

Subject
Re: [ADSM-L] LTO2 unreliability






Zoltan --

I have an IBM 3584 with 10 LTO2 drives, installed last August. So far,
I've had one service call -- one of the drives swallowed a tape and
wouldn't let go of it. If I remember correctly, we got the tape out but
replaced the drive just as a precaution.

I pump something over 3 TB of data through the library daily, most to
LTO2 tapes. All my offsite copies are written to LTO1 tapes (with the
LTO2 drives), and runs to about 1.6 TB daily.

Could you be having an environmental problem that's contributing to the
failures?

Tom Kauffman
NIBCO, Inc

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of
Zoltan Forray/AC/VCU
Sent: Wednesday, June 01, 2005 10:54 AM
To: ADSM-L@VM.MARIST.EDU
Subject: LTO2 unreliability

Are all LTO2 drives as unreliable as I have been experiencing?

We installed our first 3583 with 2 LTO2 drives about a year ago. Both
drives have been replaced at least once. I just replaced one of the
replacements again (it wouldn't power up). The first replacement was
after about a month of usage!

We just upgraded our configuration by adding 2-drives to the existing
library and purchasing another 3583 with 4-drives.  These have now been in
use for about 2-weeks.

Of the 2-new drives added to the old library, one of them is already
showing signs of problems (shows A! on the panel, which the book says
indicates a drive hardware fault).

In my 9+ years of experience with 3590 (B and E) drives, I have never seen
so many failures. Especially considering the number of tapes that have
been fed through the 3590 drives (the 4-E drives run through over
200-mounts a day).

Is this normal ?

Also, no, I do not push what I consider a lot of data through these drives
(~ 1TB per week).

What kind of experiences do other folks have with IBM 3580 LTO2 drives,
when it comes to reliability ?



Re: TSM/Veritas on the same library

2005-05-07 Thread Paul Fielding
There's no reason you can't share the library.   The 3584 supports
partitioning.  Unlike in your 3494 where the Library Manager controls what
tapes go to what host via category codes, in the 3584 you partition the
library into multiple 'virtual' tape libraries, so some slots and drives are
visible only to one host, and other slots and drives are visible only to the
other host.   The virtual library is presented to the host system as a
single library with only the number of slots that are available in that
partition, and only the drives that are available in that partition, so the
backup software doesn't need to know anything about the competing
product.
The only place you have to be careful is when adding/removing tapes from the
library.  Make sure you're only adding/removing tapes from one server at a
time to ensure you don't inadvertently give Veritas tapes to TSM and vice
versa, etc.
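As a concrete sketch of that care on the TSM side (library and volume names here are hypothetical), check tapes in explicitly by barcode rather than sweeping in everything:

```
/* check in only the tapes intended for the TSM partition */
checkin libvolume LTOLIB1 TSM001 status=scratch checklabel=barcode
/* or, to sweep the whole I/O station into this partition -
   only safe if everything loaded there is meant for TSM: */
checkin libvolume LTOLIB1 search=bulk status=scratch checklabel=barcode
```

Veritas-side media would be inventoried only from its own partition's slots, so as long as the physical tapes go into the right partition, neither product should ever see the other's volumes.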
Shouldn't be a problem, though...
Paul
- Original Message -
From: Gill, Geoffrey L. [EMAIL PROTECTED]
To: ADSM-L@VM.MARIST.EDU
Sent: Friday, May 06, 2005 7:29 PM
Subject: [ADSM-L] TSM/Veritas on the same library

Since there was only one response I think I need to give a bit of info to
help with why I'm asking the question.

We have TSM and Veritas here. Not something I wanted but it's here.
Currently
each reside on their own libraries mainly because one is LTO2(Veritas) and
the other a 3494(TSM). I'd like to start moving TSM to LTO this year. We are in
the midst of standing up a DR site and I'm trying to see how we can bundle
this together. So instead of standing up 2 libraries I'd like to stand up
one large enough to handle both the Veritas and TSM backups. I'd like to add
on to what we have here if they could live together nicely.

The 3494 we have is currently shared by the mainframe and 2 TSM servers. We
already know we have to bring up a 3494 at the DR site for the mainframe/TSM
and another for Veritas. I would like, however, to send part of the 3494 we
have now to handle portions of the TSM backups, along with getting hardware
from IBM. If we stood up an LTO library here to start migrating off the
3494 that is doable. I would then need LTO at the DR site for TSM but would
like to see if a single large enough unit would suffice.

So my original question is still: can an LTO library be shared by both
systems provided they each have their own dedicated drives? Has anyone done
this or think it can be done?

Any information, be it in the form of links to articles or direct knowledge,
would be greatly appreciated.

Thanks,
Geoff


Re: Top 10 Tips for Improving Tivoli Storage Manager Performance

2005-04-28 Thread Paul Fielding
I get the same thing as well.
Unfortunately, I don't particularly want to delete my cookies - I've got a
few useful ones tied in right now...
ah well...
Paul
- Original Message -
From: Iain Barnetson [EMAIL PROTECTED]
To: ADSM-L@VM.MARIST.EDU
Sent: Wednesday, April 27, 2005 8:32 AM
Subject: Re: [ADSM-L] Top 10 Tips for Improving Tivoli Storage Manager
Performance

Dave,
I had the same problem, but closed the browser, deleted cookies, etc., and
re-tried and got in.
Regards,
Iain Barnetson
IT Systems Administrator
UKN Infrastructure Operations
-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of
Richard Sims
Sent: 27 April 2005 14:29
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Top 10 Tips for Improving Tivoli Storage Manager
Performance
Hi, Dave -
Well, I can't speak for IBM Web page processing, but it should work.
I've seen it sometimes be flakey, but not recently. Possibly try from a
fresh browser session. If still nothing, check the What if I can't sign
in with my current IBM ID? at
https://www.ibm.com:443/account/profile/us?page=regfaqhelp , which is
the Help and FAQ page within the registration area. Beyond that, use
the Contact link at the bottom of the page to send email to have someone
see what the problem is.
   Richard Sims
On Apr 27, 2005, at 9:15 AM, Dave Zarnoch wrote:
Rich,
Not to be a pain...
After I registered, every time I try to view this page I get redirected

to Get Access page.
Something I'm missing?



5.3.1 Admin Center Download?

2005-04-02 Thread Paul Fielding
The readme is there, but no download.

And the bit in the readme that lists fixed APARs is blank.

:(

Since most of the things I was looking for in 5.3.1 are hiding in the ISC rather 
than TSM, I guess I'm still waiting...


later,

Paul


Re: DIRMC - Are copypool reclamation performance issues resolved or not.

2005-03-19 Thread Paul Fielding
I'd be interested in more discussion on this point.   My original
understanding was actually a bit different than that.  The impression I had
was that originally directory tree structures were restored before any files
happened, period. Following that, files would be restored.  Net result -
tapes might get mounted twice.
Is my understanding incorrect? (could well be).  If this behavior has indeed
been fixed so that directories are restored as they are hit on the tape
(with a pre-created non-ACLed directory being created first) then it would
indeed make sense that a DIRMC pool is no longer needed.
Is there any documentation on this somewhere I can reference?
regards,
Paul
- Original Message -
From: TSM_User [EMAIL PROTECTED]
To: ADSM-L@VM.MARIST.EDU
Sent: Thursday, March 17, 2005 3:54 PM
Subject: Re: [ADSM-L] DIRMC - Are copypool reclamation performance issues
resolved or not.

If V5.3 in fact only writes in larger blocks, then the smaller directories
may take up more space than required.
Still, that issue aside, you should no longer need to have a DIRMC pool. At
one time there was a feature (or call it a bug) where every directory had
to be restored as it came up, which would cause many more mounts of tape
drives.  For some time now a restore creates a directory (without ACLs) so
that the restore can continue. Then when the directory itself is hit it
will simply restore over top of the directory that was created.  This will
ensure each tape is still only read once.  True, directories are like
small files and just like small files restoring from disk would be faster,
but the bug that used to exist has long since been fixed.
Further, as people implement file device class storage pools and other disk
only solutions like VTLs, I don't see the need for separating the
directories into a separate pool.
Kyle
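For reference, the setup being debated amounts to roughly this on the server side (pool, domain, and class names here are hypothetical):

```
/* a management class whose backup copy group sends
   directories to a pool reserved for them */
define mgmtclass standard standard dirmc
define copygroup standard standard dirmc type=backup destination=dirpool
validate policyset standard standard
activate policyset standard standard
```

plus a dsm.opt line such as `DIRMC dirmc` on each client to bind directories to that class. Kyle's point is that, with the restore-order fix, all of this may no longer buy you anything.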
Rushforth, Tim [EMAIL PROTECTED] wrote:
What in 5.3 warrants new consideration?
The reason we implemented DIRMC is so that when a user restores a file(s)
there are not extra tape mounts to restore the directories We ran into
this on multiple occasions, even when all files were on disk, tape mounts
would occur because the directories were on tape.
Thanks,
Tim Rushforth
City of Winnipeg
-Original Message-
From: TSM_User [mailto:[EMAIL PROTECTED]
Sent: Wed 3/16/2005 6:48 PM
To: ADSM-L@VM.MARIST.EDU
Cc:
Subject: Re: DIRMC - Are copypool reclamation performance issues resolved
or not.

It is fixed, but the reason there have been suggestions to use a file type
device class is because disk pools, unlike sequential pools, are scanned
from beginning to end for every storage pool backup. I have had some
customers that have millions of directories in their DIRMC pool. Even when
none change, the backup runs for hours on that pool. With a file type
device class only the new volumes would be backed up, resulting in a much
faster backup. Now, all that being said, this new feature in V5.3 warrants
new consideration. My new consideration is to stop using DIRMC pools as
the reason they were created in the first place has also long been fixed.
Kyle
Thorneycroft, Doug
wrote:
OK, after spending a large portion of my day reviewing adsm-l posts going
back to 2000, I'm still not sure. Does anyone know if there is still a
performance problem running reclamation on a DIRMC random access disk pool?
I came across one post that said it was supposedly fixed, but recommended
using a file type diskpool to be safe.


Re: DIRMC - Are copypool reclamation performance issues resolved or not.

2005-03-19 Thread Paul Fielding
Hi Richard,
I took a look through the Quickfacts (something I should have done long
ago).  It does indeed suggest that surrogate directories are created and the
real directories are restored as they are hit.
Has anyone really observed this to be genuinely true?  I have in the past
observed the double-tape-mount behavior, and though I understand it is
supposedly fixed, I haven't heard anyone say "I have seen it, I know it
works; you no longer need to keep a dirmc diskpool."
Of course, if it is indeed working as designed now, it doesn't resolve the
other dirmc issues currently being discussed in this thread.
Is there anyone on the list who has in recent history decided to ditch using
a dirmc diskpool altogether and done so with success on the restore side?
regards,
Paul
- Original Message -
From: Richard Sims [EMAIL PROTECTED]
To: ADSM-L@VM.MARIST.EDU
Sent: Saturday, March 19, 2005 4:44 AM
Subject: Re: [ADSM-L] DIRMC - Are copypool reclamation performance issues
resolved or not.

Paul -
This generally falls under the TSM term Restore Order processing. We've
discussed it on the List before. I have an entry on it in ADSM
QuickFacts which you can refer to as a preliminary to further pursuit
in IBM doc.
  Richard Sims    http://people.bu.edu/rbs
On Mar 19, 2005, at 3:06 AM, Paul Fielding wrote:
I'd be interested in more discussion on this point.   My original
understanding was actually a bit different than that.  The impression
I had
was that originally directory tree structures were restored before any
files
happened, period. Following that, files would be restored.  Net result
-
tapes might get mounted twice.
Is my understanding incorrect? (could well be).  If this behavior has
indeed
been fixed so that directories are restored as they are hit on the tape
(with a pre-created non-ACLed directory being created first) then it
would
indeed make sense that a DIRMC pool is no longer needed.
Is there any documentation on this somewhere I can reference?



Re: DIRMC - Are copypool reclamation performance issues resolved or not.

2005-03-16 Thread Paul Fielding
- Original Message -
in a much faster backup.  Now all that being said this new feature in V5.3
warrants new consideration.  My new consideration is to stop using DIRMC
pools as the reason they were created in the first place has also long been
fixed.
Which reason is this that has been fixed?  How about quick restoration of
ACLed directories (still an issue, no?)...
Paul


Re: dsm scheduler on windows

2005-03-15 Thread Paul Fielding
Actually, this brings up a good question that I've never properly tested.
When running a scheduler service *without* an Acceptor service, then you
most definitely need to restart the service for changes to take place.
But how about when running an Acceptor service?  It makes sense to me that
the Acceptor would need to be restarted whenever an option is changed that
the acceptor must deal with, eg. tcpserveraddress, schedmode, etc.  Some
options, however, are only relevant to the scheduler service itself, such as
exclude lists, dirmc, domain, etc.
Since the scheduler service is started at the appropriate time by the
acceptor, does this imply that the acceptor does not need to be restarted
for these options?My personal belief is no - it doesn't need to be
restarted if the changed option is one that the acceptor doesn't touch.  I
haven't tested this properly, however.  Has anyone proved/disproved this?
Related Note:  When changing items in a Cloptset, is there any requirement
to restart a client scheduler?  Documentation doesn't make this clear.
Again, my belief is no, it doesn't need to be done, however I have not
tested this
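For anyone wanting to experiment, the cloptset side is easy to stand up from the admin command line (set, option, and node names here are hypothetical):

```
/* define an option set, put an option in it, point a node at it */
define cloptset winclients description="Options for Windows clients"
define clientopt winclients inclexcl "exclude c:\temp\...\*"
update node mynode cloptset=winclients
```

Whether the scheduler honors a changed clientopt at the next scheduled action without a restart is exactly the open question - since these options come from the server at session start, my untested guess, as above, is that no restart is needed.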
regards,
Paul
- Original Message -
From: Rushforth, Tim [EMAIL PROTECTED]
To: ADSM-L@VM.MARIST.EDU
Sent: Monday, March 14, 2005 3:54 PM
Subject: Re: [ADSM-L] dsm scheduler on windows

If you make a change in dsm.opt you have to restart the schedule service
for scheduled operations to pick up the change.
The schedule service reads the option file at startup.
-Original Message-
From: Mike [mailto:[EMAIL PROTECTED]
Sent: Monday, March 14, 2005 4:03 PM
To: ADSM-L@VM.MARIST.EDU
Subject: dsm scheduler on windows
Trying to set up an easier way to manage the configuration
(dsm.opt) on windows files. One comment mentioned today is
that anytime the dsm.opt changes the dsm sched service must
be cycled. Is that true? I thought only the windows equivalent
program to dsmc would read dsm.opt and as such it is the
only thing that might need cycling after dsm.opt changes.
What's the real answer?
Mike


Re: TSM 5.3 License wizards not being displayed in management console

2005-03-05 Thread Paul Fielding
- Original Message -
From: steve freeman [EMAIL PROTECTED]
Hi Paul,
Thanks for this advice. I will install via the admin cmd line
and disregard what the installation guide says.
That's true, the Admin Guide still says to use the Licensing Wizard. The
Admin Guide is indeed wrong.  They just haven't updated the documentation
yet... :)  Using the draft 5.3 technical guide is the way to go...
regards,
Paul


Re: TSM 5.3 License wizards not being displayed in management console

2005-03-04 Thread Paul Fielding
Hi Steve,
As far as I've been able to tell, the License Wizard doesn't exist in 5.3.
They've finally started to update licensing to better reflect the current
state of TSM affairs.  You no longer license by node, etc.   At the moment,
the only way to license is by using the 'register license' command (see help
register license).   You can find more details about how licensing has
changed in the current TSM 5.3 Technical Guide Redbook Draft, currently
found at:
http://www.redbooks.ibm.com/Redbooks.nsf/RedpieceAbstracts/sg246638.html?Open
On page 69 (or 101 of the pdf).
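For the impatient, the command-line flavor looks roughly like this (the exact .lic file names shipped with your server may vary by edition, so treat these as examples):

```
/* from an administrative client session (dsmadmc) */
register license file=tsmbasic.lic
query license
```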
regards,
Paul
- Original Message -
From: steve freeman [EMAIL PROTECTED]
To: ADSM-L@VM.MARIST.EDU
Sent: Friday, March 04, 2005 7:13 AM
Subject: [ADSM-L] TSM 5.3 License wizards not being displayed in management
console

Hello TSMers.

I have installed TSM5.3 on W2003 and installed ISC - all ok, except that
the management console does not display
the license wizard after they have been installed. I have deinstalled and
reinstalled, and they do not show. I have stopped and
restarted the TSM server and rebooted the Windows 2003 server.

Has anyone else experienced this issue, and are there any workarounds or fixes?




Initial problems and bugs with ISC (long)

2005-03-03 Thread Paul Fielding
Ok IBM, here's my initial ISC findings for you.  There'll probably be more down 
the road, I'm sure

Bugs/Annoyances:

1. When setting the dbbackup trigger, the ISC doesn't let you set 0 
incrementals between fulls.  If I want to have every trigger-initiated 
dbbackup run as a full, I need to set the trigger from the command line.

2. When adding node associations to a client schedule, the final list of nodes 
displayed is only wide enough for nodenames roughly 8 characters wide.  Any 
decently long nodename gets wrapped, making it a pain in the butt to read.

3. The Actlog is a pain in the butt to read.  Either give us back a clean 
window that can display standard actlog output (as per the old web interface) 
or at least fix the actlog display so that it
  a) has the timestamp first, instead of way off to the right where one has to 
scroll to see it, 
  b) let us choose a begintime,
  c) show us *all* entries - currently it seems to me that not all messages get 
displayed.

4. When looking at a client schedule: if there are nodes associated with the 
schedule, the ISC displays only the nodes that are associated (good).  If there 
are *no* nodes associated with the schedule, it displays *all* nodes that exist 
(bad).  It gives the impression that all of those nodes are associated.

5. **BIG BUG**  Management Classes.  The new ISC philosophy seems to be that we 
now hide the existence of Policy Sets and Copy Groups from the end user.  
Policy sets are completely hidden, and copy groups are simply shown as values 
that can be set within a mgmt class.  Ok, I can live with that.  Except that 
it's inconsistent and in one case genuinely wrong.  In order to hide policy 
sets from the end user, the ISC needs (and tries) to validate and activate the 
STANDARD policyset after changes have been made to the mgmt class/copy group.  
If we ignore the fact that one probably shouldn't be blindly letting the ISC 
validate and activate the policy set, we cannot ignore the fact that it doesn't 
always do it.

When you make changes to a mgmt class/copy group, the ISC automatically 
validates/activates the policy set.

However, when you *add* or *remove* a mgmt class/copy group, the ISC does *NOT* 
validate/activate the policy set.  The mgmtclass is added or removed from 
STANDARD, but ACTIVE is untouched.  Worse yet, the ISC happily declares that 
your changes were successful and then displays your changes.  ie. it displays 
what the STANDARD policy set is doing, not what the ACTIVE one is doing.  This 
gives you the false impression that your changes are active.

Fortunately, there is one single place in the ISC where policy sets are 
mentioned - the teeny drop down option under Policy Domains that says Activate 
Policy.  Of course,  this is the one case where rather than being it's usually 
over wordy self it instead explains none of this to the end user.   If you use 
this action, the policy set will indeed be activated.

One of two things needs to happen: a) val/act when adding or removing a mgmt 
class, or b) never val/act and instead make it more clear that this needs to be 
done by the user after making changes.

If IBM fixes nothing else on my above list, this issue must, must, must be 
addressed.

6. Command line.   Please - give us back a usable command line from within the 
ISC.  The great thing about the command line in the old web interface was that 
it sat at the bottom of the browser, out of the way when you were pointin' and 
clickin' around the GUI, but was right-there-now when you wanted to enter a 
command, and the results showed up nice and big in a grand window that was wide 
enough to show the output and you could scroll down easily to see the results.  
The command line interface in the ISC is in a dumb, annoying to reach place - 
you need to make a million clicks to find it, and then it's annoying to use 
when you get there.

The GUI has some advantages, but sometimes there's just no substitute for being 
able to fire off a few ad-hoc commands.

7. The FINISH button.  Every other wizard in existence, after asking you all 
your questions, gives you a FINISH button - at which point you may either click 
on FINISH to make the action take place, or you can cancel out, knowing you 
haven't changed anything.   The ISC wizards actually do their action after you 
press an arbitrary NEXT button, and only displays FINISH *after* the action has 
taken place.   You don't get that last ditch chance to not make your change.  
Not very intuitive.
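For what it's worth, the command-line workarounds for items 1, 3, and 5 above are straightforward (device class and policy names here are hypothetical):

```
/* item 1: a trigger with zero incrementals between fulls */
define dbbackuptrigger devclass=ltoclass logfullpct=80 numincremental=0

/* item 3: actlog with a chosen start time */
query actlog begindate=today begintime=00:00

/* item 5: make mgmt class add/remove changes actually take effect */
validate policyset standard standard
activate policyset standard standard
```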



I'm sure I'll have more, but these are all I can think of right now.  I hope 
somebody passes these on to the right people...

later,

Paul


Re: Initial problems and bugs with ISC (long)

2005-03-03 Thread Paul Fielding
- Original Message -
From: Stapleton, Mark [EMAIL PROTECTED]
If you *really* want to make yourself heard, send your comments to your
local IBM/IBM Business Partner rep. That will make a whole lot more
impact.
Actually, over the years as an IBM Business Partner, I've found that my
voice typically falls on deaf ears when I try to tell IBM.  I've found that
those IBMers who follow this list out of their own interest generally seem
to be better at getting the information to the places where it matters
most
regards,
Paul


ISC in Perspective (Was: Re: [ADSM-L] Old GUI admin in 5.3?)

2005-03-03 Thread Paul Fielding
- Original Message -
From: Sam Sheppard [EMAIL PROTECTED]
With all of the 'horror' stories on the list about 5.3 and the ISC, I
was wondering if the old GUI from Version 3 will still work, at least to
the extent that it does in 5.2?
To be fair, for all my gripes about the ISC, I don't think it's so bad that
people need to consider not using it.  It has its bugs and problems that
need to be worked out, sure.  And it's not exactly the most intuitive
interface in the world by a major stretch.  But then again, the old web
interface was not the most intuitive, either.  In some areas, the old web
interface was far better.  However, in other ways the ISC is much better
than the old web interface.
In either case, it is clear that the best approach is to use a bit of
GUIness along with a bit of CLIness to obtain the (somewhat) best of both
worlds.   In lieu of an intuitive interface, one just needs to take some
extra time to understand what works well and what doesn't work well in the
ISC, and where everything is.  It's unfortunate, but by no means a deal
breaker.   After spending some time on the ISC, with a CLI at my side as
well, I was quickly navigating TSM 5.3 as I would any other version of TSM.
Hopefully IBM will listen and make (many) improvements to the ISC so that
the worst of our issues may be dealt with...
regards,
Paul


Re: Bug? Using multiple client service instances on Windows server

2005-02-16 Thread Paul Fielding
I agree - it's not a show stopper; as long as you know what you're looking for,
it can be worked around.

I've gotten tied up today with a raid array failure so I haven't had a chance
yet to try my findings on a win2003 box - I'd still like to demonstrate that,
since my original findings (and the cause of this thread) were indeed very
different than what I experienced last night on the XP box, so i'm not 100%
convinced yet that the server type isn't an issue.

I'll post as soon as I have had a chance to try it...

regards,

Paul

Quoting Andrew Raibeck [EMAIL PROTECTED]:

 Paul, thanks for all the detail. I was able to reproduce your findings; I
 don't think the TSM server version makes a difference.

 The only thing that seems odd to me is the connection as MATHILDA when
 BOOGA is being configured. Off the top of my head, the obvious answer is
 that the current instance of dsm.exe is running as node MATHILDA, and
 before completing the service configuration, the dsm.exe instance wants to
 authenticate with the server (as MATHILDA). But further investigation is
 required to confirm that this is by design. Looks to me like a minor
 annoyance at most, though.

 Regards,

 Andy

 Andy Raibeck
 IBM Software Group
 Tivoli Storage Manager Client Development
 Internal Notes e-mail: Andrew Raibeck/Tucson/[EMAIL PROTECTED]
 Internet e-mail: [EMAIL PROTECTED]

 The only dumb question is the one that goes unasked.
 The command line is your friend.
 Good enough is the enemy of excellence.

 ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU wrote on 2005-02-15
 21:44:13:

  Hi Andy,
 
  This is interesting. (Warning, long message)
 
  I tried doing this tonight on my XP SP2 system with 5.3.0 client to
  demonstrate, since I don't have a server handy (will try with 2003
 server
  tomorrow hopefully).
 
  I didn't see the behavior I previously described.  But the behavior I
 did
  see was even more interesting.  Below I'll describe the steps, in order.
  In
  the places where I break out into console log entries, this is the point
  during the install where that log item showed up in the console.
 
  Here's what I did:
  XP SP2 system with TSM 5.3.0 installed, no services installed.
  Two nodes - MATHILDA and BOOGA  (hey, it's late)
  Two optfiles:
 
  dsm.opt, default location:
nodename mathilda
PASSWORDACCESS GENERATE
schedmode prompted
TCPSERVERADDRESS x
schedlogretention 7
errorlogretention 7
MANAGEDSERVICES WEBCLIENT SCHEDULE
 
  dsm2.opt, default location:
nodename booga
PASSWORDACCESS GENERATE
schedmode prompted
TCPSERVERADDRESS x
schedlogretention 7
errorlogretention 7
MANAGEDSERVICES WEBCLIENT SCHEDULE
 
  A. Install Acceptor #1
  1. Startup GUI
  2. Setup Wizard, check both install web client and scheduler
  3. Install new acceptor
  4. Named TSM Client Acceptor (default name)
  5. default dsm.opt file (in the tsm\baclient directory)
 
  --console log--
  ANR0406I Session 6664 started for node MATHILDA (WinNT) (Tcp/Ip
  142.163.252.132(4496)).
  ANR0403I Session 6664 ended for node MATHILDA (WinNT).
  --end console log--
 
  6. http port 1581
  7. Node MATHILDA, Pass MATHILDA, check validation
  8. Start with System account, start manually
  9. Named TSM Remote Client Agent (default name)
  10. No revocation
  11. Do not start now
  12. Finish.
 
  --console log--
  ANR0406I Session 6665 started for node MATHILDA (WinNT) (Tcp/Ip
  142.163.252.132(4510)).
  ANR0403I Session 6665 ended for node MATHILDA (WinNT).
  ANR0406I Session  started for node MATHILDA (WinNT) (Tcp/Ip
  142.163.252.132(4511)).
  ANR0403I Session  ended for node MATHILDA (WinNT).
  --end console log--
 
  13. Popup indicates installed.
 
  B. Install Scheduler #1
  1. Install new scheduler
  2. Named TSM Client Scheduler (default name), Local, with CAD
  3. Select Acceptor defined above
  4. leave log names blank to take default, uncheck event logging
  5. Finish.
 
  --console log--
  ANR0406I Session 6667 started for node MATHILDA (WinNT) (Tcp/Ip
  142.163.252.132(4516)).
  ANR0403I Session 6667 ended for node MATHILDA (WinNT).
  ANR0406I Session 6668 started for node MATHILDA (WinNT) (Tcp/Ip
  142.163.252.132(4517)).
  ANR0403I Session 6668 ended for node MATHILDA (WinNT).
  ANR0406I Session 6669 started for node MATHILDA (WinNT) (Tcp/Ip
  142.163.252.132(4518)).
  ANR0403I Session 6669 ended for node MATHILDA (WinNT).
  --end console log--
 
  6. Popup indicates Installed.
 
  C. Install Acceptor #2
  1. Startup GUI
  2. Setup Wizard, check both install web client and scheduler
  3. Install new acceptor
  4. Named TSM Client Acceptor - BOOGA
  5. dsm2.opt file (in the tsm\baclient directory)
 
  --console log--
  ANR0406I Session 6670 started for node MATHILDA (WinNT) (Tcp/Ip
  142.163.252.132(4519)).
  ANR0403I Session 6670 ended for node MATHILDA (WinNT).
  --end console log--
 
  (note that it was Mathilda that just connected, not Booga, even though
 we
  just 

Re: Bug? Using multiple client service instances on Windows server

2005-02-16 Thread Paul Fielding
I haven't tried that.  I'll try it and see what happens.  I don't think I
should have to do it, but hey, if it works... *grin*
Paul
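For anyone following along, Kyle's suggestion amounts to inserting something like this between the two service installs (node name, password, and path here are hypothetical):

```
rem store/update the password for the second node before (or after)
rem installing its scheduler service
dsmcutil updatepw /node:BOOGA /password:booga /optfile:"c:\program files\tivoli\tsm\baclient\dsm2.opt"
```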
- Original Message -
From: TSM_User [EMAIL PROTECTED]
To: ADSM-L@VM.MARIST.EDU
Sent: Wednesday, February 16, 2005 2:25 PM
Subject: Re: [ADSM-L] Bug? Using multiple client service instances on
Windows server

Paul, I was just curious if you tried to run the dsmcutil updatepw
command in between the creation of the two services for the BOOGA node.  In
looking at all my scripts I make sure to do this.  Of course it is so that
I don't have to use dsmc to set the password locally but it might also be
something that I do differently which might be why I haven't run into
this.
Paul Fielding [EMAIL PROTECTED] wrote:I agree - it's not a show stopper,
as long as you know what you're looking for
it can be worked around.
I've gotten tied up today with a raid array failure so I haven't had a
chance
yet to try my findings on a win2003 box - I'd still like to demonstrate
that,
since my original findings (and the cause of this thread) were indeed very
different than what I experienced last night on the XP box, so i'm not
100%
convinced yet that the server type isn't an issue.
I'll post as soon as I have had a chance to try it...
regards,
Paul
Quoting Andrew Raibeck :
Paul, thanks for all the detail. I was able to reproduce your findings; I
don't think the TSM server version makes a difference.
The only thing that seems odd to me is the connection as MATHILDA when
BOOGA is being configured. Off the top of my head, the obvious answer is
that the current instance of dsm.exe is running as node MATHILDA, and
before completing the service configuration, the dsm.exe instance wants
to
authenticate with the server (as MATHILDA). But further investigation is
required to confirm that this is by design. Looks to me like a minor
annoyance at most, though.
Regards,
Andy
Andy Raibeck
IBM Software Group
Tivoli Storage Manager Client Development
Internal Notes e-mail: Andrew Raibeck/Tucson/[EMAIL PROTECTED]
Internet e-mail: [EMAIL PROTECTED]
The only dumb question is the one that goes unasked.
The command line is your friend.
Good enough is the enemy of excellence.
ADSM: Dist Stor Manager wrote on 2005-02-15
21:44:13:
 Hi Andy,

 This is interesting. (Warning, long message)

 I tried doing this tonight on my XP SP2 system with 5.3.0 client to
 demonstrate, since I don't have a server handy (will try with 2003
server
 tommorow hopefully).

 I didn't see the behavior I previously described. But the behavior I
did
 see was even more interesting. Below I'll describe the steps, in order.
In
 the places where I break out into console log entries, this is the
 point
 during the install where that log item showed up in the console.

 Here's what I did:
 XP SP2 system with TSM 5.3.0 installed, no services installed.
 Two nodes - MATHILDA and BOOGA (hey, it's late)
 Two optfiles:

 dsm.opt, default location:
 nodename mathilda
 PASSWORDACCESS GENERATE
 schedmode prompted
 TCPSERVERADDRESS x
 schedlogretention 7
 errorlogretention 7
 MANAGEDSERVICES WEBCLIENT SCHEDULE

 dsm2.opt, default location:
 nodename booga
 PASSWORDACCESS GENERATE
 schedmode prompted
 TCPSERVERADDRESS x
 schedlogretention 7
 errorlogretention 7
 MANAGEDSERVICES WEBCLIENT SCHEDULE

 A. Install Acceptor #1
 1. Startup GUI
 2. Setup Wizard, check both install web client and scheduler
 3. Install new acceptor
 4. Named TSM Client Acceptor (default name)
 5. default dsm.opt file (in the tsm\baclient directory)

 --console log--
 ANR0406I Session 6664 started for node MATHILDA (WinNT) (Tcp/Ip
 142.163.252.132(4496)).
 ANR0403I Session 6664 ended for node MATHILDA (WinNT).
 --end console log--

 6. http port 1581
 7. Node MATHILDA, Pass MATHILDA, check validation
 8. Start with System account, start manually
 9. Named TSM Remote Client Agent (default name)
 10. No revocation
 11. Do not start now
 12. Finish.

 --console log--
 ANR0406I Session 6665 started for node MATHILDA (WinNT) (Tcp/Ip
 142.163.252.132(4510)).
 ANR0403I Session 6665 ended for node MATHILDA (WinNT).
 ANR0406I Session  started for node MATHILDA (WinNT) (Tcp/Ip
 142.163.252.132(4511)).
 ANR0403I Session  ended for node MATHILDA (WinNT).
 --end console log--

 13. Popup indicates installed.

 B. Install Scheduler #1
 1. Install new scheduler
 2. Named TSM Client Scheduler (default name), Local, with CAD
 3. Select Acceptor defined above
 4. leave log names blank to take default, uncheck event logging
 5. Finish.

 --console log--
 ANR0406I Session 6667 started for node MATHILDA (WinNT) (Tcp/Ip
 142.163.252.132(4516)).
 ANR0403I Session 6667 ended for node MATHILDA (WinNT).
 ANR0406I Session 6668 started for node MATHILDA (WinNT) (Tcp/Ip
 142.163.252.132(4517)).
 ANR0403I Session 6668 ended for node MATHILDA (WinNT).
 ANR0406I Session 6669 started for node MATHILDA (WinNT) (Tcp/Ip
 142.163.252.132(4518)).
 ANR0403I Session 6669 ended for node MATHILDA

Re: Problem editing scripts from the Admin Centre

2005-02-15 Thread Paul Fielding
FWIW, I've had better luck posting bugs on this list, where people from IBM
actually see it occasionally, than I have had actually trying to go through
proper channels to report a bug


Quoting Steve Harris [EMAIL PROTECTED]:

 Hi all,

 Here's a trap that you might like to avoid.

 When editing a script using the TSM 5.3 admin centre, if the script contains
 > or < characters, these are replaced by their equivalent HTML escapes &gt;
 and &lt;
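
 As a hypothetical illustration (the script line itself is invented, not from
 the report), a script saved through the admin centre would come back mangled
 like this:

```
before save:   if (error_count > 0) goto done
after reload:  if (error_count &gt; 0) goto done
```

 A script re-saved this way would no longer parse, which is why editing via
 the admin centre is best avoided.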

 I'd suggest that you not edit scripts via the admin centre.

 Regards

 Steve

 Steve Harris
 TSM Admin
 Queensland Health, Brisbane Australia

 (for anyone who cares, I tried to report this as a bug using Passport
 Advantage, but as usual found that tool to be completely inadequate for the
 simplest task)








--
[EMAIL PROTECTED]
http://www.fielding.ca

-
This mail sent through IMP: http://horde.org/imp/


Re: Using multiple client service instances on Windows server

2005-02-15 Thread Paul Fielding
Quoting Steve Harris [EMAIL PROTECTED]:

 Paul,

 When you install a second windows service, how do you automate the install of
 a second set of icons into the start menu? I'm struggling with this at the
 moment

I'm afraid I haven't found a way to automate that process - I just manually
copy a shortcut, and go in and add the -optfile flag to it.   Not exactly
conducive to a silent-install rollout... :(  Fortunately, generally the only
systems I ever need to do a dual-node install on are exception boxes rather
than the norm (ie. clusters, exchange, sql, etc)

regards,

Paul




 and while I'm asking windows questions :)  (server and client version
 5.3.0)

 I don't like the Windows MMC interface, and far prefer to use two sessions,
 one for command line entry and the other to watch the log.  I'm rolling out
 an implementation that has a strategic and a tactical backup server
 instance at each site to a whole new set of green  admins so I'd like to make
 the set up as easy as possible for them. With four open windows on the screen
 it gets a bit confusing as to which is which... When the admin client comes
 up it overwrites the window title with 'IBM Tivoli Storage Manager' - is there
 any way to change this to something more descriptive?

 Thanks

 Steve

 Steve Harris
 TSM Admin
 Queensland Health, Brisbane Australia

  [EMAIL PROTECTED] 15/02/2005 13:00:42 
 I have things configured pretty much as you describe, and I also use
 dsmcutil to create the services when using a cluster- Way easier to reduce
 mistakes since I can throw it into a batch file and run it on both sides of
 the cluster.  :)

 The issue I'm seeing, though, can be duplicated on a non-cluster.  (however
 the results seem to happen on some systems but not others) If you take a
 Windows 2000 or 2003 server and try the following:

 1. Install a regular dsmcad, agent and scheduler service using the default
 baclient\dsm.opt file.
 2. create a second options file named something different such as dsm2.opt
 3. install a second set of services, named differently, and using the
 dsm2.opt, and using a different nodename from the first set.  (ie. you
 would
 use this if setting up a scheduler for an agent perhaps, or for a cluster
 resource group).
 4. before starting the dsmcad service, start up a console window
 (dsmadmc -console)
 5. start dsmcad, wait the minute for the scheduler to kick in

 What I see on the console (and in the actlog) is:
 - an initial connection using the correct (second) nodename, by the dsmcad
 as
 soon as I start the service
 - 1 minute later, I see two more client connections as the scheduler
 connects.  the first connection uses the wrong (first) nodename, the second
 connection uses the correct (second) nodename.

 other than that, everything seems to work correctly.

 Paul

 - Original Message -
 From: TSM_User [EMAIL PROTECTED]
 To: ADSM-L@VM.MARIST.EDU
 Sent: Monday, February 14, 2005 9:58 PM
 Subject: Re: [ADSM-L] Bug? Using multiple client service instances on
 Windows server


  We have over 20 Windows 2000 Cluster servers. On all of these servers we
  have to create 2 sets of all the services. One for the local drive and
 one
  for the cluster.  We have never run into the issue you are speaking of.
  We use the dsmcutil command to create all our servers via scripting.  The
  only issue I have ever seen is that if you don't use the short 8.3 name
  for the path for /clientdir then you can have problems.
 
  I'm not sure if this will help but here is an example of what we use.
  ex:
  C:\Program Files\Tivoli\TSM\Baclient\DSMCUTIL Install /name:TSM
 Central
  Scheduler /node:%COMPUTERNAME%
 /clientdir:C:\Progra~1\Tivoli\TSM\Baclient
  /optfile:C:\Progra~1\Tivoli\TSM\Baclient\dsm.opt /pass:%COMPUTERNAME%
  /startnow:no /autostart:no
 
  One thing I have noticed is if you ever create services on a cluster you
  must ensure that you create them adding the /clustername and /clusternode
  options.  Also, you have to use the /clusternode:no for the services you
  create that aren't for the cluster.  Finally you also have to make sure
  that you create the cluster services first.  If you don't do this
  correctly you will get errors but they aren't all as clear as I would
  like.
 
  Paul Fielding [EMAIL PROTECTED] wrote:
  Several years ago I noticed an interesting behavior when installing
  multiple client scheduler services on a server. A ticket was opened with
  IBM and the final word came back that there was indeed a bug, the apar
 was
  opened, and we were told it would be resolved. This week I've encountered
  the same situation, so I'm wondering if anyone has also noticed this
  behavior? I no longer have the apar number of the original ticket, so I
  can't check to see the apar's status.
 
  When installing a scheduler service (with appropriate cad, etc) you must
  supply the dsm.opt file for the service to use. For the first nodename on
  the server, this is typically the Tivoli\TSM\baclient\dsm.opt file

Re: Bug? Using multiple client service instances on Windows server

2005-02-15 Thread Paul Fielding
 (and in the actlog) is:
- an initial connection using the correct (second) nodename, by the
dsmcad as
soon as I start the service
- 1 minute later, I see two more client connections as the scheduler
connects.  the first connection uses the wrong (first) nodename, the
second
connection uses the correct (second) nodename.
other than that, everything seems to work correctly.
Paul
- Original Message -
From: TSM_User [EMAIL PROTECTED]
To: ADSM-L@VM.MARIST.EDU
Sent: Monday, February 14, 2005 9:58 PM
Subject: Re: [ADSM-L] Bug? Using multiple client service instances on
Windows server
 We have over 20 Windows 2000 Cluster servers. On all of these servers
we
 have to create 2 sets of all the services. One for the local drive and
one
 for the cluster.  We have never run into the issue you are speaking
of.
 We use the dsmcutil command to create all our servers via scripting.
The
 only issue I have ever seen is that if you don't use the short 8.3
name
 for the path for /clientdir then you can have problems.

 I'm not sure if this will help but here is an example of what we use.
 ex:
 C:\Program Files\Tivoli\TSM\Baclient\DSMCUTIL Install /name:TSM
Central
 Scheduler /node:%COMPUTERNAME%
/clientdir:C:\Progra~1\Tivoli\TSM\Baclient
 /optfile:C:\Progra~1\Tivoli\TSM\Baclient\dsm.opt /pass:%COMPUTERNAME%
 /startnow:no /autostart:no

 One thing I have noticed is if you ever create services on a cluster
you
 must ensure that you create them adding the /clustername and
/clusternode
 options.  Also, you have to use the /clusternode:no for the services
you
 create that aren't for the cluster.  Finally you also have to make
sure
 that you create the cluster services first.  If you don't do this
 correctly you will get errors but they aren't all as clear as I would
 like.

 Paul Fielding [EMAIL PROTECTED] wrote:
 Several years ago I noticed an interesting behavior when installing
 multiple client scheduler services on a server. A ticket was opened
with
 IBM and the final word came back that there was indeed a bug, the apar
was
 opened, and we were told it would be resolved. This week I've
encountered
 the same situation, so I'm wondering if anyone has also noticed this
 behavior? I no longer have the apar number of the original ticket, so
I
 can't check to see the apar's status.

 When installing a scheduler service (with appropriate cad, etc) you
must
 supply the dsm.opt file for the service to use. For the first nodename
on
 the server, this is typically the Tivoli\TSM\baclient\dsm.opt file.
When
 installing the second set of services for an alternate nodename, you
must
 supply an alternate dsm.opt file.

 If you run a dsmadmc -console while starting the CAD, you may notice
that,
 when the scheduler service contacts the TSM Server, it touches the
server
 twice. Under normal circumstances, this is just something I shrugged
off
 as an 'interesting' thing.

 However, after the second service instance is installed, when starting
up
 the CAD, I noticed that the first of those two connections was
using
 the wrong nodename - instead of connecting to the TSM server with the
 nodename of the second service, it connected with the nodename of the
 first service. The second connection attempt then proceeded to use the
 correct nodename. Not knowing exactly what information is sent on each
of
 those connections, I do not know the implications of this.

 Basically what was happening was that when the scheduler service first
 starts it grabbed the default dsm.opt location, instead of using the
 dsm.opt file defined for that service. By the time it makes its
second
 connection attempt, it's read the correct dsm.opt file.

 The temporary band-aid was to configure the first scheduler service to
use
 a *non-standard* dsm.opt - the result being that when the second
service
 tried to connect using the default location, it failed to find a
dsm.opt
 file there, and simply connected successfully on the second attempt,
using
 the correct dsm.opt file.

 More recently, I've noticed that when this situation occurs, if you
set
 the first service to use a non-standard dsm.opt file, during the
install
 process I initially get an error message stating that the service
'Could
 not find c:\Program Files\Tivoli\TSM\baclient\dsm.opt' , even though
 that's not the dsm.opt file I told it to read. The service then goes
and
 successfully installs.  *shrug*.

 It doesn't appear to be causing any real grief, but I'm wondering if
I'm
 the only one seeing this behavior or not, and if anyone may know of
any
 genuine grief this could cause?

 regards,

 Paul






TCPCLIENTADDRESS in 5.3?

2005-02-14 Thread Paul Fielding
I'm wondering if I'm misunderstanding the use of the TCPCLIENTADDRESS dsm.opt
option.

Historically, I've used the TCPCLIENTADDRESS to force the client to present an
appropriate contact IP to the TSM server if the client has more than one nic,
or when using a cluster configuration.
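
For reference, the relevant options as I set them look something like this in
dsm.opt (the node name and addresses here are placeholders, not the actual
values from my configuration):

```
nodename         clustergroup1
passwordaccess   generate
tcpserveraddress tsmserver.example.com
tcpclientaddress 10.0.0.50
```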

I've just finished implementing the 5.3 client on a Microsoft Cluster, and used
TCPCLIENTADDRESS to specify the shared IP for the resource group that the
shared TSM service is in.

However, I just noticed on the server side, after connections have been
made, that when I do a 'q node' on the shared node, the TCP address listed is
the non-shared IP as opposed to the one I specified.

Is this appropriate behavior, or is the TSM server/client not following the
rules?

It's not the end of the world, since the dsmcad will at least send *one* of the
correct IPs for the side of the cluster it's on when it starts up, but I'd
rather it was presenting the shared IP instead of the localized IP...

regards,

Paul



Re: Question from a non-TSM person

2005-02-14 Thread Paul Fielding
I agree, this should work fine.  I think the thing that previous posters were
commenting on (saying that the db would become corrupted), that wasn't made
clear in the original question, is that you want to make sure you shut down the
primary TSM server prior to breaking the mirror.   As long as you do that, I
don't see why it shouldn't work

Paul

Quoting Rushforth, Tim [EMAIL PROTECTED]:

 We've used a similar procedure in the past for quicker upgrades and a
 quick backout.

 We would mirror the TSM Logs and DB on separate physical disk drives.
 These disk drives were internal to our server and we have another server
 that is the same so the upgrade consisted of (simplified):

 Shutting down TSM on old server, swap drives, bring up new server, run
 upgrade, re-mirror DB and log.

 If any problems, the original server is still functional.

 -Original Message-
 From: fred johanson [mailto:[EMAIL PROTECTED]
 Sent: Thursday, February 10, 2005 4:24 PM
 To: ADSM-L@VM.MARIST.EDU
 Subject: Question from a non-TSM person

 I was asked if it was possible to move TSM from machine to machine by
 mirroring the DB and log, breaking the mirror, moving the mirrored
 portions
 to a new machine, adding the old mirror volumes to dsmserv.dsk, and
 bringing up the new machine as if nothing happened?



 Fred Johanson
 ITSM Administrator
 University of Chicago
 773-702-8464





Re: Moving Node Data Between Storage Pools

2005-02-14 Thread Paul Fielding
Try this:

- Upd node to new domain.  TSM will not magically migrate data, but it will
know to restore previous data from old tape pool etc.
- MOVE NODEDATA to move data for that node only from old stgpool to new stgpool
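
In admin command-line syntax, those two steps might look like this (the node,
domain, and pool names are placeholders for your own):

```
update node clienta domain=newdomain
move nodedata clienta fromstgpool=oldtapepool tostgpool=newtapepool
```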

good luck!

Paul


Quoting Hart, Charles [EMAIL PROTECTED]:

 We have a windows client domain that writes to its own tape pool for about
 200+ Windows clients (Collocation On Primary Tape Pool / Off on Copy Pool).
 We would like to split the clients out in to additional domains for restore
 purposes.

 The question I have is how do move the data of client A in Domain A to a new
 Domain with a new tape pool.

 Options
 Move Data moves backup data at volume level not client specific.
 Export / Import Node is only for server to server...
 Upd node to new domain and hope TSM will Magically migrate data?
 Upd node to new domain and hope TSM will know to restore previous data from
 old tape pool etc?
 Make New Domain's backup Copy Pool Absolute to force a full? Then let old
 data drop off?

 Appreciate any input.

 Regards,

 Charles





Re: Error message when trying to install server to admin center

2005-02-14 Thread Paul Fielding
I think there's cross-communication going on here.  Clarification is needed.
Is this the *only* TSM server you're trying to add, and the ISC is claiming
you've already added it, or is this an *additional* TSM server that you're
trying to add as a connection to the ISC?
regards,
Paul
- Original Message -
From: Timothy Hughes [EMAIL PROTECTED]
To: ADSM-L@VM.MARIST.EDU
Sent: Monday, February 14, 2005 3:11 PM
Subject: Re: [ADSM-L] Error message when trying to install server to admin
center

The message seems self-explanatory, but why doesn't the server
show up when I click on Health Monitor (or anything else)
if it's been added?

Andrew Raibeck wrote:
I'm not certain how knowing whether anyone else has come across this
message is of any particular use. The message seems self-explanatory. Is
there a more specific problem or question that you have?
Regards,
Andy
Andy Raibeck
IBM Software Group
Tivoli Storage Manager Client Development
Internal Notes e-mail: Andrew Raibeck/Tucson/[EMAIL PROTECTED]
Internet e-mail: [EMAIL PROTECTED]
The only dumb question is the one that goes unasked.
The command line is your friend.
Good enough is the enemy of excellence.
ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU wrote on 2005-02-14
11:13:37:
 Hello has anyone who installed TSM 5.3 come across this message when
 trying to add a server to the Admin Center?

 The specified server has been contacted and it has a server name of
 TSM.

 This server connection name has already been added to the
 Administration
 Center.
 Every server added to the Administration Center must have a unique
 name.
 To change
 the server name, use the server console or a Tivoli Storage Manager
 administrative client.


 The server is not showing up as being added to the Admin Center.


 Thanks in advance!

 AIX 5.2
 TSM 5.3.0.1



Re: Bug? Using multiple client service instances on Windows server

2005-02-14 Thread Paul Fielding
I have things configured pretty much as you describe, and I also use
dsmcutil to create the services when using a cluster- Way easier to reduce
mistakes since I can throw it into a batch file and run it on both sides of
the cluster.  :)
The issue I'm seeing, though, can be duplicated on a non-cluster.  (however
the results seem to happen on some systems but not others) If you take a
Windows 2000 or 2003 server and try the following:
1. Install a regular dsmcad, agent and scheduler service using the default
baclient\dsm.opt file.
2. create a second options file named something different such as dsm2.opt
3. install a second set of services, named differently, and using the
dsm2.opt, and using a different nodename from the first set.  (ie. you would
use this if setting up a scheduler for an agent perhaps, or for a cluster
resource group).
4. before starting the dsmcad service, start up a console window
(dsmadmc -console)
5. start dsmcad, wait the minute for the scheduler to kick in
What I see on the console (and in the actlog) is:
- an initial connection using the correct (second) nodename, by the dsmcad as
soon as I start the service
- 1 minute later, I see two more client connections as the scheduler
connects.  the first connection uses the wrong (first) nodename, the second
connection uses the correct (second) nodename.
other than that, everything seems to work correctly.
Paul
- Original Message -
From: TSM_User [EMAIL PROTECTED]
To: ADSM-L@VM.MARIST.EDU
Sent: Monday, February 14, 2005 9:58 PM
Subject: Re: [ADSM-L] Bug? Using multiple client service instances on
Windows server

We have over 20 Windows 2000 Cluster servers. On all of these servers we
have to create 2 sets of all the services. One for the local drive and one
for the cluster.  We have never run into the issue you are speaking of.
We use the dsmcutil command to create all our servers via scripting.  The
only issue I have ever seen is that if you don't use the short 8.3 name
for the path for /clientdir then you can have problems.
I'm not sure if this will help but here is an example of what we use.
ex:
C:\Program Files\Tivoli\TSM\Baclient\DSMCUTIL Install /name:TSM Central
Scheduler /node:%COMPUTERNAME% /clientdir:C:\Progra~1\Tivoli\TSM\Baclient
/optfile:C:\Progra~1\Tivoli\TSM\Baclient\dsm.opt /pass:%COMPUTERNAME%
/startnow:no /autostart:no
One thing I have noticed is if you ever create services on a cluster you
must ensure that you create them adding the /clustername and /clusternode
options.  Also, you have to use the /clusternode:no for the services you
create that aren't for the cluster.  Finally you also have to make sure
that you create the cluster services first.  If you don't do this
correctly you will get errors but they aren't all as clear as I would
like.
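
Building on the dsmcutil example above, a second service set for an alternate
nodename might be created along these lines (the service name, node name, and
dsm2.opt path are illustrative assumptions, not taken from the thread; on a
cluster, the /clustername and /clusternode options mentioned above would be
added as well):

```
REM hypothetical second scheduler service pointing at dsm2.opt
C:\Progra~1\Tivoli\TSM\Baclient\DSMCUTIL Install /name:"TSM Central Scheduler NODE2" ^
  /node:NODE2 /clientdir:C:\Progra~1\Tivoli\TSM\Baclient ^
  /optfile:C:\Progra~1\Tivoli\TSM\Baclient\dsm2.opt /pass:NODE2 ^
  /startnow:no /autostart:no
```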
Paul Fielding [EMAIL PROTECTED] wrote:
Several years ago I noticed an interesting behavior when installing
multiple client scheduler services on a server. A ticket was opened with
IBM and the final word came back that there was indeed a bug, the apar was
opened, and we were told it would be resolved. This week I've encountered
the same situation, so I'm wondering if anyone has also noticed this
behavior? I no longer have the apar number of the original ticket, so I
can't check to see the apar's status.
When installing a scheduler service (with appropriate cad, etc) you must
supply the dsm.opt file for the service to use. For the first nodename on
the server, this is typically the Tivoli\TSM\baclient\dsm.opt file. When
installing the second set of services for an alternate nodename, you must
supply an alternate dsm.opt file.
If you run a dsmadmc -console while starting the CAD, you may notice that,
when the scheduler service contacts the TSM Server, it touches the server
twice. Under normal circumstances, this is just something I shrugged off
as an 'interesting' thing.
However, after the second service instance is installed, when starting up
the CAD, I noticed that the first of those two connections was using
the wrong nodename - instead of connecting to the TSM server with the
nodename of the second service, it connected with the nodename of the
first service. The second connection attempt then proceeded to use the
correct nodename. Not knowing exactly what information is sent on each of
those connections, I do not know the implications of this.
Basically what was happening was that when the scheduler service first
starts it grabbed the default dsm.opt location, instead of using the
dsm.opt file defined for that service. By the time it makes its second
connection attempt, it's read the correct dsm.opt file.
The temporary band-aid was to configure the first scheduler service to use
a *non-standard* dsm.opt - the result being that when the second service
tried to connect using the default location, it failed to find a dsm.opt
file there, and simply connected successfully on the second attempt, using
the correct dsm.opt file.
More recently, I've noticed that when

Bug? Using multiple client service instances on Windows server

2005-02-13 Thread Paul Fielding
Several years ago I noticed an interesting behavior when installing multiple 
client scheduler services on a server.  A ticket was opened with IBM and the 
final word came back that there was indeed a bug, the apar was opened, and we 
were told it would be resolved.   This week I've encountered the same situation, 
so I'm wondering if anyone has also noticed this behavior?  I no longer have 
the apar number of the original ticket, so I can't check to see the apar's 
status.

When installing a scheduler service (with appropriate cad, etc) you must supply 
the dsm.opt file for the service to use.  For the first nodename on the server, 
this is typically the Tivoli\TSM\baclient\dsm.opt file.  When installing the 
second set of services for an alternate nodename, you must supply an alternate 
dsm.opt file.

If you run a dsmadmc -console while starting the CAD, you may notice that, when 
the scheduler service contacts the TSM Server, it touches the server twice.  
Under normal circumstances, this is just something I shrugged off as an 
'interesting' thing.

However, after the second service instance is installed, when starting up the 
CAD, I noticed that the first of those two connections was using the wrong 
nodename - instead of connecting to the TSM server with the nodename of the 
second service, it connected with the nodename of the first service.  The 
second connection attempt then proceeded to use the correct nodename.   Not 
knowing exactly what information is sent on each of those connections, I do not 
know the implications of this.

Basically what was happening was that when the scheduler service first starts 
it grabbed the default dsm.opt location, instead of using the dsm.opt file 
defined for that service.  By the time it makes its second connection attempt, 
it's read the correct dsm.opt file.

The temporary band-aid was to configure the first scheduler service to use a 
*non-standard* dsm.opt - the result being that when the second service tried to 
connect using the default location, it failed to find a dsm.opt file there, and 
simply connected successfully on the second attempt, using the correct dsm.opt 
file.

More recently, I've noticed that when this situation occurs, if you set the 
first service to use a non-standard dsm.opt file, during the install process I 
initially get an error message stating that the service 'Could not find 
c:\Program Files\Tivoli\TSM\baclient\dsm.opt' , even though that's not the 
dsm.opt file I told it to read.  The service then goes and successfully 
installs.  *shrug*.

It doesn't appear to be causing any real grief, but I'm wondering if I'm the 
only one seeing this behavior or not, and if anyone may know of any genuine 
grief this could cause?

regards,

Paul


exclude vs. exclude.backup

2005-02-11 Thread Paul Fielding
With the advent of the 5.3 Windows Client, the install defaults to including a
number of system excludes for the server.  This is fine and all, but one of the
things it does is separates them by exclude.backup and exclude.archive,
repeating the same filespec for each.

My belief has always been that exclude would exclude both backups and archives,
which would negate the need to exclude the same filespec with both
exclude.backup and exclude.archive.

Am I missing something?

regards,

Paul






Re: exclude vs. exclude.backup

2005-02-11 Thread Paul Fielding
Ok, I just tested, and indeed the exclude statement by itself does not exclude
archives, forcing an exclude.archive will exclude them.  I'm surprised - I've
made a poor assumption over the years.  I wonder if anyone else has been caught
by that one?

regards,

Paul
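
A dsm.opt fragment illustrating the behavior (the filespec is a made-up
example): with only the first line, archives of c:\temp would still be taken;
the exclude.archive line is needed to stop them as well.

```
exclude         c:\temp\...\*
exclude.archive c:\temp\...\*
```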

Quoting [EMAIL PROTECTED]:

 With the advent of the 5.3 Windows Client, the install defaults to including
 a
 number of system excludes for the server.  This is fine and all, but one of
 the
 things it does is separates them by exclude.backup and exclude.archive,
 repeating the same filespec for each.

 My belief has always been that exclude would exclude both backups and
 archives,
 which would negate the need to exclude the same filespec with both
 exclude.backup and exclude.archive.

 Am I missing something?

 regards,

 Paul









Re: exclude vs. exclude.backup

2005-02-11 Thread Paul Fielding
 object from all TSM operations, the product unfortunately departs from
 obviousness here, requiring the .archive
 qualifier to exclude from archiving as well, as has been consistently
 explained in the client manuals.

While the manuals do state that Exclude will exclude files from backup, and
independently the manuals do state that Exclude.archive will exclude files from
archive, it does not make it clear that Exclude by itself will *not* exclude
files from archive, and the fact that the Exclude.backup function exists, to my
mind, implies that Exclude (by itself) would not limit itself to backups, or in
other words it would successfully exclude archives.

For quite some time I've made an assumption based on this which turned out to
be a poor assumption.  This was only clarified by the new 5.3 install which
separates exclude.backups from exclude.archives in its default option file
creation.

I agree wholeheartedly that this is probably a side effect of 'evolving
features'.   Kind of like Policy Domains, where the single biggest use of
Policy Domains these days is to separate Agent data from regular backup data,
as opposed to what a Domain was originally designed for...

regards,

Paul


Quoting Richard Sims [EMAIL PROTECTED]:

 On Feb 11, 2005, at 7:27 AM, Paul Fielding wrote:

  ...My belief has always been that exclude would exclude both backups
  and archives,
  which would negate the need to exclude the same filespec with both
  exclude.backup and exclude.archive. ...

 While an unqualified Exclude should intuitively exclude a file system
 object from all TSM operations, the product unfortunately departs from
 obviousness here, requiring the .archive
 qualifier to exclude from archiving as well, as has been consistently
 explained in the client manuals. Products sometimes have warts like
 this due to the way they evolve from initial design objectives, as in
 Backup being the initial role of the primordial product.

 Richard Sims





Re: TSM 5.3 Administration Center

2005-02-09 Thread Paul Fielding
I think a number of people have summed up the issues here well, no need for
me to harp on them.  The one exception I would make is in regards to
creating an 'intuitive interface'.
So far, IMHO, the interface is far from intuitive.  While I can see the
theoretical logic behind burying the Schedules section underneath Policy
Domains, it took myself (with 7 years of TSM implementation and usage
experience) and my current client several minutes to find it.  There are
some places where clicking on an icon brings up the properties window for
the item, others where you must instead use the pull-down menu to get
properties, meanwhile clicking on the icon initiates another wizard of some
kind.  Many of the windows that are designed to explain what's happening
are needlessly cluttered and wordy.  If you're going to explain everything,
give us a prominent help button beside the options instead - It's a pain in
the butt to skim over whole paragraphs of text to try to find the two
options you're looking for, which have incidentally been named differently
than the traditional name for the option, further making it difficult to
find.
Then there are the functional decisions that have been made regarding
'hiding' configurations from the user.  For example, if the ISC has anything
to say about it, Policy Sets are a thing of the past.   I have yet to find a
reasonable way to view all policy sets, add a policy set or remove one.
Yet it is assumed that the server has a STANDARD policy set that
automatically gets validated and activated when updating an mgmt class or
copy group, with no opportunity for you to see the validation and *not*
activate.   *However*, when you add or remove mgmt classes (as opposed to
updating the copy groups within them), the policy set is NOT validated nor
activated.   Yet, the ISC returns a successful prompt, claiming that your
work here is done, when in fact the mgmtclass/copy groups you've just
added/removed are not in the Active policy set.There is a drop down menu
on the Policy Domain (why is it at the domain level?) for Activating the
Policy Set.  Not just any policy set, but THE policy set, which must be
named STANDARD otherwise it breaks.  For newbies, there's no indication of
why one would want to do this, given that policy sets are mentioned nowhere
else.  And it validates and Activates in one shot, without showing you the
output of the validation so you can see something is wrong.Frankly, this
is multiple bits of buggy behavior.  Make a choice - either make it work
correctly so the end-user doesn't need to know that policy sets exist, or
give us the chance to work with our policy sets.
One of the single biggest features I liked about the old Web Interface is
gone - the command line at the bottom.   For those of us who find some
benefits to using the GUI, and some benefits to using the CLI, this was a
great tool - the GUI was at our side, but the CLI was always *right* *there*
when we wanted it, and the output was cleanly put into the browser window in
a way we could scroll along easily.  The CLI in the ISC is in an annoying
place to get to, you can't easily keep it at your fingertips, and the output
it provides is formatted to an annoyingly short width that forces many
typical queries into long format, making them harder to read at a glance.
Please, if you do nothing else, find a way to give us the always-there CLI
back and fix the output.  As it sits now, I have to open up the
old fashioned black CLI in another window and jump back and forth to run
commands.
There are a number of other small bugs that need to be worked out; I'm sure
they will be - any new product is bound to have growing pains.  But I
think there's still a lot of work to do.
regards,
Paul
- Original Message -
From: Kathy Mitton [EMAIL PROTECTED]
To: ADSM-L@VM.MARIST.EDU
Sent: Wednesday, January 26, 2005 8:19 PM
Subject: [ADSM-L] TSM 5.3 Administration Center

We've noticed the recent discussion and concern in this forum surrounding
the new TSM 5.3 Administration Center. We are listening to you, and we
would like to respond to several of the points raised.
First, we would like to explain why we made these changes. The old
interface had not been changed in over 8 years and had fallen behind the
times. One of the top customer requirements was for an easy to use,
intuitive interface. A number of existing customers as well as customers
using competitive products were involved throughout the development
process.  They not only participated in defining what they were looking
for but also participated in early design reviews.
As a result of that work, the Administration Center was created.  Wizards
help guide you through common configuration tasks, and properties
notebooks allow you to modify settings and perform advanced management
tasks. The interface was integrated into the Integrated Solutions Console
because one key customer requirement is a "single pane of glass" view of
all of the servers in their 

Anyone installed 5.3 yet?

2005-01-24 Thread Paul Fielding
I'm interested to know if anyone has tried installing 5.3 yet and what
they've found...
Paul


Re: TSM Client on HMC

2005-01-19 Thread Paul Fielding
Hey Miles,
I know of one customer who tried without much success.  My understanding is
that the HMC was too far locked down to be able to install and run the TSM
client on it.
Given that the HMC is supposed to be a closed system, there shouldn't be
anything stored on there that you need to keep, other than some basic
settings that can be backed up by the HMC's own processes on the odd
occasion that you make such changes.   If the HMC blows up I'd probably just
restore it from CD or DVD...
regards,
Paul
- Original Message -
From: Miles Purdy [EMAIL PROTECTED]
To: ADSM-L@VM.MARIST.EDU
Sent: Wednesday, January 19, 2005 12:31 PM
Subject: [ADSM-L] TSM Client on HMC
Does any one out there have a p5 system with a HMC? Have you tried to hack
the HMC and put the TSM client on it?
I'd be interested in anyone's experiences.
Miles
--
Miles Purdy
System Manager
Information Systems Team (IST),
Farm Income Programs Directorate (FIPD),
Agriculture and Agri-Food Canada (AAFC)
6th Floor 200 Graham Ave.   Mailing: PO box 6100
Winnipeg, MB, CA R3C 4N3
R3C 4L5
Office contact:Mobile contact:
[EMAIL PROTECTED]
[EMAIL PROTECTED]
ph: (204) 984-1602 fax: (204) 983-7557Cell: (204) 291-8758
If you hold a UNIX shell up to your ear, can you hear the C?
-


Re: dbb & dbsnapshots

2005-01-18 Thread Paul Fielding
I just realized a typo in my previous message.  the flag I was referring to
should be 'source=dbs', rather than 'type=dbs'.
I do use a DRM planfile - to have it understand your tape layout correctly
if sending snapshots offsite, you should run it with the 'prepare
source=dbs' flag, as well.  This should be run *after* tapes have been
removed from the library, so that the planfile it creates knows that the
tapes are offsite.The planfile can then be either copied to a floppy and
sent offsite with the tapes, or you could write a script or batch file to
email the planfile to an offsite email address
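For what it's worth, that emailing step can be sketched in a few lines of shell - the plan directory, admin credentials, and destination address below are all hypothetical, not anything TSM mandates:

```
#!/bin/sh
# Hedged sketch: regenerate the DRM plan from the db snapshot backups
# and mail it offsite.  Paths, credentials and the address are assumptions.
dsmadmc -id=admin -password=secret "prepare source=dbs planprefix=/drm/plans/"

# grab the newest planfile that prepare just wrote
latest=`ls -t /drm/plans/ | head -1`

# ship it to an offsite mailbox
mailx -s "TSM DRM plan `date +%Y%m%d`" drm-offsite@example.com < "/drm/plans/$latest"
```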
regards,
Paul
- Original Message -
From: Joni Moyer [EMAIL PROTECTED]
To: ADSM-L@VM.MARIST.EDU
Sent: Tuesday, January 18, 2005 5:03 AM
Subject: Re: [ADSM-L] dbb & dbsnapshots

Thanks Paul!
One other question:  do you do a DRM Plan?  If so, how do you change it to
include dbsnapshots instead of dbbackups?  Thanks!

Joni Moyer
Highmark
Storage Systems
Work:(717)302-6603
Fax:(717)302-5974
[EMAIL PROTECTED]


From: Paul Fielding [EMAIL PROTECTED]
Sent by: ADSM: Dist Stor Manager [EMAIL PROTECTED]
Date: 01/17/2005 03:32 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: dbb & dbsnapshots


I actually do the opposite - I send snapshots offsite and leave DBBs
onsite - the reason being that you can roll forward recovery logs on a dbb
but not on a dbs - if you lose your db due to non-disaster reasons, it is
quite possible that you may still have the recovery logs and can roll
forward.In the event of a disaster where you need to restore from
offsite, chances are higher that your entire site was destroyed and you
won't have logs to roll forward.
The only downside to this approach is that you *must* make sure you are
consistent about using type=dbs with all your 'q drmedia' and 'move
drmedia'
commands to ensure that you are handling the correct tapes for offsite
movement...
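As a sketch, a consistent daily rotation along those lines might look like the following server commands - states and wildcards are shown for illustration only and should be adapted to your own DRM cycle:

```text
/* hedged sketch - vault today's snapshot tapes, bring emptied ones home */
q drmedia * wherestate=mountable source=dbs
move drmedia * wherestate=mountable source=dbs tostate=vault
q drmedia * wherestate=vaultretrieve source=dbs
move drmedia * wherestate=vaultretrieve source=dbs tostate=onsiteretrieve
```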
regards,
Paul
- Original Message -
From: Joni Moyer [EMAIL PROTECTED]
To: ADSM-L@VM.MARIST.EDU
Sent: Monday, January 17, 2005 1:21 PM
Subject: [ADSM-L] dbb & dbsnapshots

Hello All!
I just thought I would ask what everyone's standards are concerning
database backups and database snapshots.  My thoughts on this are to keep
14 days worth of database backups  which will be sent offsite and to
allow
DRM to delete old dbb.  I was then thinking of taking dbsnapshots
once/day
for onsite database backup restore in case of an immediate onsite
emergency
and keeping 7 dbsnapshots and deleting them by running del volh
type=dbsnapshot todate=today-7.
Any discussions/suggestions on this topic are appreciated.  Thank you!
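A scheme like the one described maps naturally onto two administrative schedules; the device class name and times below are hypothetical:

```text
def sched dbsnap_daily type=administrative active=yes starttime=06:00 period=1 perunits=days cmd="backup db devclass=ltoclass type=dbsnapshot"
def sched dbsnap_prune type=administrative active=yes starttime=09:00 period=1 perunits=days cmd="delete volhistory type=dbsnapshot todate=today-7"
```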

Joni Moyer
Highmark
Storage Systems
Work:(717)302-6603
Fax:(717)302-5974
[EMAIL PROTECTED]



Re: dbb & dbsnapshots

2005-01-17 Thread Paul Fielding
I actually do the opposite - I send snapshots offsite and leave DBBs
onsite - the reason being that you can roll forward recovery logs on a dbb
but not on a dbs - if you lose your db due to non-disaster reasons, it is
quite possible that you may still have the recovery logs and can roll
forward.In the event of a disaster where you need to restore from
offsite, chances are higher that your entire site was destroyed and you
won't have logs to roll forward.
The only downside to this approach is that you *must* make sure you are
consistent about using type=dbs with all your 'q drmedia' and 'move drmedia'
commands to ensure that you are handling the correct tapes for offsite
movement...
regards,
Paul
- Original Message -
From: Joni Moyer [EMAIL PROTECTED]
To: ADSM-L@VM.MARIST.EDU
Sent: Monday, January 17, 2005 1:21 PM
Subject: [ADSM-L] dbb & dbsnapshots

Hello All!
I just thought I would ask what everyone's standards are concerning
database backups and database snapshots.  My thoughts on this are to keep
14 days worth of database backups  which will be sent offsite and to allow
DRM to delete old dbb.  I was then thinking of taking dbsnapshots once/day
for onsite database backup restore in case of an immediate onsite
emergency
and keeping 7 dbsnapshots and deleting them by running del volh
type=dbsnapshot todate=today-7.
Any discussions/suggestions on this topic are appreciated.  Thank you!

Joni Moyer
Highmark
Storage Systems
Work:(717)302-6603
Fax:(717)302-5974
[EMAIL PROTECTED]



Re: veritas backup exec client and TSM server

2004-03-15 Thread Paul Fielding
 ok, that assumes that veritas is less able to support their product
 than IBM is. Since Veritas is a _big_ player in the back-up market,
 I'm not ready to do so just yet. Remember, this is a product in use
 by at least as many organisations as TSM is, and it wouldn't be if
 people couldn't rely on the quality of the product.

I would be prepared to assume it.  I once tried to contact Veritas for
support when a client was trying to use Backup Exec to backup Exchange
databases to a TSM server.  We couldn't get it to talk to the server.  Not
even a peep.  Actlog indicated B.E. never got even as far as trying to talk
to TSM.

First, I got passed to *four* different technical guys before we hit one who
even knew what TSM was, let alone the fact that their product supports dumping
to TSM.   After he poured through the install manual (which we already told
him we'd done thoroughly) he solemnly declared that we needed to reinstall
our TSM *server*.  You can imagine our response...

We eventually got it to work without Veritas's help, but after using it for 3
weeks they went back to using the Tivoli agent.

regards,

Paul









  -Original Message-
  From: Remco Post [mailto:[EMAIL PROTECTED]
  Sent: Monday, March 15, 2004 12:49 PM
  To: [EMAIL PROTECTED]
  Subject: Re: veritas backup exec client and TSM server
 
  But still, I have no answer to my experiences question, which is more
  important to me than IBM politics.
 
 


 --
 Met vriendelijke groeten,

 Remco Post

 SARA - Reken- en Netwerkdiensten  http://www.sara.nl
 High Performance Computing  Tel. +31 20 592 8008Fax. +31 20 668 3167

 I really didn't foresee the Internet. But then, neither did the computer
 industry. Not that that tells us very much of course - the computer
industry
 didn't even foresee that the century was going to end. -- Douglas Adams



-
This mail sent through IMP: http://horde.org/imp/


Re: Strange inventory expiration problem

2004-03-12 Thread Paul Fielding
Not only that, but if I recall correctly the only files that'll get rebound
are the ones that get touched by the client as either a still-active file
(along with its inactive versions) or a 'just deleted' file - one that was
previously active but as of this incremental has been deleted and becomes
inactive.

I seem to recall a discussion once that files that have been inactive for
awhile won't get touched during the rebinding process because there's no
active file getting scanned and no deleted file to be marked inactive (that
process is already done).  So I *think* files that were deleted more than 1
day ago won't get rebound to the new mgmt class.

I'm not certain of this, if someone can affirm or tell me otherwise that'd
be appreciated...

regards,

Paul

- Original Message -
From: Dwight Cook [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Friday, March 12, 2004 3:08 PM
Subject: Re: Strange inventory expiration problem







hard to say...
I'll speculate though...
now, after doing all the voodoo, did you push an incremental from each node
you desired to increase the retention of ???
there is the internal table Expiring.Objects and I ~think~ that as your
client runs its normal incremental and you see all of those expiring
blah... that is putting entries into the Expiring.Objects internal table
to assist in the expiration process
Backups prior to your environment modifications might have placed entries
into that table that are being processed during expiration.
Maybe if you rerun incrementals from all those nodes, it will properly
rebind all the files and cure that problem.

this is a LOT of guess work by me... don't have any logic manuals for TSM
available.

I would start by forcing the clients to connect to the tsm server and
perform fresh incremental processing to get the new management class
characteristics picked up and applied...

just a thought

Dwight E. Cook
Systems Management Integration Professional, Advanced
Integrated Storage Management
TSM Administration
(918) 925-8045




From: Scott McCambly [EMAIL PROTECTED]
Sent by: ADSM: Dist Stor Manager [EMAIL PROTECTED]
Date: 03/12/2004 01:27 PM
To: [EMAIL PROTECTED]
Subject: Strange inventory expiration problem






Here's a good one for a Friday afternoon...

We've had a requirement to temporarily suspend expiration of objects on
selected client nodes.

We copied the active policy set for the domain containing these nodes
and set all backup and archive copy group parameters to NOLIMIT, and
activated this new policy set.  I have verified that I see these new
values from the client side (q mgmt -detail).

The strange part is that now when we run expiration processing in
verbose mode, as the processing hits the filespaces for these nodes, it
is still reporting statistics of  many hundreds of backup objects being
deleted.  How is this possible?!?  We of course only let expiration
processing run for a minute before canceling it again.  A few sample
query backup -inactive on one of the nodes that was listed in the log
messages of expiration processing seem to show all the versions there as
expected.

I then tried to turn on tracing to see what files were being deleted,
but the trace messages generated by IMEXP and IMDEL only show the object
ID, and I assume once deleted that you couldn't retrieve the file name
from the database anyway.

Can anyone please explain this behavior, or possibly point me to more
trace flags that might help show what files are being deleted?

Thanks

Scott.


Re: CAD TSM errors - Win Client on W2003 Server CAD Error

2004-03-09 Thread Paul Fielding
I've seen this on other windows boxes before.   It happens to me sometimes
when I install the Scheduler service via the Wizard and tell it to use the
CAD to start, but don't install the Web Client via the Wizard, and then add
a 'managedservices schedule webclient' line to the dsm.opt file.

The 'webclient' part of the managedservices line tells the Client Acceptor
Daemon to control the Web Client, but if the Web client hasn't been added
via the wizard, then the Remote Agent service won't have been installed and
the registry entry that CAD looks for to know which Agent service to start
won't exist.
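For reference, the relevant dsm.opt line looks like this - it is only safe once the Web Client (and hence the Remote Agent service) has been installed via the wizard:

```text
* dsm.opt - CAD manages both the scheduler and the web client
managedservices schedule webclient
```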

The safest way to install the web client and scheduler is:

- install web client via wizard
- install scheduler service telling it to use CAD

If you do it in that order you shouldn't have problems.

To fix your current problem, try just uninstalling and reinstalling the Web
Client via the Setup Wizard.  If that doesn't work or the wizard won't let
you, try this:

- remove the Scheduler service
- remove the Web Client service
- remove any other TSM client services you may see
- install web client via wizard
- install scheduler service via wizard.

You may need to use dsmcutil to remove offending services.  Open a command
line and go to:
c:\program files\tivoli\tsm\baclient

run 'dsmcutil list'  to get a list of the services TSM knows about.

then do:

dsmcutil remove /name:"<service name, in quotes, with correct caps>"
/node:<nodename tied to the service>

for each service.  That'll clean it up.  Then do the reinstall of the
services in the right order
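Putting that together, a cleanup pass might look like the following - the service and node names here are hypothetical; take the real ones from the 'dsmcutil list' output:

```text
cd "c:\program files\tivoli\tsm\baclient"
dsmcutil list
dsmcutil remove /name:"TSM Client Scheduler" /node:MYNODE
dsmcutil remove /name:"TSM Remote Client Agent" /node:MYNODE
rem then reinstall: web client via the wizard first, scheduler (using CAD) second
```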

good luck!

Paul


- Original Message -
From: Charlie Hurtubise [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Tuesday, March 09, 2004 9:04 AM
Subject: CAD TSM errors - Win Client on W2003 Server CAD Error


 Nagy,

 I submitted this error help request in January, but got little response..
probably because no one experienced it yet. I'm a brand new TSM users so all
is new.. and our Win servers are all at 2003 now. One response came through
but it was something I had already tried. But I think being pioneers with
2003 & the latest TSM, we are just getting arrows in the back (including you
now). IBM may help, but I haven't changed IBM Tivoli support to my name yet
from an ex-employee. This only affects remote web access to do
backups/restores (via port 1581), but local or terminal services access on
the 2003 sever console work fine. It appears it may be just a registry fix
CadSchedName registry value is empty ?

 Page 17 in the Dec-03 5.2.2 manual, or page 14 in the older 5.2 manual
Configuring the Web Client lists the steps. I have to now turn the CAD
service off after or it tries to start it every 10 minutes.. see
dsmerror.log.

 Anyway, I just updated my Win2003 server to TSM Windows Client version
5.2.2.5 (connecting with a Linux 5.2.2.1 TSM server, but this doesn't
matter) and still the same error. It appears like this in the TSM
dsmerror.log...

 03/04/2004 14:59:58 Error starting schedule service: CadSchedName registry
value is empty
 03/04/2004 14:59:58 ANS1977E Dsmcad schedule invocation was unsuccessful -
will try again.
 03/04/2004 15:09:58 Error starting schedule service: CadSchedName registry
value is empty
 03/04/2004 15:09:58 ANS1977E Dsmcad schedule invocation was unsuccessful -
will try again.
 03/04/2004 15:19:58 Error starting schedule service: CadSchedName registry
value is empty
 03/04/2004 15:19:58 ANS1977E Dsmcad schedule invocation was unsuccessful -
will try again.

 Here's the original e-mail.

 Thanks
 Charlie Hurtubise
 Tecsys Inc.
 [EMAIL PROTECTED]

 -Original Message-

 From: Charlie Hurtubise

 Sent: Wednesday, January 21, 2004 1:38 PM

 To: '[EMAIL PROTECTED]'

 Subject: Win Client on W2003 Server CAD Error

 Importance: High

 Hello.

 Getting CAD Service start-up errors on a Win 2003 Server for Client data
Backup only. Anyone else have experience here or IBM?

 Details...

 I have installed TSM Client 5.2.2 on a new Win2003 server to backup non C:
disk data. All is fine except when I try to use the CAD daemon (service),
page 14 in the Tivoli 5.2 Users Guide for Windows Configuring the Web
Client. This is to access Tivoli on the W2003 server via http 1581 to do
restores. The W2003 GUI Tivoli NT console client works fine and the auto
backups work fine using the regular Win service with and without the
MANAGEDSERVICES setup in dsm.opt.

 I have tried the auto web access setup way using the console GUI client
(page 14) and the manual way (page 482).

 When trying the http 1581 access, all starts up well until you have to
login, then you receive on your browser screen

 ...ANS2619S The Client Acceptor Daemon was unable to start the Remote
Client Agent

 in the dsmerror.log as follows...

 01/12/2004 16:10:48 Error starting agent service: The service name, '', is
invalid.

 01/12/2004 16:10:48 Error starting Remote Client Agent.

 01/12/2004 16:10:59 Error starting agent service: The 

Re: Netware root administrator access denied?

2004-03-09 Thread Paul Fielding
That being the case any thoughts on why the root admin would be denied
access to the TSA via TSM assuming the correct account and pass are used?

paul

- Original Message -
From: David Longo [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Tuesday, March 09, 2004 3:00 PM
Subject: Re: Netware root administrator access denied?


 I think most use the Netware root admin, we do.

 David Longo

  [EMAIL PROTECTED] 03/09/04 04:43PM 
 Anyone have any problems with giving Netware root admin privileges to TSM?
Using a 4.1.3 client on a 4.1 netware server, we get the following when
trying to use the admin account with full root privs:

 03/09/2004 13:42:54 ANS1874E Login denied to NetWare Target Service Agent
'[TREENAME1]'.
 03/09/2004 13:42:54 ANS1874E Login denied to NetWare Target Service Agent
'[TREENAME1]'.
 03/09/2004 13:48:21 ANS1874E Login denied to NetWare Target Service Agent
'[TREENAME2]'.
 03/09/2004 13:48:21 ANS1874E Login denied to NetWare Target Service Agent
'[TREENAME2]'.

 if we use an account that has separate privs to the trees, we can log in
fine, but cannot  backup the NDS and the web GUI doesn't show the NDS.

 we've confirmed that the admin account we're trying to use does indeed
still work and the password is correct.

 Thoughts?

 Paul




Re: Script to checkout tapes

2004-03-05 Thread Paul Fielding
You're opening a big can of worms doing it with only 1 drive.  That, and tape
reclamation is a pain in the butt for tapes that aren't left in the library.
Are you planning on taking just
the exchange data out of the library, or both the exchange data *and* the flat
file data for the box?

I don't have a script for you, but I might try the following:

- Create a separate tapepool to send your exchange data to. This will make it
easier to get the tapes out of the library.

- send the exchange data to a mgmt class whose copy group points to this
tapepool.  You may wish to use a separate policy domain to be absolutely sure
that all data goes to the right place. (Different people have differing
opinions on this).

- If you're just doing exchange db data, each tape will eventually expire all
its data and you can check it back in as scratch.

- If you're also taking out the flat file data, I would do an archive of the
flat file data instead of a backup, send the archive copy group to this new
tapepool.  Set the copy group retention to expire about the same time as the
exchange data will roll off.  In this way the tape will become completely empty
at the right time, without the need for reclamation.

This is a bit simplified, but it might give you a direction to start looking.
There's several variations on a theme here.  If you're keeping the tapes on a
shelf and need to be able to mount them at a moment's notice, you might want to
look into the Move Media command (as opposed to move drmedia).  It will allow
you to remove tapes from the library yet keep them readily available for tape
mounts.
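The distinction can be sketched with the two commands side by side - the pool name is hypothetical, and the states/statuses should be adjusted to your setup:

```text
/* DRM-tracked offsite vaulting (tapes counted as offsite) */
move drmedia * wherestate=mountable tostate=vault

/* shelf storage that stays mountable on request */
move media * stgpool=exchpool wherestate=mountableinlib wherestatus=full
```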

Good luck!

Paul

Quoting Kevin Godfrey [EMAIL PROTECTED]:

 I am going to install an Exchange server with a TSM client and backup
 all the data to our TSM server. Every day I would like the TSM server to
 checkout the volume/s that are used by the Exchange node (we are going
 to be taking a full backup each day). Does anyone out there have an
 example of a script that could help automate the checkout process? FYI
 our TSM library only has one drive and no copy storage pools.

 Thanks

 Kevin







Re: Archive Domino mail forever

2004-03-04 Thread Paul Fielding
You can certainly do it - in the copy group of the management class your Domino
data is bound to, set the following:

verexists = nolimit
verdeleted = nolimit
retextra = nolimit
retonly = nolimit

However, don't forget these other two resultant settings:

size of library = not big enough
$$$ for new tapes and library hardware = nolimit
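Expressed as server commands, the settings above would be something like the following (the domain, policy set, and management class names here are hypothetical):

```text
update copygroup dominodom standard dominomc standard type=backup verexists=nolimit verdeleted=nolimit retextra=nolimit retonly=nolimit
validate policyset dominodom standard
activate policyset dominodom standard
```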

regards,

Paul

Quoting Hector Chan [EMAIL PROTECTED]:

 Hi All,

 Is it possible to archive Domino database and mail forever using the TDP
 module?

 Thanks in advance.


 /hector







Netware 4.1, Client 4.1.3 and Network Failure during backup?

2004-03-03 Thread Paul Fielding
Hi all,
Having dug through the archives I haven't found anything that looks quite like
this one, so I thought I'd throw it out there.

I'm currently implementing TSM at a site running several Netware 4.1 servers.
Understanding that it's no longer a supported platform, they still have a need
to back them up, so we've installed the TSM 4.1.3 client, as it's the last
client that reports supporting Netware 4.1 (anyone know otherwise?)

The Netware box is running the last available service pack for 4.1 before
support was discontinued.

Roughly two hours into the scheduled backup, it appears that all network
connectivity to the Netware server died.  The server was still running, but
could only be reached on local console.  Both IPX and TCP were knocked out.

On the TSM client side, the only messages were TCP communications failures
(rc -50), and on the Netware console side we saw the following:

NAV NLM WARNING Error
NAV NLM Logged out of NDS, Continuing in bindery mode
NAV NLM WARNING Error
NAV NLM Logged back into Directory Services

No idea if it's directly or indirectly related to our problem.

Networking came back when we rebooted the server.

Anyone have any ideas?

regards,

Paul



Re: Fw: Netware 4.1, Client 4.1.3 and Network Failure during backup?

2004-03-03 Thread Paul Fielding
That's actually a good question.  The Netware admin had rebooted it by the time
I came in.   We'll run another test tonight and if it freezes again i'll make
sure we test the connectivity more thoroughly...

regards,

Paul

Quoting [EMAIL PROTECTED]:






 Can you ping the server when the server is no longer accessible? Does the
 server hold a copy (Master or R/W) of the replica? If the server is still
 on the wire but not accessible by a Novell Client, it could just be the NDS
 loggout..
 ___

 Roger Nadler
 Manager Plant Engineering/Network Technologies
 Office (856) 582-3212
 Fax (856) 256-2901


 Text Messaging:[EMAIL PROTECTED]

 Sony Disc Manufacturing
 400 North Woodbury Road
 Pitman, NJ 08071
 ___

From: [EMAIL PROTECTED]
Date: 03/03/2004 03:37 PM
To: Roger Nadler/PT/Disc-US/[EMAIL PROTECTED]
Subject: Re: Fw: Netware 4.1, Client 4.1.3 and Network Failure during backup?










 Hi Roger,
 The prerequisite NLMs are at the same or higher versions than is listed in
 the
 readme.  There isn't anything that I know of auto logging out users, and
 nothing in the log seemed to indicate this.   It appears that the entire
 network stack(s?) got hammered, perhaps the getting logged out of the NDS
 is
  simply a symptom of this.   The fact that the server is *completely*
  inaccessible via network via IPX or TCP until the reboot was done is quite
 disconcerting :)

 regards,

 Paul


 Quoting [EMAIL PROTECTED]:
  The Novell Client for 4.x requires the TSA410.NLM as well as the
 TSANDS.NLM
  (The versions required should be in the readme). These agent nlm's are
 what
  allows TSM to open the files for backup. It appears from your messages
 that
  the server connections to the NDS are being broken (logout). That is why
   your antivirus is also logged out. This will cause TSM to break its
   connections. Is it possible that you are running something that will auto
   logout users? The dsm.opt file specifies which NetWare user to be logged in as.
  Look in your console log and see if that user is being logged out at the
  time the connection is broken..










Re: Fw: Netware 4.1, Client 4.1.3 and Network Failure during backup?

2004-03-03 Thread Paul Fielding
As it turns out, we couldn't ping the server when it became inaccessible.
There's a monitor server here that does regular ping tests on the site, and it
recorded the Netware server in question as going down roughly the same time our
backup died, and TCP didn't come back online until we rebooted the box in the
morning

regards,

Paul

Quoting [EMAIL PROTECTED]:






 Can you ping the server when the server is no longer accessible? Does the
 server hold a copy (Master or R/W) of the replica? If the server is still
 on the wire but not accessible by a Novell Client, it could just be the NDS
 loggout..
 ___

 Roger Nadler
 Manager Plant Engineering/Network Technologies
 Office (856) 582-3212
 Fax (856) 256-2901


 Text Messaging:[EMAIL PROTECTED]

 Sony Disc Manufacturing
 400 North Woodbury Road
 Pitman, NJ 08071
 ___

From: [EMAIL PROTECTED]
Date: 03/03/2004 03:37 PM
To: Roger Nadler/PT/Disc-US/[EMAIL PROTECTED]
Subject: Re: Fw: Netware 4.1, Client 4.1.3 and Network Failure during backup?










 Hi Roger,
 The prerequisite NLMs are at the same or higher versions than is listed in
 the
 readme.  There isn't anything that I know of auto logging out users, and
 nothing in the log seemed to indicate this.   It appears that the entire
 network stack(s?) got hammered, perhaps the getting logged out of the NDS
 is
  simply a symptom of this.   The fact that the server is *completely*
  inaccessible via network via IPX or TCP until the reboot was done is quite
 disconcerting :)

 regards,

 Paul


 Quoting [EMAIL PROTECTED]:
  The Novell Client for 4.x requires the TSA410.NLM as well as the
 TSANDS.NLM
  (The versions required should be in the readme). These agent nlm's are
 what
  allows TSM to open the files for backup. It appears from your messages
 that
  the server connections to the NDS are being broken (logout). That is why
   your antivirus is also logged out. This will cause TSM to break its
   connections. Is it possible that you are running something that will auto
   logout users? The dsm.opt file specifies which NetWare user to be logged in as.
  Look in your console log and see if that user is being logged out at the
  time the connection is broken..










IPX under TSM 5.2?

2003-12-22 Thread Paul Fielding
I have a need to get a Novell 4.x box backing up to a new TSM implementation
under IPX/SPX.  My current understanding is that IPX is no longer supported, so
my two quick questions are these:

1. Does "not supported" mean "will work but not supported", or does it
mean "won't work"?

2. Assuming it won't work, anyone have any idea what order of magnitude is
involved in adding a TCP stack to Novell 4.x ?  Before I go recommending this
option I wouldn't mind having some idea of the viability of it...

regards,

Paul





Stopping BRBACKUP from doing a logswitch with TDP R3?

2002-05-06 Thread Paul Fielding

Hi all,

I'm working with someone who's doing a BCV split on EMC disk, then using the TDP for 
R3 to back up the database (SAP oracle).  Leaving aside questions on whether or not 
the TDP should be used here (I've already had that discussion), we're running into a 
problem where we can restore the database no problem, but rolling forward logs to a 
further point in time is not working.

It looks like what is happening is that when BRBACKUP runs, it first brings up the 
database and does a logswitch, then brings down the database again and kicks in the 
backup.

The problem is, as soon as that logswitch happens, the log sequencing has now changed 
from the production database - restoring logs from the production database to do a 
roll-forward recovery no longer matches the database we've backed up.  It looks like 
we need to somehow prevent BRBACKUP or the TDP from bringing the database up, or at 
least from doing the logswitch.

Any thoughts?

regards,

Paul



TDP R3 keeping monthly and yearly for different retentions?

2002-04-29 Thread Paul Fielding

Hi all,

I did some poking around the list and didn't see anything on the subject.

Does anybody have a good method for doing monthly and yearly backups of an R3 (Oracle) 
database using the TDP for R3? I have a requirement to maintain daily backups for 2 
weeks, monthly backups for 3 months, and yearly backups for 7 years.   Superficially, 
it appears straightforward to set up different server stanzas within the TDP 
profile for different days of the week, but that's about it.

I suspect that I could get extra fancy and write a script to do a flip of the profile 
to an alternate profile file on the appropriate days, and have it flip back when it's 
done, but that seems like a bit of a band-aid to me and I'm wondering if anyone's come 
up with something better?
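The profile-flip idea can be sketched as a small pre-backup wrapper. A minimal sketch, assuming hypothetical profile names (initFVO2.utl.daily and friends); the actual copy and brbackup invocation are site-specific, so they're shown only as comments:

```shell
#!/bin/sh
# Choose a TDP for R/3 profile name from the day and month, so daily,
# monthly, and yearly backups land under different retention tiers.
pick_profile() {
    day=$1; month=$2
    if [ "$day" = "01" ] && [ "$month" = "01" ]; then
        echo initFVO2.utl.yearly    # kept 7 years
    elif [ "$day" = "01" ]; then
        echo initFVO2.utl.monthly   # kept 3 months
    else
        echo initFVO2.utl.daily     # kept 2 weeks
    fi
}

PROFILE=$(pick_profile "$(date +%d)" "$(date +%m)")
echo "selected profile: $PROFILE"
# A real wrapper would now copy $PROFILE over the live profile and run
# the backup; paths here are hypothetical:
# cp /oracle/FVO2/dbs/"$PROFILE" /oracle/FVO2/dbs/initFVO2.utl
# brbackup ...
```

It's still a band-aid, but keeping the flip inside one wrapper at least avoids juggling cron entries per retention tier.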

regards,

Paul



Archive not grabbing all directories

2001-03-22 Thread Paul Fielding

I did some snooping and couldn't see anyone mention this one.   I've got a script 
running on one box (HPUX 11, 4.1.2 client) to run an archive:

dsmc archive -se=tsm_x1_apps -archmc=6month -subdir=yes \
 -desc="ADSM X1-FVO2 Online Backup" \
 "/etc/*.ora" \
 "/p/util/oracle/adm/FVO2/*" \
 "/db/x1_oravlB1/ORACLE/FVO2/*" \
 "/db/x1_oravlB2/ORACLE/FVO2/*" \
 "/db/x1_oravlB3/ORACLE/FVO2/*" \
 "/db/x1_oravlB4/ORACLE/FVO2/*" \
 "/db/x1_oravlB5/ORACLE/FVO2/dbf/*" \
 "/db/x1_oravlB5/ORACLE/FVO2/ctl/*" \
 "/db/x1_oravlB5/ORACLE/FVO2/log/*" \
 > /p/util/oracle/admin/dba/FVO2/backups/FVO2_online.`date +\%y\%m\%d` 2>&1 </dev/null


When the script runs, it goes as far as grabbing the oravlB1 directory and then stops. 
 No error message, nothing.  As if the script ended at that point.  It correctly dumps 
to the log file noted, so I know it sees the entire line.

We have a whack of other boxes running scripts in this format that have no trouble.
Any thoughts?
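One way to narrow this down is to archive each filespec in its own dsmc invocation and record the exit status, so a silent failure on one spec doesn't take out the rest. A minimal sketch (the log location is hypothetical and only the first few filespecs are shown):

```shell
#!/bin/sh
# One "dsmc archive" per filespec, logging each return code, so we can
# see exactly which spec the client dies on.
LOG=/tmp/archive_debug.log          # hypothetical log location
: > "$LOG"

for spec in \
    "/etc/*.ora" \
    "/p/util/oracle/adm/FVO2/*" \
    "/db/x1_oravlB1/ORACLE/FVO2/*" \
    "/db/x1_oravlB2/ORACLE/FVO2/*"
do
    # Quoting keeps the shell from expanding the glob; dsmc gets it as-is.
    dsmc archive -se=tsm_x1_apps -archmc=6month -subdir=yes \
        -desc="ADSM X1-FVO2 Online Backup" "$spec" >> "$LOG" 2>&1
    echo "spec=$spec rc=$?" >> "$LOG"
done
```

If the last "rc=" line in the log always names the same spec, that directory (or its quoting) is the culprit.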

Paul



Can't schedule backup of network shares

2000-11-21 Thread Paul Fielding

I'm trying to back up a couple of network mounted drives.  In the past I've done this 
by simply adding the drives to the domain list.   This time, however, I'm having 
problems getting them to run, and I suspect that, since I am using the NT 4.1.1.0 
client this time, it is either a) a change in the config I need to make, or b) a 
bug in the code.  In my dsm.opt file I have tried the domain statement each of these 
two ways:

(N: and O: are the network shares)

DOMAIN "C:"
DOMAIN "D:"
DOMAIN "E:"
DOMAIN "N:"
DOMAIN "O:"

and

DOMAIN C: D: E: N: O:

In both cases, this is what I see in the dsmsched.log:

Executing scheduled command now.
11/21/2000 10:20:35 --- SCHEDULEREC OBJECT BEGIN TEST 11/21/2000 10:20:00
11/21/2000 10:20:39 Incremental backup of volume '\\RTSM\C$'
11/21/2000 10:20:39 Incremental backup of volume '\\RTSM\D$'
11/21/2000 10:20:39 Incremental backup of volume '\\RTSM\E$'
11/21/2000 10:20:39 Incremental backup of volume 'N:'
11/21/2000 10:20:39 Incremental backup of volume 'O:'

However, neither the N: nor the O: drive are touched.

Any ideas?

Paul



How to turn off Application Log msgs on NT TSM Server?

2000-11-21 Thread Paul Fielding

Hi all,

I'll be darned if I can find this.

Running a TSM 4.1 server, I want to disable the message output to the NT Application 
Event log.

Anyone know where to do it?

Paul



Bare Metal of Domino?

2000-11-01 Thread Paul Fielding

Ok, so here's a question based on my lack of knowledge of Domino.  Going through the 
Domino Red Book, during the section on rebuilding a Domino server from scratch, I get 
to the line that says -

"Re-install and configure the Domino Server".

The next steps are to set up the Domino client and basically start restoring.

This seems a bit thin for a Red Book.

My question is  - How much 'configuration' needs to be done before you can start 
restoring Domino databases?  Obviously, I'd like to configure as little as is 
necessary to be able to restore everything...

Thanks in advance...

Paul



AIT-2 getting 25, not 50 GB

2000-10-27 Thread Paul Fielding

Hi All.  I've got a set of AIT-2 drives that I'm setting up, using 50 GB native tapes. 
 As my first tapes have started becoming full, I'm finding I'm only getting just over 
25 GB on the tapes.

Based on what I've read on the list, it looks like people have found the estimated 
capacity to be 20 GB, but that when the tape filled they still got the capacity they 
should be getting.  I'm, however, not seeing this.

I've got the devclass set to type 8mm, format AITC.  Now, here's a possible hitch.  
Initially, I labeled a whack of tapes before I realized that the format was still set 
to DRIVE as opposed to AITC.  Is it possible that labeling the tapes without the AITC 
format specified is causing the problem?  I suspect not, but ya never know... shrug  
If it is the problem, anyone know a quick'n dirty way to wipe the tapes clean enough 
(on NT) to allow a clean fresh labelling?
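For what it's worth, a hedged sketch of what I'd check from the admin command line (the device class and library names here are hypothetical, and the syntax is from memory, so verify with HELP on your server):

```
/* confirm the device class really shows FORMAT=AITC, not DRIVE */
query devclass AIT2CLASS format=detail
update devclass AIT2CLASS format=AITC

/* rewriting the label in place should be enough to "wipe" a tape */
label libvolume AITLIB A00001 overwrite=yes
```

If the tapes were filled while FORMAT was still DRIVE, relabelling them after the devclass fix would at least rule the old labels out as the cause.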

Thanks muchly

Paul



Backing up db2/nt?

2000-10-26 Thread Paul Fielding

I've been digging all night through db2 manuals and red books, trying to find 
something that would reasonably describe a straightforward way to backup and restore a 
db2 5 or 6 database on NT.

The Databases Redbook is 2 years old and concentrates on AIX and OS/2, aside from 
reading like C code.

The NT recovery book provides a 1000-foot view that doesn't tell you anything 
useful.

And I'll be darned if I can find anything in the DB2 manuals.

Does anyone have a short, concise, to the point, working description of how to 
successfully do this?  I'll buy you a case of virtual beer if you'd be willing to 
share... :)
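For what it's worth, the short version as I understand it (hedged: this is generic DB2 CLP syntax from the v5/v6 era, where the keyword is ADSM rather than TSM, and the database name SAMPLE and management class are stand-ins; check your level's Command Reference):

```
-- point the database at a management class via the ADSM_* db cfg parameters
db2 update db cfg for SAMPLE using ADSM_MGMTCLASS standard

-- back up to the ADSM/TSM server, then restore a specific image
db2 backup database SAMPLE use adsm
db2 restore database SAMPLE use adsm taken at 20001026120000
```

The DB2 side manages its own versions on the server, which is why the manuals spend most of their pages on the retention plumbing rather than the two commands above.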

Paul

P.S. Replying to the group is fine, but if not replying to the group, please use 
[EMAIL PROTECTED] . for responses
thanks.



reclamation issue and an ANR9999D

2000-10-18 Thread Paul Fielding

Here's one.  I've got a server that dies during reclamation on a copy pool tape with 
the following error:

ANR9999D aferase.c(528): Invalid logSizeRatio 4048033.806786
(logSize=0.372, size=0.1061, aggrSize=0.4860) for aggregate 0.9306677

Based on what I've read around here I did a bit of research on the old reclamation 
issues and the Audit Reclaim, etc. tools, since it looked like that was the problem 
here.

However, Audit Reclaim says that it 'isn't required' and doesn't run.

Simply ditching the tape would be perfectly acceptable (since it's just a copy pool 
tape), but any attempt to delete the volume with a discarddata=yes just returns the 
same error.

It's an AIX server currently at 3.1.2.15.  Yes, it most certainly is problem code and 
needs to be updated, but I'm a little bit concerned about attempting to update the 
code before fixing the issue. Should I be?

Anyone have any ideas?

regards,

Paul



3590e drives - What's your best *realistic* throughput?

2000-10-12 Thread Paul Fielding

I'm looking to know what I could *realistically* expect from a 3590e drive
during a *restore*, assuming the tape drive is the bottleneck, i.e. disk is
fast enough, bus is wide enough, everything is locally attached, the machine is
super zippy, and we're restoring BIG files.

Certainly the theoretical max is something along the lines of 14 MB/s, but
I'm wondering how close people really come to that?

thanks in advance...

Paul