Re: SNMP on Windows2003

2004-07-06 Thread Mr. Lindsay Morris
Well, you *can* use Servergraph to send SNMP traps to TEC or OpenView, etc.
It's nice because your TEC people don't have to figure out which of the
3,000 error messages they need to handle - Servergraph only sends 220 or so
different messages, with suggestions as to how to handle each one.

Hope this helps.

-
Mr. Lindsay Morris
Lead Architect
www.servergraph.com
512-482-6137 ext 105

 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] Behalf Of
 Brian Ipsen
 Sent: Tuesday, July 06, 2004 2:22 AM
 To: [EMAIL PROTECTED]
 Subject: Re: SNMP on Windows2003


 Hi,

 Forget SNMP support on a Windows-based TSM server. Some time
 ago, when a major security flaw was discovered in almost all SNMP
 implementations, it seems IBM decided not to update the
 required component for Windows - together with a statement that
 SNMP v1 is insecure. I don't know why they haven't implemented
 SNMP v3 or similar, but I've had quite a support struggle with my
 supplier, because the manual and also the course material (for
 5.1) say SNMP is supported - they just forgot to write that it
 isn't possible on a Windows-based server.

 Regards,

 Brian


 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] Behalf Of
 Bill Boyer
 Sent: July 6, 2004 01:09
 To: [EMAIL PROTECTED]
 Subject: SNMP on Windows2003


 Anyone out there (successfully) set up a TSM 5.2.2.5 server on Windows2003
 with SNMP to talk to an HP OpenView server?? The documentation is somewhat
 vague on how to accomplish this.

 Bill Boyer
 An Optimist is just a pessimist with no job experience.  - Scott Adams

 ---
 This mail was scanned for virus and spam by Progressive IT A/S.
 ---




Re: TSM DB growing, but number of files remains the same ...

2004-05-04 Thread Mr. Lindsay Morris
I heartily agree with Richard's note about using the ACCOUNTING (not the
activity) log.

We tried several things with Servergraph, and found that:
   -- activity log messages are sometimes erroneous
   -- summary table data is sometimes unreliable
   -- frequent queries against these tables can impact the TSM server's
performance.

Servergraph uses the accounting log extensively for these reasons.  Rock-solid
data there, because it's written by the TSM server, not by the client software.

-
Mr. Lindsay Morris
Lead Architect
www.servergraph.com
512-482-6137 ext 105

 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] Behalf Of
 Ted Byrne
 Sent: Tuesday, May 04, 2004 9:16 AM
 To: [EMAIL PROTECTED]
 Subject: Re: TSM DB growing, but number of files remains the same ...


 Arnaud,


 "I believe what I have to do now is to build some solid queries to explore
 our activity log..."

 Unless I'm mistaken, I believe that you may have misread Richard's
 advice.  (Correct me if I'm wrong, Richard.) His recommendation:

 "I stress accounting log reporting because it is the very best, detailed
 handle on what your clients are throwing at the server"

 Accounting log records are comma-delimited records written to a flat file
 called dsmaccnt.log, assuming that you have accounting turned on.  If you
 don't, I would recommend that you turn it on now...

 The accounting log records give a very detailed,
 session-by-session picture
 of the activity between the server and clients. It will be easier to parse
 and process than what you can get out of the activity log.
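Ted's point about the comma-delimited records can be sketched in a few lines. This is purely illustrative: the field positions below (5 for node name, 14 for a bytes counter) are assumptions, not the documented layout - check the accounting-record appendix of your server's Administrator's Guide for your TSM level.

```python
import csv

# Sketch: sum a per-node byte counter across dsmaccnt.log records.
# ASSUMPTION: field 5 = node name, field 14 = a bytes counter; the
# real layout is documented per TSM level and may differ.
def parse_accounting(lines, node_field=5, bytes_field=14):
    totals = {}
    for row in csv.reader(lines):
        if len(row) <= max(node_field, bytes_field):
            continue                      # partial / short record
        node = row[node_field].strip()
        try:
            nbytes = int(row[bytes_field])
        except ValueError:
            continue                      # non-numeric field, skip
        totals[node] = totals.get(node, 0) + nbytes
    return totals

# Two made-up records for a node "NODEA":
sample = [
    "3,8,TSM1,07/06/2004,02:00:01,NODEA,,,,,,,,,1048576,0",
    "3,8,TSM1,07/06/2004,02:05:12,NODEA,,,,,,,,,524288,0",
]
print(parse_accounting(sample))   # {'NODEA': 1572864}
```

The same loop extends naturally to per-session reporting; the flat-file format is exactly why it's easier to process than activity-log queries.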

 Ted



Re: Querying only errors?

2004-03-17 Thread Mr. Lindsay Morris
We (Servergraph) have a completely different trick:
we filter out the NORMAL messages, and display everything ELSE as a possible
error (and we boil them down so the volume is not overwhelming).

This is a lot smarter (IMHO), because, for example, when a TSM admin
fat-fingers a command, THAT gets an ANRxxxE message - an error - but it's
NOT really an error that matters.

OTOH, when a tape drive dies, THAT's an ANRxxxW message - just a
warning, but most of us would prefer to treat it as an error.

So you might try developing something like that.
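A bare-bones sketch of that inverse filter follows. The whitelist here is only an example - every site grows its own list of routine message IDs over time, and the second sample message ID is made up for illustration.

```python
import re

# ASSUMPTION: these four message IDs are "routine"; a real whitelist
# is site-specific and much longer.
NORMAL = {"ANR0406I", "ANR0403I", "ANR8337I", "ANR8468I"}

MSG_ID = re.compile(r"\b(AN[RE]\d{4}[IWESD])\b")

def interesting(lines):
    """Keep activity-log lines whose message ID is NOT whitelisted -
    regardless of whether the ID ends in I, W, or E."""
    keep = []
    for line in lines:
        m = MSG_ID.search(line)
        if m and m.group(1) not in NORMAL:
            keep.append(line)
    return keep

log = [
    "ANR0406I Session 42 started for node FS1.",
    "ANR9999W Example warning worth a look.",     # made-up message ID
]
print(interesting(log))   # only the ANR9999W line survives
```

Note the filter keeps an unknown "I" message just as readily as a "W" - which is the whole point of filtering by "not known-normal" instead of by severity letter.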

-
Mr. Lindsay Morris
Lead Architect
www.servergraph.com
512-482-6138 ext 105

 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] Behalf Of
 Karel Bos
 Sent: Wednesday, March 17, 2004 9:22 AM
 To: [EMAIL PROTECTED]
 Subject: Re: Querying only errors?


 q act s=anre for errors, and s=anrw for warnings.

 Regards,

 Karel

 -Original Message-
 From: Mike Bantz [mailto:[EMAIL PROTECTED]
 Sent: Wednesday, March 17, 2004 15:20
 To: [EMAIL PROTECTED]
 Subject: Querying only errors?


 Does anyone know how I'd filter out the activity log to only give
 me errors?
 I have a drive that keeps taking itself offline in the middle of the night
 (when I really don't want to be here) but I can't seem to dig out
 that time
 from the activity log. All I know is I'm coming in, taking a gander at the
 3583 and seeing a Drive or Media Error message on the display,
 hoping that
 TSM is reflecting this as well.

 TIA,

 Mike Bantz
 Systems Administrator
 Research Systems, Inc



Re: Anyone using Servergraph?

2004-02-24 Thread Mr. Lindsay Morris
(Trying not to slide into touting products here ...)

Smaller customers seem to like tsmmanager, perhaps because it's cheaper;
windows-only customers also liked it because Servergraph/Windows wasn't
available until recently.

Larger sites with tougher problems and more demanding requirements have
looked at both, and universally like Servergraph better.  Our sales group
can offer point-by-point comparisons (offline please).

Also, Bill Mansfield hosted a birds-of-a-feather session on TSM reporting
products at Oxford last year - he may have a write-up on that posted
somewhere.



-
Mr. Lindsay Morris
Lead Architect
www.servergraph.com
512-482-6138 ext 105

 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] Behalf Of
 Karel Bos
 Sent: Monday, February 23, 2004 2:51 AM
 To: [EMAIL PROTECTED]
 Subject: Re: Anyone using Servergraph?


 Just compare Servergraph with TSMmanager (www.tsmmanager.com)
 before buying
 Servergraph. We did and went for TSMmanager.

 Regards,

 Karel

 -Original Message-
 From: Nancy Reeves [mailto:[EMAIL PROTECTED]
 Sent: Thursday, February 19, 2004 18:05
 To: [EMAIL PROTECTED]
 Subject: Anyone using Servergraph?


 I have Servergraph/TSM on a trial right now and would like first hand
 opinions from other sites using it.

 Please reply privately.

 Nancy Reeves
 Technical Support, Wichita State University
 [EMAIL PROTECTED]  316-978-3860



Re: Anyone using Servergraph?

2004-02-20 Thread Mr. Lindsay Morris
Our customers generally give us VERY high marks for support!
But our customer base has been growing very fast.
We recently installed an automated help-desk application,
but it didn't work well at first; it's fixed now.

If we knew who "rh" was, we'd be glad to resolve any problems.
Our Windows build has been available for a few months now;
perhaps it was not ready when he/she called.

Growing pains ... but we always appreciate customer feedback.

-
Mr. Lindsay Morris
Lead Architect
www.servergraph.com
512-482-6138 ext 105

--- rh [EMAIL PROTECTED] wrote:
 I found Servergraph to be unresponsive to my questions
 when I was investigating their products. I've asked
 multiple times about information on their lite and
 their Windows offerings without any response.


--- Pearson, Dave DCPearson AT SNOPUD DOT COM wrote:
 We use it too.. Very helpful product and very
 supportive support too...

 ~David Pearson~


 -Original Message-
 From: Crespo, Pedro A.
 [mailto:Pedro.Crespo AT INTEGRIS-HEALTH DOT COM]
 Sent: Thursday, February 19, 2004 9:17 AM
 To: ADSM-L AT VM.MARIST DOT EDU
 Subject: Re: Anyone using Servergraph?

 Great product and awesome support - as you can tell,
 we have had a very
 positive experience with both.


 Pedro A. Crespo
 Technology Architect
 Integris Health - IT
  N. Grand, Ste. 100
 Oklahoma City, OK 73112

 Pedro.Crespo AT Integris-Health DOT com
 405-951-9800 -fax
 405-949-6996 -voice

 -Original Message-
 From: ADSM: Dist Stor Manager
 [mailto:ADSM-L AT VM.MARIST DOT EDU] On Behalf Of
 Nancy Reeves
 Sent: Thursday, February 19, 2004 11:05 AM
 To: ADSM-L AT VM.MARIST DOT EDU
 Subject: Anyone using Servergraph?

 I have Servergraph/TSM on a trial right now and
 would like first hand
 opinions from other sites using it.

 Please reply privately.

 Nancy Reeves
 Technical Support, Wichita State University
 Nancy.Reeves AT wichita DOT edu  316-978-3860


Re: Upgraded to TSM 5.2.1.2

2003-12-10 Thread Mr. Lindsay Morris
Amen to that!

-
Mr. Lindsay Morris
Lead Architect
www.servergraph.com
512-482-6138 ext 105

 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] Behalf Of
 Robin Sharpe
 Sent: Wednesday, December 10, 2003 11:14 AM
 To: [EMAIL PROTECTED]
 Subject: Re: Upgraded to TSM 5.2.1.2


 This is a problem that keeps getting fixed and then coming back  in
 almost every release!

 IMHO, the real problem is the way the STATUS field is formatted in the
 PROCESSES table.  It's formatted to be readable by humans.  It's very
 difficult to parse in scripts because it contains new lines to make it
 look nice on screen.  Some of the information in STATUS is not available
 anywhere else (Like current input and output volume, waiting for mount of
 whatever volume, etc.), and some is redundant (files processed and bytes
 processed)... however I have found that the files and bytes in the STATUS
 field get updated before (maybe more frequently than) the FILES_PROCESSED
 and BYTES_PROCESSED fields.

 I wish all of the info in the STATUS field could be placed in individual
 fields in the PROCESSES table, and fix this problem once and for all.  It
 would save us script writers lots of rework time for each new release!
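Until that happens, here's the kind of heuristic we're all stuck writing: a sketch that folds wrapped STATUS continuation lines back into one record per process. It assumes a record begins with the numeric process number - which is exactly the sort of guess that breaks between releases.

```python
import re

def fold_records(lines):
    """Re-join 'q proc' / PROCESSES output in which the STATUS column
    wrapped onto extra lines: a line starting with a process number
    opens a record; anything else is folded into the previous one."""
    records = []
    for line in lines:
        if re.match(r"\s*[\d,]+\s", line) or not records:
            records.append(line.strip())
        else:
            records[-1] += " " + line.strip()
    # collapse the embedded newlines/spacing into single spaces
    return [re.sub(r"\s+", " ", r) for r in records]

out = fold_records([
    "2,917  Space Reclamation  Volume ABC123 is",
    "       being processed.",
])
print(out)   # ['2,917 Space Reclamation Volume ABC123 is being processed.']
```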

 Robin Sharpe
 Berlex Labs


 From: Ben Bullock [EMAIL PROTECTED]
 Sent by: ADSM: Dist Stor Manager [EMAIL PROTECTED]
 Date: 12/10/03 10:10 AM (please respond to ADSM: Dist Stor Manager)
 To: [EMAIL PROTECTED]
 cc:
 Subject: Re: Upgraded to TSM 5.2.1.2

 Yes, we upgraded to 5.2.1.3 on AIX and have the odd formatting
 in the q proc output. No fix that I know of, luckily it is just
 cosmetic as far as I can tell.

 Ben

 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of
 Kamp, Bruce
 Sent: Wednesday, December 10, 2003 5:58 AM
 To: [EMAIL PROTECTED]
 Subject: Upgraded to TSM 5.2.1.2


 I just upgraded to TSM 5.2.1.2 from 5.1.1.6 on AIX 5.1 ML5 64-bit. One
 thing I noticed is that when I do a Q PR it is not formatted correctly
 on the screen.  Has anybody else seen this, and is there a fix for it?
 Also, is there anything else I need to watch out for?

 Thanks,
 -
 Bruce Kamp
 Midrange Systems Analyst II
 Memorial Healthcare System
 E-mail: [EMAIL PROTECTED] mailto:[EMAIL PROTECTED]
 Phone: (954) 987-2020 x4597
 Pager: (954) 286-9441
 Alphapage:  [EMAIL PROTECTED]
 Fax: (954) 985-1404
 -



Re: last backup info

2003-12-01 Thread Mr. Lindsay Morris
We have found that many TDP agents, especially older ones, don't bother
loading last-backup-date info into the TSM database when they complete.  I
think I've also seen this with some Mac clients.   But almost all the
filesystem clients DO load these fields correctly.

Another reason they're empty is that the schedule included something in the
"objects" field.  If you do an incremental backup, but limit it using the
objects= feature on the schedule, dsmc does not load these fields, presumably
because it can't be sure that things were fully backed up.

We have workarounds within Servergraph to get accurate backup status
reporting - contact me offline if that's your concern.

-
Mr. Lindsay Morris
Lead Architect
www.servergraph.com
512-482-6138 ext 105

 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] Behalf Of
 Gill, Geoffrey L.
 Sent: Monday, December 01, 2003 10:55 AM
 To: [EMAIL PROTECTED]
 Subject: last backup info


 Can anyone explain why the last backup start date/time, last backup
 completion date/time, and deletion occurred in filespace date/time would
 have no information in them for filespaces reported on some nodes? The
 capacity and percent util register data.

 Yesterday we had a SAN controller firmware upgrade and it seems one of the
 nodes on the SAN lost the data and a recovery is necessary. I'm
 not the one
 restoring the data nor do I have access to the node. I'm just looking from
 the TSM server side to see what is hopefully in TSM. I don't
 want to say I
 have a bad feeling something hasn't been backed up but I do want
 to have an
 educated answer if in fact there is no data.



 I understand the inclexcl list tells TSM what to back up, and I have no
 control of it. I have not asked to see this one yet but have a feeling I
 might need to. This particular node reports it is at 5.1.5.0, it
 is on Tru64
 platform. The server is on AIX 4.3.3, TSM 5.1.6.3.



 I see this same phenomenon for other filespaces on other nodes,
 although it
 has never struck me to ask why it is blank.

 Thanks for the help,




 Node Name: CP-ITS-DWPROD
 Filespace Name: /san2
 FSID: 49
 Filespace Type: ADVFS
 Capacity (MB): 416687.2
 Pct Util: 1.3
 Last Backup Start Date/Time: -
 Last Backup Completion Date/Time: -
 Deletion occurred in Filespace Date/Time: -
 Is Filespace Unicode?: NO
 Hexadecimal Filespace Name: -


 Geoff Gill
 TSM Administrator
 NT Systems Support Engineer
 SAIC
 E-Mail:   [EMAIL PROTECTED]
 Phone:  (858) 826-4062
 Pager:   (877) 905-7154



Re: [tsm] Summarizing use by unix group?

2003-11-10 Thread Mr. Lindsay Morris
Our chargeback logic goes down to the filespace level;
most customers we've spoken to want chargeback by application,
rather than by unix group.
They can say application X is made up of filespaces A, B, and C, so that's
how we've been working it.
Would this work for you?
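The roll-up described above is simple in pseudo-code terms. The mapping and the MB figures below are invented for illustration; in practice the per-filespace numbers would come from q occupancy or the OCCUPANCY table.

```python
# ASSUMPTION: a hand-maintained map from (node, filespace) to an
# application name, as described above.
APP_MAP = {
    ("NODE1", "/oradata"): "BillingDB",
    ("NODE1", "/oralogs"): "BillingDB",
    ("NODE2", "/san2"):    "Warehouse",
}

def chargeback(occupancy):
    """occupancy: iterable of (node, filespace, mb) tuples.
    Returns MB totals keyed by application."""
    totals = {}
    for node, fs, mb in occupancy:
        app = APP_MAP.get((node, fs), "Unassigned")
        totals[app] = totals.get(app, 0.0) + mb
    return totals

usage = [
    ("NODE1", "/oradata", 1200.0),
    ("NODE1", "/oralogs",  150.0),
    ("NODE2", "/san2",     400.0),
    ("NODE3", "/scratch",    5.0),   # nobody has claimed this one
]
print(chargeback(usage))
# {'BillingDB': 1350.0, 'Warehouse': 400.0, 'Unassigned': 5.0}
```

An "Unassigned" bucket is worth keeping: it shows which filespaces nobody has claimed yet, which is half the battle in chargeback.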

Also be aware that TSRM, and most SRM products, show you only what's on the
client's local disk; they don't show you what's in TSM storage.
Also, the idea of using unix groups for chargeback would break when you're
backing up windows boxes, right?

Appreciate your thoughts, Patrick (and everyone).
-
Mr. Lindsay Morris
Lead Architect
www.servergraph.com
512-482-6138 ext 105

 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] Behalf Of
 Patrick Audley
 Sent: Sunday, November 09, 2003 2:49 PM
 To: [EMAIL PROTECTED]
 Subject: Re: [tsm] Summarizing use by unix group?


 Lindsay: Are you looking at doing chargeback by unix groups?

 Yes, that's the primary reason.  We can certainly do it via owner
 but it's not as convenient.  We have TSRM installed but it seems not
 to be able to see our GPFS partitions though I haven't played that
 much with it yet.

 --
 Patrick Audley  [EMAIL PROTECTED]
 High Performance Computing Manager
 Computational Biology & Bioinformatics   http://www.compbio.dundee.ac.uk
 University of Dundee                     http://blackcat.ca
 Dundee, Scotland                         +44 1382 348721



Re: Summarizing use by unix group?

2003-11-09 Thread Mr. Lindsay Morris
I think not - at least not without expensive queries against the CONTENTS
table.
Still, if this is only a monthly snapshot, that might be doable.

Are you looking at doing chargeback by unix groups?

-
Mr. Lindsay Morris
Lead Architect
www.servergraph.com
512-482-6138 ext 105

 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] Behalf Of
 Patrick Audley
 Sent: Sunday, November 09, 2003 11:41 AM
 To: [EMAIL PROTECTED]
 Subject: Summarizing use by unix group?


  Does anyone know a query to do this?  I can do it by owner but it's
 really the groups that I'm interested in.  I'd like to be able to get
 a list of MB in storage pools used broken down by storage pool and
 group.   Is this possible?

 Thanks,
 Patrick.

 --
 Patrick Audley  [EMAIL PROTECTED]
 High Performance Computing Manager
 Computational Biology & Bioinformatics   http://www.compbio.dundee.ac.uk
 University of Dundee                     http://blackcat.ca
 Dundee, Scotland                         +44 1382 348721



[no subject]

2003-11-07 Thread Mr. Lindsay Morris
Winchester IO is really any block-device IO: disk or tape.
So it's not abnormal to see that;  but it does look pretty high.
Are you getting the performance you expect?
Does vmstat say you're paging at these times?

-
Mr. Lindsay Morris
Lead Architect
www.servergraph.com
512-482-6138 ext 105

 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] Behalf Of
 Juan Manuel Lopez Azanon
 Sent: Friday, November 07, 2003 4:53 AM
 To: [EMAIL PROTECTED]
 Subject:


 Hi all.

 I need your help for this.
 We have an HP-UX 11.0 host and TSM Server 5.1.5 running with an IBM
 3584 library connected to the server via an IBM switch, fibre channel and
 Tachyon cards.
 Every time the TSM server runs disk migration or storage-pool
 reclamation, the HP-UX server is all %wio:

 10:44:31  %usr  %sys  %wio  %idle
 10:44:32  14   1  85   0
 10:44:33  12   4  84   0
 10:44:34  16   2  82   0
 10:44:35   7   0  93   0
 10:44:36  18   3  79   0
 10:44:37   8   4  88   0
 10:44:38   3   0  97   0
 10:44:39  10   1  89   0
 10:44:40  25   2  73   0
 10:44:41  23   0  77   0

 Is it normal?



Re: Perl TSM daily reporting script.

2003-11-06 Thread Mr. Lindsay Morris
Nice job!
Hope you aren't planning to add real-time alerts,
trending/predictions, backup status without q events,
graphical display of server info, and stuff like that...  ;-}

-
Mr. Lindsay Morris
Lead Architect
www.servergraph.com
512-482-6138 ext 105

 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] Behalf Of
 Patrick Audley
 Sent: Thursday, November 06, 2003 8:14 AM
 To: [EMAIL PROTECTED]
 Subject: Perl TSM daily reporting script.


 I'm designing a TSM daily reporting script in perl and would like some
 feedback from anyone who has similar scripts or would like to use
 one.  I've put up a sample of the report that it generates at:

 http://blackcat.ca/tsm_report.txt

 If you have any suggestions, comments, or flames please pass them on.
 The script is public domain so if you'd like a copy, just drop me an
 email.

 --
 Patrick Audley  [EMAIL PROTECTED]
 High Performance Computing Manager
 Computational Biology & Bioinformatics   http://www.compbio.dundee.ac.uk
 University of Dundee                     http://blackcat.ca
 Dundee, Scotland                         +44 1382 348721



Re: Query Event for Previous day shows status In Progress

2003-11-05 Thread Mr. Lindsay Morris
Am I off base in suggesting that query events is not a 
reliable way to check backup status?
We've found that you can have:

--events that say "Missed", but you DO have a good backup
   (it started late, or was run manually, etc., but it WAS done)

--events that say "Completed", but you DON'T have a good backup
   (it "Completed" the kickoff of a TDP agent;
the agent failed an hour later)

--NO events at all for some nodes
   (your DBA may want to control the timing with crontab,
rather than have the TSM scheduler do it)

--skipped files, where events doesn't help you either.
   (outlook.pst, for example, usually stays open/locked, so
TSM misses it. The event status is "Completed", because it's
only one file...!!)

This is old news to some people, but maybe useful for others.
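For the curious, the workaround boils down to something like the sketch below: judge backup health by the filespace's last-backup-completion time instead of by event status. The threshold and dates are invented; the real completion times would come from q filespace f=d.

```python
from datetime import datetime, timedelta

def backup_ok(last_completion, now, max_age_hours=36):
    """Judge backup health by last-backup-completion time rather than
    event status.  None means the client (a TDP agent, say) never set
    the field - treat that as NOT backed up."""
    if last_completion is None:
        return False
    return now - last_completion <= timedelta(hours=max_age_hours)

now = datetime(2003, 11, 5, 12, 0)
print(backup_ok(datetime(2003, 11, 5, 1, 0), now))   # True
print(backup_ok(datetime(2003, 11, 2, 1, 0), now))   # False
print(backup_ok(None, now))                          # False
```

This sidesteps all four failure modes above: a late manual backup still counts, a failed TDP run doesn't, crontab-driven nodes are covered, and a 36-hour window can be tightened per node.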
-
Mr. Lindsay Morris
Lead Architect
www.servergraph.com
512-482-6138 ext 105

 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] Behalf Of
 James Choate
 Sent: Wednesday, November 05, 2003 8:23 AM
 To: [EMAIL PROTECTED]
 Subject: Query Event for Previous day shows status In Progress
 
 
 Can someone tell me what is going on with a client(s) that has a 
 status of In Progress when I query the event?  When I query the 
 event for the previous day, the status indicates that the event 
 is in progress.  When I query the event for the current day, the 
 event status shows future.  When I query the session and the 
 process on the tsm server, I do not see anything going on with 
 respect to this event.  I have attached the output.  Any 
 help/guidance is appreciated. 
  
 
 tsm: TSM> q event nt_servers nt_backups begint=00:00 begind=11/03/2003

 Scheduled Start      Actual Start         Schedule Name  Node Name    Status
 -------------------  -------------------  -------------  -----------  -----------
 11/03/03   22:30:00  11/03/03   22:45:00  NT_BACKUPS     IBS          In Progress
 11/03/03   22:30:00  11/04/03   00:18:12  NT_BACKUPS     FS_VSI       Completed
 11/03/03   22:30:00  11/03/03   22:30:41  NT_BACKUPS     PLUS_2000    In Progress
 11/03/03   22:30:00  11/04/03   00:15:34  NT_BACKUPS     ID_CARD      Completed
 11/03/03   22:30:00  11/04/03   00:16:36  NT_BACKUPS     FS_DEVII     Completed
 11/03/03   22:30:00  11/04/03   00:15:41  NT_BACKUPS     POWERFAIDS   Completed
 11/03/03   22:30:00  11/04/03   00:15:44  NT_BACKUPS     TPG          Completed
 11/03/03   22:30:00  11/04/03   00:17:13  NT_BACKUPS     FS_VSJ       Completed
 11/03/03   22:30:00  11/04/03   00:16:42  NT_BACKUPS     FS-HEAT      Completed
 11/03/03   22:30:00  11/03/03   22:30:33  NT_BACKUPS     FS-LIB       In Progress
 11/03/03   22:30:00  11/04/03   00:07:09  NT_BACKUPS     TNVOICE      Completed
 11/03/03   22:30:00  11/04/03   00:16:34  NT_BACKUPS     FS_ACT       Completed
 11/03/03   22:30:00  11/03/03   23:16:54  NT_BACKUPS     MERLIN2      In Progress
 11/03/03   22:30:00  11/03/03   22:40:38  NT_BACKUPS     MERCYWEBSRV  In Progress
 11/03/03   22:30:00  11/04/03   00:28:27  NT_BACKUPS     W2KDC        Completed
 11/03/03   22:30:00  11/04/03   00:15:36  NT_BACKUPS     FS-XTD       Completed

 tsm: TSM> q sess

   Sess  Comm.   Sess   Wait  Bytes    Bytes  Sess   Platform  Client Name
 Number  Method  State  Time  Sent     Recvd  Type
 ------  ------  -----  ----  -------  -----  -----  --------  -----------
      1  ShMem   Run    0 S   112.8 K  139    Admin  AIX       ADMIN
 17,411  ShMem   Run    0 S   45.9 M   139    Admin  AIX       ADMIN
 19,029  ShMem   Run    0 S   1.0 M    1,018  Admin  AIX       UHO

 tsm: TSM> q pr

  Process  Process Description  Status
   Number
 -------------------------------------
    2,917  Space Reclamation

Re: TSMManager vs. TSM Operational Reporting

2003-06-30 Thread Mr. Lindsay Morris
The Operational Reporting beta-testers (of which we are one)
work under a confidentiality agreement, so detailed comparisons
might not be available.  But to us, the two products look pretty similar.

-
Mr. Lindsay Morris
Lead Architect
www.servergraph.com
512-482-6138 ext 105

 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] Behalf Of
 Gray, Alastair
 Sent: Monday, June 30, 2003 9:48 AM
 To: [EMAIL PROTECTED]
 Subject: TSMManager vs. TSM Operational Reporting


 Has anyone out there done a comparison of TSMManager against TSM
 Operational
 Reporting (still in Beta), from which they might be willing to
 share some of
 the results?

 Question prompted by short time scales and a steep learning curve.

 Alastair Gray
 Systems Type



Re: A query and some code...

2003-06-26 Thread Mr. Lindsay Morris
Allen, 2 points:
1. Reclamation can busy out your tape drives in a very artificial way.
That is, if you set the reclamation threshold very aggressively, the library
will copy tape-to-tape continually, trying to squeeze out wasted space.

2. If mount retention is set to 60 minutes (the default), then every mount
looks up to an hour longer than it really is.

But assuming everybody uses the standard reclamation threshold of 60/40, and
has reset mount retentions, then we have dozens of samples from customers
showing average percent mount rates.
Without going back through them, 65% is on the high end, 15% the low end.

If there's enough interest, I'll go count.

-
Mr. Lindsay Morris
Lead Architect
www.servergraph.com
512-482-6138 ext 105

 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] Behalf Of
 [EMAIL PROTECTED]
 Sent: Thursday, June 26, 2003 1:40 PM
 To: [EMAIL PROTECTED]
 Subject: A query and some code...


 I'm trying to collect data on what average utilization rates are on tape
 devices.  I'm attempting to support or erode the case "We need more" :)

 One thing that would be an interesting number (I decided) was the percent
 utilization of tape mount points.  So I whipped out my PERL, and...

 Submitted for your approval (and hopefully execution!) please
 find enclosed
 mountpct, a script that takes the output of a 'q actlog', and
 makes a rough
 calculation of the tape mount time available and used.

 I start counting with the first recorded mount, and stop at the
 last recorded
 unmount.

 Anyone who's willing, please consider running this against a
 sizable chunk of
 q actlog with tab formatting.  I collect the actlogs every day
 and ferret
 them away, so I've got gobs of them, but you can do something like


 dsmadmc -id=[something]  \
 -password=[something]  \
 -tab  \
 "q actlog begint=00:00 begind=-14 days" \
 | ./mountpct


 here's mine...  (I'm only retaining 10 days at the moment)

 DRIVE_A: 12351.100 minutes.
 DRIVE_B: 11503.333 minutes.
 DRIVE_C: 11421.667 minutes.
 DRIVE_D: 11531.550 minutes.
  Earliest mount occurred : Mon Jun 16  9:00:12 EDT 2003
  Last unmount occurred : Thu Jun 26 14:19:48 EDT 2003
  Actlog covers 14719.600 minutes.
  With 4 devices, 58878.4 device-minutes available.
  of which 46807.65 used. (79.499 %)




 - Allen S. Rout






 #!/usr/local/bin/perl -- -*-Perl-*-

 use Time::ParseDate;
 use Time::CTime;

 my $verbose = 1;
 my $spans = {};
 my $working = {};
 my $firstmount  = 2**60;   # sentinel: later than any real mount time
 my $lastunmount = 0;

 while (<>)
   {
 #print;
 next unless /ANR8468I|ANR8337I/;

 my @f = split(/\s+/);
 my $t = parsedate("$f[0] $f[1]");
 my $to = ctime($t);

 if ($f[2] eq "ANR8337I")
   {# Mount req.
 $drive = $f[9];
 if ($working->{$drive})
   { warn "dual mount recorded?\n"; }
 else
   {
 $working->{$drive} = $t;
 $firstmount = $t if ( $t < $firstmount ) ;
   }
   }
 elsif ( $f[2] eq "ANR8468I")
   {# Dismount
 $drive = $f[9];
 if ($working->{$drive})
   {
 my $begin = $working->{$drive};
 my $end = $t;
 $lastunmount = $t if ($t > $lastunmount);
 my $run = $end - $begin;
 $spans->{$drive} = 0 unless defined ($spans->{$drive});
 $spans->{$drive} += $run;
 delete $working->{$drive}
   }
 else
   { warn "dual (or initial) dismount recorded?\n";  }

   }
 else
   {
 #Huh?
 die "How'd we get here?\n";
   }
   };


 my $count = 0;
 my $tmin = 0;

 foreach $s ( sort keys %$spans )
   {
 $count++;
 my $c = $spans->{$s};
 my $min = sprintf("%.3f", $c / 60);

 $tmin += $min;
 print "$s: $min minutes.\n";

   }

 print "  Earliest mount occurred : ", ctime($firstmount);
 print "  Last unmount occurred : ", ctime($lastunmount);

 my $range;

 $range = $lastunmount - $firstmount;
 $range /= 60;
 $range = sprintf("%.3f", $range);

 my $pct;

 $pct = sprintf("%.3f", $tmin/($range * $count) * 100);


 print "  Actlog covers $range minutes.\n";
 print "  With $count devices, ", $range * $count, " device-minutes available.\n";
 print "  of which $tmin used. ($pct %)\n";


 sub pm
   {
 my $n = shift;
   }



Re: Verify lan-free transfer???

2003-06-24 Thread Mr. Lindsay Morris
Look in the accounting logs.
If it went LAN-free, there'll be a record for that session in the
dsmaccnt.log file on the CLIENT;
If it went over the LAN, there'll be a record for that session in the
dsmaccnt.log file on the SERVER.
These files usually live in /usr/tivoli/tsm/server/bin or something like
that.

You can use our viewacct program to turn the accounting log data into
something more readable.
http://www.servergraph.com/techtip3.htm
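Another quick check uses the ANE4971I "LanFree data bytes" summary line already visible in Joni's activity-log output below: if a session moved data LAN-free, that counter should be non-zero. A sketch of pulling the number out:

```python
import re

# Pull the byte count out of an ANE4971I "LanFree data bytes" line.
# Returns bytes, or None if the line isn't that summary line.
LANFREE = re.compile(
    r"ANE4971I.*LanFree data\s+bytes:\s*([\d.,]+)\s*(B|KB|MB|GB)")

def lanfree_bytes(line):
    m = LANFREE.search(line)
    if not m:
        return None
    value = float(m.group(1).replace(",", ""))
    scale = {"B": 1, "KB": 1024, "MB": 1024**2, "GB": 1024**3}
    return value * scale[m.group(2)]

line = ("06/20/03 08:59:30 ANE4971I (Session: 4406, Node: PGSU017) "
        "LanFree data bytes: 0 B")
print(lanfree_bytes(line))   # 0.0 -> this session did NOT go LAN-free
```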

Hope this helps.
-
Mr. Lindsay Morris
Lead Architect
www.servergraph.com
512-482-6138 ext 105

 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] Behalf Of
 Joni Moyer
 Sent: Tuesday, June 24, 2003 8:04 AM
 To: [EMAIL PROTECTED]
 Subject: Verify lan-free transfer???


 Hello everyone!

 I am still having issues with the lan-free verification.  I cannot tell if
 the data went lan-free or if it just went directly to tape due to the
 management class that was specified.  Below I have included the activity
 log and output from the storage agent.   It seems that the storage agent
 then starts a session and mounts a tape to the LTO2_Drive-1 (2st), but I
 still am not sure how to tell if it went lanfree.  Shouldn't lanfree data
 bytes have the approximate size of the file?  In our output it is 0B.
 Thank you for any suggestions you may have!

 06/20/03 08:57:56 ANR0406I Session 4406 started for node PGSU017 (SUN SOLARIS) (Tcp/Ip 157.154.43.36(44700)).
 06/20/03 08:57:58 ANR0406I Session 4407 started for node PGSU017 (SUN SOLARIS) (Tcp/Ip 157.154.43.36(44702)).
 06/20/03 08:57:58 ANR0406I (Session: 4297, Origin: STORAGENT) Session 10 started for node PGSU017 (SUN SOLARIS) (Tcp/Ip 127.0.0.1(44703)).
 06/20/03 08:58:02 ANR0408I Session 4408 started for server STORAGENT (Solaris 2.6/7/8) (Tcp/Ip) for library sharing.
 06/20/03 08:58:02 ANR8336I Verifying label of LTO volume 331ABJ in drive LTO2_DRIVE-1 (/dev/rmt/3st).
 06/20/03 08:58:02 ANR8468I (Session: 4297, Origin: STORAGENT) LTO volume 331ABJ dismounted from drive LTO2_DRIVE-1 (/dev/rmt/2st) in library 3584_LTO2.
 06/20/03 08:58:02 ANR0409I Session 4408 ended for server STORAGENT (Solaris 2.6/7/8).
 06/20/03 08:58:03 ANR0408I Session 4409 started for server STORAGENT (Solaris 2.6/7/8) (Tcp/Ip) for library sharing.
 06/20/03 08:58:03 ANR8337I (Session: 4297, Origin: STORAGENT) LTO volume 331ABJ mounted in drive LTO2_DRIVE-1 (/dev/rmt/2st).
 06/20/03 08:58:03 ANR0409I Session 4409 ended for server STORAGENT (Solaris 2.6/7/8).
 06/20/03 08:58:37 ANR0408I Session 4410 started for server STORAGENT (Solaris 2.6/7/8) (Tcp/Ip) for library sharing.
 06/20/03 08:58:37 ANR0409I Session 4410 ended for server STORAGENT (Solaris 2.6/7/8).
 06/20/03 08:59:28 ANR0403I Session 4407 ended for node PGSU017 (SUN SOLARIS).
 06/20/03 08:59:29 ANR0403I (Session: 4297, Origin: STORAGENT) Session 10 ended for node PGSU017 (SUN SOLARIS).
 06/20/03 08:59:30 ANE4952I (Session: 4406, Node: PGSU017) Total number of objects inspected: 286
 06/20/03 08:59:30 ANE4953I (Session: 4406, Node: PGSU017) Total number of objects archived: 284
 06/20/03 08:59:30 ANE4958I (Session: 4406, Node: PGSU017) Total number of objects updated: 0
 06/20/03 08:59:30 ANE4960I (Session: 4406, Node: PGSU017) Total number of objects rebound: 0
 06/20/03 08:59:30 ANE4957I (Session: 4406, Node: PGSU017) Total number of objects deleted: 0
 06/20/03 08:59:30 ANE4970I (Session: 4406, Node: PGSU017) Total number of objects expired: 0
 06/20/03 08:59:30 ANE4959I (Session: 4406, Node: PGSU017) Total number of objects failed: 0
 06/20/03 08:59:30 ANE4961I (Session: 4406, Node: PGSU017) Total number of bytes transferred: 774.74 MB
 06/20/03 08:59:30 ANE4971I (Session: 4406, Node: PGSU017) LanFree data bytes: 0 B
 06/20/03 08:59:30 ANE4963I (Session: 4406, Node: PGSU017) Data transfer time: 40.23 sec
 06/20/03 08:59:30 ANE4966I (Session: 4406, Node: PGSU017) Network data transfer rate: 19,716.33 KB/sec
 06/20/03 08:59:30 ANE4967I (Session: 4406, Node: PGSU017) Aggregate data transfer rate
Re: Verify lan-free transfer???

2003-06-24 Thread Mr. Lindsay Morris
My advice about using accounting logs to verify lan-free or not has nothing
to do with Servergraph.  We're big fans of TSM - I hope that advice was
helpful to Joni.

The viewacct script is a free offering - it helps to see what's in the
accounting log.
Yes, it's on our website - I'm too lazy to cut and paste it into each email.
Maybe I should put it on coderelief, or adsm.org somewhere.

-
Mr. Lindsay Morris
Lead Architect
www.servergraph.com
512-482-6138 ext 105

 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] Behalf Of
 Lambelet,Rene,VEVEY,GL-CSC
 Sent: Tuesday, June 24, 2003 10:27 AM
 To: [EMAIL PROTECTED]
 Subject: Re: Verify lan-free transfer???


 Hi Lindsay,

  I appreciate your help in this forum. But this place should not be used to
  sell any goods (Servergraph, for instance); it is a technical forum.

 Regards,

  René LAMBELET
  NESTEC SA
  GLOBE - Global Business Excellence
  Central Support Center
  Information Technology
  Av. Nestlé 55  CH-1800 Vevey (Switzerland)
  tél +41 (0)21 924 35 43   fax +41 (0)21 703 30 17   local K4-404
  mailto:[EMAIL PROTECTED]


 -Original Message-
 From: Mr. Lindsay Morris [mailto:[EMAIL PROTECTED]
 Sent: Tuesday,24. June 2003 17:17
 To: [EMAIL PROTECTED]
 Subject: Re: Verify lan-free transfer???


 Look in the accounting logs.
 If it went LAN-free, there'll be a record for that session in the
 dsmaccnt.log file on the CLIENT;
 If it went over the LAN, there'll be a record for that session in the
 dsmaccnt.log file on the SERVER.
 These files usually live in /usr/tivoli/tsm/server/bin or something like
 that.

 You can use our viewacct program to turn the accounting log data into
 something more readable.
 http://www.servergraph.com/techtip3.htm
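As a sketch of the check described above: a session that ran LAN-free should have a record in the CLIENT's accounting log, and a LAN session a record in the SERVER's. The log paths and the comma-delimited record layout are assumptions here; verify them against your own dsmaccnt.log files.

```shell
#!/bin/sh
# Hedged sketch: report whether a given session number was accounted
# on the client (LAN-free) or on the server (LAN).  The comma-separated
# field layout is an assumption about dsmaccnt.log format.
session_side() {
  sess=$1; client_log=$2; server_log=$3
  if grep -q ",$sess," "$client_log" 2>/dev/null; then
    echo "LAN-free (record on client)"
  elif grep -q ",$sess," "$server_log" 2>/dev/null; then
    echo "LAN (record on server)"
  else
    echo "session $sess not found in either log"
  fi
}
```

Usage would be something like: session_side 4406 /client/dsmaccnt.log /server/dsmaccnt.log.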

 Hope this helps.
 -
 Mr. Lindsay Morris
 Lead Architect
 www.servergraph.com
 512-482-6138 ext 105

  -Original Message-
  From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] Behalf Of
  Joni Moyer
  Sent: Tuesday, June 24, 2003 8:04 AM
  To: [EMAIL PROTECTED]
  Subject: Verify lan-free transfer???
 
 
  Hello everyone!
 
  I am still having issues with the lan-free verification.  I
 cannot tell if
  the data went lan-free or if it just went directly to tape due to the
  management class that was specified.  Below I have included the activity
  log and output from the storage agent.   It seems that the storage agent
  then starts a session and mounts a tape to the LTO2_Drive-1 (2st), but I
  still am not sure how to tell if it went lanfree.  Shouldn't
 lanfree data
  bytes have the approximate size of the file?  In our output it is 0B.
  Thank you for any suggestions you may have!
 
   06/20/03 08:57:56 ANR0406I Session 4406 started for node PGSU017 (SUN SOLARIS) (Tcp/Ip 157.154.43.36(44700)).

   06/20/03 08:57:58 ANR0406I Session 4407 started for node PGSU017 (SUN SOLARIS) (Tcp/Ip 157.154.43.36(44702)).

   06/20/03 08:57:58 ANR0406I (Session: 4297, Origin: STORAGENT)  Session 10 started for node PGSU017 (SUN SOLARIS) (Tcp/Ip 127.0.0.1(44703)).

   06/20/03 08:58:02 ANR0408I Session 4408 started for server STORAGENT (Solaris 2.6/7/8) (Tcp/Ip) for library sharing.

   06/20/03 08:58:02 ANR8336I Verifying label of LTO volume 331ABJ in drive LTO2_DRIVE-1 (/dev/rmt/3st).

   06/20/03 08:58:02 ANR8468I (Session: 4297, Origin: STORAGENT)  LTO volume 331ABJ dismounted from drive LTO2_DRIVE-1 (/dev/rmt/2st) in library 3584_LTO2.

   06/20/03 08:58:02 ANR0409I Session 4408 ended for server STORAGENT (Solaris 2.6/7/8).

   06/20/03 08:58:03 ANR0408I Session 4409 started for server STORAGENT (Solaris 2.6/7/8) (Tcp/Ip) for library sharing.

   06/20/03 08:58:03 ANR8337I (Session: 4297, Origin: STORAGENT)  LTO volume 331ABJ mounted in drive LTO2_DRIVE-1 (/dev/rmt/2st).

   06/20/03 08:58:03 ANR0409I Session 4409 ended for server STORAGENT (Solaris 2.6/7/8).

   06/20/03 08:58:37 ANR0408I Session 4410 started for server STORAGENT (Solaris 2.6/7/8) (Tcp/Ip) for library sharing.

   06/20/03 08:58:37 ANR0409I Session 4410 ended for server STORAGENT (Solaris 2.6/7/8).

   06/20/03 08:59:28 ANR0403I Session 4407 ended for node PGSU017 (SUN SOLARIS).

   06/20/03 08:59:29 ANR0403I (Session: 4297, Origin: STORAGENT)  Session 10 ended for node PGSU017 (SUN SOLARIS).

   06/20/03 08:59:30 ANE4952I (Session: 4406

Re: Work arounds for files deleted in flight?

2003-06-23 Thread Mr. Lindsay Morris
Chuck, would a missed-files report that grouped them by reason-missed help
you?
eg:
Files Missed for Nodexxx
Not found:  562
Changing:   2
Locked: 132

Then of course you'd want to be able to click each line, and drill down
to see the actual file names.  If you did that, do you think you'd want
to see (for the Locked list, say) all 132 files, or just the file NAMES
that differ - that is, cut off the directory part?
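A grouped report like the one sketched above could be produced with something along these lines. The three match patterns are placeholders; the exact ANS/ANE message wording varies by client level, so match them to what your dsmsched.log actually says.

```shell
#!/bin/sh
# Hedged sketch: tally missed-file reasons from a dsmsched.log read on
# stdin.  The patterns below are illustrative, not the literal TSM
# message texts.
missed_summary() {
  awk '
    /file not found/        { nf++ }
    /changed during backup/ { ch++ }
    /file is locked|in use/ { lk++ }
    END {
      printf "Not found: %d\n", nf
      printf "Changing:  %d\n", ch
      printf "Locked:    %d\n", lk
    }'
}
```

Run as: missed_summary < dsmsched.log.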


-
Mr. Lindsay Morris
Lead Architect
www.servergraph.com
512-482-6138 ext 105

 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] Behalf Of
 Chuck Mattern
 Sent: Monday, June 23, 2003 7:55 AM
 To: [EMAIL PROTECTED]
 Subject: Work arounds for files deleted in flight?


  Ever since we transitioned from adsm v3 to adsm v4 we have encountered an
  extremely high failure rate.  Essentially, when adsm went from backing up
  Unix filesystems like tar (see a file; get a file) to doing it like dump
  (build a list of files; go back and back up the list), we began taking
  what I do not consider true failures.  Since we do not have the ability
  to quiesce our systems for backup, many files that adsm identifies as
  backup candidates are deleted before they can be backed up.  To avoid
  wasting many hours of engineer time logging into several hundred servers
  to investigate this, I am writing a Perl utility to parse the logs,
  totalling the "file not found" failures and only reporting a failure back
  to us if there are more errors than the total number of "file not found"
  errors.  I took the issue up with ADSM support and essentially got
  "that's the way it is now, sorry."  Is anyone else having problems like
  this, and if so, can you offer any better solutions than the one I am
  working on?

 Thanks,
 Chuck

 Chuck Mattern
 The Home Depot
 Phone: 770-433-8211 x11919
 Email: [EMAIL PROTECTED]
 Pager: 770-201-1626



Re: SELECT equivalent for QUERY EVENT

2003-06-23 Thread Mr. Lindsay Morris
Richard, thanks for the performance note re select vs. query.

IMHO, a larger concern, if you're writing programs to analyze TSM's data, is
that the layout of QUERY output may change when you upgrade TSM, but the
field names in the database won't.  So SELECT-based programs won't break
when you upgrade TSM.

-
Mr. Lindsay Morris
Lead Architect
www.servergraph.com
512-482-6138 ext 105

 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] Behalf Of
 Richard Sims
 Sent: Monday, June 23, 2003 2:37 PM
 To: [EMAIL PROTECTED]
 Subject: Re: SELECT equivalent for QUERY EVENT


 as our VM mainframe is soon going to die I'm rewriting all my CMS REXX
 programs using REGINA and REXXSQL on Windows XP.
 As I have to rewrite the code I would like to brush it up and replace
 those QUERYs with SELECTs.

 Thomas - A couple of observations...

  - I would caution against going to Selects, which seem the chic thing
    to do these days.  Query commands are more optimized for directly going
    into the TSM database to rapidly return results: Selects go through an
    artificial SQL construct on top of the actual B-tree database and
    entail considerable overhead - including database workspace.
    For example, contrast a Query CONTent with Select ... From Contents.
    Query commands are not passé: they are simply pre-programmed and that
    much less flexible.

 - Your VM mainframe went away because of obsolescence.  REXX is in the
   same category.  (And I say this having been a VM guy and huge fan of
   REXX.)  I would encourage you to get into Perl, which is available on
   every platform, and is now intrinsically supplied with more open
   operating systems these days.

 Richard Sims, BU



Re: Work arounds for files deleted in flight?

2003-06-23 Thread Mr. Lindsay Morris
Tony, doesn't the Unexpected events page in Servergraph let you click on
each of the ANE messages, and see every individual filename that was
missed / retried/ locked / etc?

I initially responded because I was looking for advice about how we could
better do this, but I think the way we do it now is exactly what you want.

Contact me off-line if not, and we'll discuss.
Thanks.
-
Mr. Lindsay Morris
Lead Architect
www.servergraph.com
512-482-6138 ext 105

 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] Behalf Of
 Garrison, Tony
 Sent: Monday, June 23, 2003 3:33 PM
 To: [EMAIL PROTECTED]
 Subject: Re: Work arounds for files deleted in flight?


 Sounds good to me also.

 Anthony A. Garrison Jr.
 Sr. Systems Programmer
 USAA
  (210) 456-5755

  -Original Message-
 From:   Chuck Mattern [mailto:[EMAIL PROTECTED]
 Sent:   Monday, June 23, 2003 11:34 AM
 To: [EMAIL PROTECTED]
 Subject:Re: Work arounds for files deleted in flight?

 Lindsay,

 This sounds great.  We would need full path names to evaluate the
 consequences of the event.  Is something like this already
 floating around?

 Chuck Mattern
 The Home Depot
 Phone: 770-433-8211 x11919
 Email: [EMAIL PROTECTED]
 Pager: 770-201-1626



   From: Mr. Lindsay Morris [EMAIL PROTECTED]
   Sent by: ADSM: Dist Stor Manager [EMAIL PROTECTED]
   06/23/2003 09:22 AM
   Please respond to lmorris
   To: [EMAIL PROTECTED]
   cc:
   Subject: Re: Work arounds for files deleted in flight?

 Chuck, would a missed-files report that grouped them by reason-missed help
 you?
 eg:
 Files Missed for Nodexxx
 Not found:  562
 Changing:   2
 Locked: 132

 Then of course you'd want to be able to click each line, and drill down
 to see the actual file names.  If you did that, do you think you'd want
  to see (for the Locked list, say) all 132 files, or just the file NAMES
 that differ - that is, cut off the directory part?


 -
 Mr. Lindsay Morris
 Lead Architect
 www.servergraph.com
 512-482-6138 ext 105

  -Original Message-
  From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] Behalf Of
  Chuck Mattern
  Sent: Monday, June 23, 2003 7:55 AM
  To: [EMAIL PROTECTED]
  Subject: Work arounds for files deleted in flight?
 
 
  Ever since we transitioned from adsm v3 to adsm v4 we have
 encountered an
  extremely high failure rate.  Essentially when adsm went from backing
 up
   Unix filesystems like tar (see a file; get a file) to doing it like dump
  (build a list of files; go back and backup the list) we began
  taking what I
  do not consider true failures.  Since we do not have the ability
  to quiesce
  our systems for backup many files that adsm identifies as backup
  candidates
   are deleted before they can be backed up.  To avoid wasting many hours of
  engineer time logging into several hundred servers to investigate
  this I am
  writing a Perl utility to parse the logs, totalling the file not found
  failures and only reporting a failure back to us if there are
 more errors
  than the total number of file not found errors.  I took the
  issue up with
   ADSM support and essentially got "that's the way it is now, sorry."  Is
   anyone else having problems like this and if so can you offer any better
  solutions than the one I am working on?
 
  Thanks,
  Chuck



Re: Schedule for the last day of the month...every month

2003-06-20 Thread Mr. Lindsay Morris
Curtis, if I may ask: how can you monitor backup status (ie success or
failure of all your nodes) if you use autosys?

-
Mr. Lindsay Morris
Lead Architect
www.servergraph.com
512-482-6138 ext 105

 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] Behalf Of
 Curtis Stewart
 Sent: Thursday, June 19, 2003 11:10 PM
 To: [EMAIL PROTECTED]
 Subject: Re: Schedule for the last day of the month...every month


 We've just decided to get rid of the whole TSM scheduler for the most
 part and go with an external schedular for TSM backups (AutoSys). It
 fixes the whole last day of the month, or quarter or whatever problem
 easily. It also lets us quit using the TSM scheduler on our clients.
 This may not be the right answer for you, but it looks like it's going
 to work fine for us.

 curtis stewart

 Kent Monthei wrote:

  You might be able to develop & schedule a little script which
 a) does a 'delete schedule'
 b) goes through a loop that performs a 'define schedule' for the
 31st/30th/29th/28th (in that order)
 c) after each 'define schedule' attempt, checks the Return Code
 (or output of 'q sched')
 d) exit if/when the 'define schedule' is successful.
 
 Then schedule the script to run on any of the 1st through 28th day of the
 month.
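Kent's trial loop works because only the real last day will define successfully. A wrapper script could also compute the day directly, as in this sketch. days_in_month is an illustrative helper of mine, not a TSM command; the dsmadmc "define schedule ... startdate=MM/DD/YYYY" call it would feed is left out.

```shell
#!/bin/sh
# Hedged sketch: compute the last day of a given month so a wrapper
# could issue a single "define schedule" instead of trying 31/30/29/28.
# Usage: days_in_month MONTH YEAR  (e.g. days_in_month 6 2003 -> 30)
days_in_month() {
  m=$1; y=$2
  case $m in
    1|3|5|7|8|10|12) echo 31 ;;
    4|6|9|11)        echo 30 ;;
    2) # Gregorian leap-year rule
       if [ $((y % 4)) -eq 0 ] && { [ $((y % 100)) -ne 0 ] || [ $((y % 400)) -eq 0 ]; }
       then echo 29
       else echo 28
       fi ;;
  esac
}
```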
 
 Kent Monthei
 GlaxoSmithKline
 
 
 
 
 
 Bill Boyer [EMAIL PROTECTED]
 
 Sent by: ADSM: Dist Stor Manager [EMAIL PROTECTED]
 19-Jun-2003 13:14
 Please respond to ADSM: Dist Stor Manager [EMAIL PROTECTED]
 
 
 
 
  To: ADSM-L
  cc:
  Subject: Schedule for the last day of the month...every month
 
 I have a client that wants to do monthly backups on the last day of the
 month. A co-worker did some testing, creating a schedule for, say,
 5/31/2003 with PERU=MONTH. The June event gets scheduled for 6/30, but
 then
 remains the 30th day from then on. Until Feb next year when it moves to
 the
 29th.
 
 Outside of creating a schedule for each month with a PERU=YEAR,
 is there a
 way to do a schedule on the last day of every month??
 
 TIA,
 Bill Boyer
 Some days you are the bug, some days you are the windshield. - ??
 
 
 



Re: Schedule for the last day of the month...every month

2003-06-20 Thread Mr. Lindsay Morris
Thanks, Alex - brilliant idea - and that's how Servergraph already does it.
We give the TSM admins visibility into the status, checked daily, of how old
the backups are.
So they need not care about schedules - just whether things are current or
not.

The really great side benefit from this is finding wasted space.
That is, if a filespace has not been backed up in 90 days, does it still exist?
Probably not, except on TSM tapes.  Delete it.

We had one customer free up 4 TB of defunct filespaces.  Anybody who's short
on library space
should check for that one way or another.
(IMHO)

-
Mr. Lindsay Morris
Lead Architect
www.servergraph.com
512-482-6138 ext 105

 -Original Message-
 From: Alex Paschal [mailto:[EMAIL PROTECTED]
 Sent: Friday, June 20, 2003 12:25 PM
 To: '[EMAIL PROTECTED]'; '[EMAIL PROTECTED]'
 Subject: RE: Schedule for the last day of the month...every month


 Hi, Lindsay.

 I would suggest also monitoring backup status using a select statement to
 pull out filespaces that didn't complete backup within x amount of time,
  even if you use the TSM scheduler.  This is usable with autosys.  Also,
 autosys scripts can report based on RC, and since we now have meaningful
 RC's from the client, that probably provides autosys reasonable
 success/failure notification.  My autosys controlled backups email me on
 failure, but I'm using scripts to parse the dsmc output for some weird
 things, rather than using the dsmc RC.
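A minimal sketch of such a select, assuming the FILESPACES table and its BACKUP_END column behave as on 5.x servers; verify the column names against your server level (e.g. with "select * from columns where tablename='FILESPACES'") before relying on it:

```
select node_name, filespace_name, backup_end
  from filespaces
 where days(current_timestamp) - days(backup_end) > 1
```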

 The problem we run into is that our TSM admins don't have visibility into
 the Autosys schedules for load balancing purposes.  I'm still trying to
 figure that one out.  Maybe a weekly autosys report?  Hmm...

 Alex Paschal
 Freightliner, LLC
 (503) 745-6850 phone/vmail


 -Original Message-
 From: Mr. Lindsay Morris [mailto:[EMAIL PROTECTED]
 Sent: Friday, June 20, 2003 12:25 AM
 To: [EMAIL PROTECTED]
 Subject: Re: Schedule for the last day of the month...every month


 Curtis, if I may ask: how can you monitor backup status (ie success or
 failure of all your nodes) if you use autosys?

 -
 Mr. Lindsay Morris
 Lead Architect
 www.servergraph.com
 512-482-6138 ext 105

  -Original Message-
  From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] Behalf Of
  Curtis Stewart
  Sent: Thursday, June 19, 2003 11:10 PM
  To: [EMAIL PROTECTED]
  Subject: Re: Schedule for the last day of the month...every month
 
 
  We've just decided to get rid of the whole TSM scheduler for the most
   part and go with an external scheduler for TSM backups (AutoSys). It
  fixes the whole last day of the month, or quarter or whatever problem
  easily. It also lets us quit using the TSM scheduler on our clients.
  This may not be the right answer for you, but it looks like it's going
  to work fine for us.
 
  curtis stewart
 
  Kent Monthei wrote:
 
   You might be able to develop & schedule a little script which
  a) does a 'delete schedule'
  b) goes through a loop that performs a 'define
 schedule' for the
  31st/30th/29th/28th (in that order)
  c) after each 'define schedule' attempt, checks the Return Code
  (or output of 'q sched')
  d) exit if/when the 'define schedule' is successful.
  
  Then schedule the script to run on any of the 1st through 28th
 day of the
  month.
  
  Kent Monthei
  GlaxoSmithKline
  
  
  
  
  
  Bill Boyer [EMAIL PROTECTED]
  
  Sent by: ADSM: Dist Stor Manager [EMAIL PROTECTED]
  19-Jun-2003 13:14
  Please respond to ADSM: Dist Stor Manager [EMAIL PROTECTED]
  
  
  
  
  To: ADSM-L
  
  cc:
   Subject: Schedule for the last day of the month...every month
  
  I have a client that wants to do monthly backups on the last day of the
  month. A co-worker did some testing, creating a schedule for, say,
  5/31/2003 with PERU=MONTH. The June event gets scheduled for 6/30, but
  then
  remains the 30th day from then on. Until Feb next year when it moves to
  the
  29th.
  
  Outside of creating a schedule for each month with a PERU=YEAR,
  is there a
  way to do a schedule on the last day of every month??
  
  TIA,
  Bill Boyer
  Some days you are the bug, some days you are the windshield. - ??
  
  
  
 




Re: TSM Decision support 4.2.0 tools

2003-06-12 Thread Mr. Lindsay Morris
Gee, we've talked to a lot of people about TDS, of course -
it used to be considered competition for Servergraph -
and most say it's
- hard to install
- hurts performance on the TSM server
- doesn't give them the info they want.

You might want some comments from former TDS users before you go too far
down this path.
-
Mr. Lindsay Morris
Lead Architect
www.servergraph.com
512-482-6138 ext 105

 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] Behalf Of
 Lawrence Clark
 Sent: Thursday, June 12, 2003 1:24 PM
 To: [EMAIL PROTECTED]
 Subject: TSM Decision support 4.2.0 tools


 Problem:
 Loaded TSM Decision Support V4.2.0, configured it, and populated
 a MSSQL DB with the 1st set of data.

 Tried installing Decision Support for Storage Management Analysis V
 4.2.0 and I get
 message: UNABLE TO FIND REGISTRY ENTRY FOR TDS GUIDES CANNOT BE
 INSTALLED.

 Workaround anyone?
 Also, anyone use the loader and analysis tools, and any comments?





 Larry Clark
 NYS Thruway Authority
 (518)-471-4202
 Certified:
 Aix 4.3 System Administration
 Aix 4.3 System Support
 Tivoli ADSM/TSM V 3 Consultant



Re: select to compare filespace usage with backup storage used

2003-06-09 Thread Mr. Lindsay Morris
Servergraph takes a hog_factor reading daily (along with other readings) -
for each client node, it's:
total GB used in all primary storage pools
divided by
total GB used on the client's hard disk

We get this from q filespace and q occ f=d (basically),
and then doing some crunching on the data.  Your query
might be simpler if you can use q auditocc - but we can't
depend on that, because some people may not run
the audit license command frequently enough to keep it
up-to-date.

It's always fascinating to see a new site's list of clients,
sorted by hog_factor, with one or two at the top in the hundreds -
i.e., the client uses a hundred times more space in TSM than it owns
locally.

Often this is due to unnecessary archiving, or TDP agent failing to
delete old backups.


-
Mr. Lindsay Morris
Lead Architect
www.servergraph.com
512-482-6138 ext 105

 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] Behalf Of
 Steve Bennett
 Sent: Monday, June 09, 2003 6:41 PM
 To: [EMAIL PROTECTED]
 Subject: select to compare filespace usage with backup storage used


 TSMers,

  I was thinking about trying to write a select that would let me compare
  a client's known filespace usage (q file) with the amount of TSM backup
  storage used by the client (q occ). Such a select could give us a %
  number that we could use to estimate a client's storage needs before it
  is actually registered and backed up. Of course the % would be highly
  dependent upon policies, etc. but would be a good starting point for
  clients using our standard policy set and management classes.

 Not wanting to unnecessarily beat myself up, have any of you written
 such a select and would you be willing to share your code and/or ideas?

 --

 Steve Bennett, (907) 465-5783
 State of Alaska, Information Technology Group, Technical Services
 Section



Re: What is the game plan for mixing clients on the same server?

2002-12-09 Thread Mr. Lindsay Morris
Well, IMHO:

Just because you CAN have lots of different domains does NOT mean that you
SHOULD.
For simplicity's sake, many sites can have ONE domain and ONE schedule that
will work OK for most nodes.

I don't think you need to be concerned about NT users accidentally restoring
AIX stuff - they have to try pretty hard (using VIRTUALNODENAME, etc) to
have one machine recover another machine's files.

If you want to separate includes and excludes by OS, you should use client
option sets for that, not domains.

Where you MIGHT want to start breaking stuff into different domains is when
you have legal or technical retention policies: like, 7-year-legal
requirement here, keep-many-versions for the developers there, etc.  This is
because the most important thing you get by putting a client node into a
domain is a default POLICY, ie how many versions and how many days to retain
files.  (You can override the default policy, of course, by using a different
management class for the node, or even a directory or filename pattern on
the node, in the include-exclude list.)

So you could have:
Node         OS       Dept          Domain     Cloptset
ordinary     NT       Acct          STANDARD   NTOPTS
acctarchive  WinXP    Acct          7YEAR      XPOPTS
newsystem    AIX      Acct          DEVEL      AIXOPTS
oldspecs     NT       Engineering   7YEAR      NTOPTS
joespc       WinXP    Engineering   STANDARD   XPOPTS
geniuspc     Solaris  Engineering   DEVEL      SOLOPTS

and so on.


 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of
 Robert L. Rippy
 Sent: Monday, December 09, 2002 12:34 PM
 To: [EMAIL PROTECTED]
 Subject: Re: What is the game plan for mixing clients on the same
 server?


 True.





 From: Hagopian, George [EMAIL PROTECTED] on 12/09/2002 12:22 PM

 Please respond to ADSM: Dist Stor Manager [EMAIL PROTECTED]

 To:   [EMAIL PROTECTED]
 cc:
 Subject:  Re: What is the game plan for mixing clients on the same server?

 I guess the same would be for the users so the Win restore people wouldn't
 touch the AIX stuff?

 -Original Message-
 From: Robert L. Rippy [mailto:[EMAIL PROTECTED]]
 Sent: Monday, December 09, 2002 12:10 PM
 To: [EMAIL PROTECTED]
 Subject: Re: What is the game plan for mixing clients on the same
 server?


  There is no problem. I have NT, W2K, 95/98, MAC, AIX all going to
  one server.
  Just plan correctly so that it's manageable. I would suggest separate
  domains.

 Thanks,
 Robert Rippy.




 From: Hagopian, George [EMAIL PROTECTED] on 12/09/2002 12:04 PM

 Please respond to ADSM: Dist Stor Manager [EMAIL PROTECTED]

 To:   [EMAIL PROTECTED]
 cc:
 Subject:  What is the game plan for mixing clients on the same server?

 I have been running TSM for all my AIX boxes...now I will be moving all
 Windows boxes to TSM as well...excited about doing this (yeah yeah yeah)
 but
 after thinking about it, would it be better to put all the Win boxes
 (2k,NT)
 to their own TSM server?
 What are the issues...if any...for having 40 or so Win boxes sharing the
 same TSM db and server with 15 large AIX boxes...

 TIA
 George Hagopian

  PS I also posted this message on the online forum...not sure what kind of hits it
 is getting




Re: Bad link to viewacct script?

2002-12-07 Thread Mr. Lindsay Morris
We fixed the broken link to our viewacct utility: go to
http://www.servergraph.com/techtip3.htm
Thanks to everyone who pointed this out.

You should be aware that this utility ONLY SHOWS BACKED_UP STATISTICS - it
does NOT show archived data, or restores, or retrieves, or HSM movement.
We keep meaning to add this but are apparently never going to get around to
it - someone else should feel free to re-post an invigorated version.

Thanks
  -Original Message-
  From: Thorson, Paul [mailto:[EMAIL PROTECTED]]
  Sent: Friday, December 06, 2002 5:09 PM
  To: [EMAIL PROTECTED]
  Subject: Bad link to viewacct script?


  Hello Mr. Morris,

  Over the years I've used REXX and awk to parse out the ADSM/TSM accounting
records.  I have tried to access your company's viewacct script (in perl?),
but the link from the TechTip #3 page gives me an error.

  A real problem or is my web browser goofy?

  Regards,

  - Paul

  Paul Thorson
  Levi, Ray & Shoup, Inc.
  Tivoli Specialist - LRS IT Solutions
  (217) 793-3800 x1704



Re: aix client installation

2002-12-06 Thread Mr. Lindsay Morris
That's interesting, Lars.  I'd never thought of doing that.
So what benefit might we get by using AIX's System Resource Controller
(that's what SRC means, right?) to start the TSM scheduler?

-
Mr. Lindsay Morris
Lead Architect, Servergraph
www.servergraph.com http://www.servergraph.com
859-253-8000 ofc
425-988-8478 fax


 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of
 Lars Bebensee
 Sent: Friday, December 06, 2002 8:11 AM
 To: [EMAIL PROTECTED]
 Subject: Re: aix client installation


 On 06.12.2002 11:20:57 ADSM: Dist Stor Manager wrote:

  dear all!
 
   I just installed 2 aix clients as described in the readme file
  (aix 4.3, tsm client 4.2.2.1)
   all worked fine, but I am not sure how to start the scheduler correctly.
 
  I inserted the following lines to /etc/inittab
 
   tsm::once:/usr/bin/dsmc sched > /dev/null 2>&1
   tsmws::once:/usr/tivoli/tsm/client/ba/bin/dsmcad > /dev/null 2>&1
 
   on another, older aix machine (aix 4.3, tsm 4.1.2) the inittab is
 different:
 
   tsm:2:once:startsrc -s dsmc > /dev/console 2>&1 # Start TSM Client Scheduler
   tsmws:2:once:/usr/tivoli/tsm/client/ba/bin/rc.adsmws > /dev/console 2>&1 # Start Webshell
 
  why can't I use startsrc on the 1st machine?

 Hi there,

  as far as I know there's no SRC functionality for the TSM client available
  by default. So unless someone seriously got srcmstr convinced, it would
  not start the dsmc in scheduling mode. You will have to start the client
  side scheduler with dsmc sched like in the first example.
 However it should be possible to add a subsystem to the system resource
 controller with something like:
  $ mkssys -s dsmc -p /usr/tivoli/tsm/client/ba/bin/dsmc -u0 -R -S -n15 -f3 -a "sched"
 Hereafter it should be possible to start and stop the scheduler via the
 startsrc and stopsrc commands. I would actually not name the
 subsystem dsmc
 but something like dsmsched to avoid confusion with invoking the dsmc
 command line client.

 Cya Lars




Re: Client locks up during backup

2002-12-06 Thread Mr. Lindsay Morris
Boost the idletimeout.  300 seconds is often too short.
Check the dsmsched.log file on the client for error messages.

-
Mr. Lindsay Morris
Lead Architect, Servergraph
www.servergraph.com http://www.servergraph.com
859-253-8000 ofc
425-988-8478 fax


 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of
 Jacque Mergens
 Sent: Friday, December 06, 2002 8:25 AM
 To: [EMAIL PROTECTED]
 Subject: Client locks up during backup


 I have a site that is running
 TSM 4.2.1 AIX server OS 5100-02
 4.2.1 Linux client Kernel 2.4.17

  This client has been running fine and now locks up between
  filesystems.  It reports that it has successfully backed up
  a filesystem, then times out at the server end (300 seconds), which
  terminates the backup.  On the client side I still see the
  scheduler process running yet nothing happens.

 I can't upgrade this site immediately but need to get these
 backups running.

 Any help would be appreciated.

 Jacque




Re: Need a SQL QUERY - Where did my big files come from?

2002-12-06 Thread Mr. Lindsay Morris
Jack, you might look at our viewacct script -
it would show you which clients had the largest backups last night - then
you can look at the dsmsched.log file on those clients and find the big
file.
For the viewacct script, see "Mining the Accounting Log" at
http://www.servergraph.com/techtipshome.shtml.

When you get around to looking at the dsmsched.log file, here's a script
(not used in a year, but was good the last time I looked) that will find
large files in your dsmsched.log that are getting sent every day.  You'll
have to get the dsmsched.log file over to a unix box, or put MKS tools or
something on the windows box.

Good luck!


=
#!/usr/bin/ksh
# analyze a dsmsched.log file: look for large files that got backed up
if [ $# -ne 1 ]
then echo "usage: $0 dsmsched.log-file-name"; exit 1
fi
fgrep "Normal File" $1 | awk '{print $1, $5, $6}' | sed 's/,//g' | sort +0r -1 +1rn -2 > /tmp/$$.1
# that file has all dates, all files.
# just get the largest 10 files from yesterday, and
# then look for them on other dates too.  We might see we're backing up
# the same file over and over again.
sed 10q /tmp/$$.1 | awk '{print $3}' > /tmp/$$.2
fgrep -f /tmp/$$.2 /tmp/$$.1 | sort +2 -3 +0r -1
rm /tmp/$$.[12]

-
Mr. Lindsay Morris
Lead Architect, Servergraph
www.servergraph.com http://www.servergraph.com
859-253-8000 ofc
425-988-8478 fax


 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of
 Coats, Jack
 Sent: Friday, December 06, 2002 9:15 AM
 To: [EMAIL PROTECTED]
 Subject: Need a SQL QUERY - Where did my big files come from?


  I have some big files (10+G and one 31+Gig file) that are being backed up.
 Is there a query that I can use to find what the big files are and what
 client they belong to?




Re: Need a SQL QUERY - Where did my big files come from?

2002-12-06 Thread Mr. Lindsay Morris
Great to see you back, Richard.

Not to quibble with the guru, but --
a: how do you know the volume name?
b: query content? Won't that take a long time and maybe adversely impact the
TSM server's performance?

-
Mr. Lindsay Morris
Lead Architect, Servergraph
www.servergraph.com http://www.servergraph.com
859-253-8000 ofc
425-988-8478 fax


 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of
 Richard Sims
 Sent: Friday, December 06, 2002 9:22 AM
 To: [EMAIL PROTECTED]
  Subject: Re: Need a SQL QUERY - Where did my big files come from?


  I have some big files (10+G and one 31+Gig file) that are being backed up.
 Is there a query that I can use to find what the big files are and what
 client they belong to?

 The TSM administrator sometimes has to determine what files a current or
 recent backup has sent.  The simplest way is, knowing the volume they went
 to:

query content Volname count=-20 format=detailed

 for example, where some reasonable negative number will show you
 the latest
 files on the volume, listed most recent first.  This is easiest with large
 backup files, like yours, where they will very likely show up
 within a limited
 count value.

   Richard Sims, BU




Re: Bocada .v. Servograph/TSM .or. alternatives??

2002-12-02 Thread Mr. Lindsay Morris
Tony, I'd be interested to see whether Bocada's backup-status reports show
you
-whether any FILES were missed (e.g. NTUSER.DAT and such)
-if there are old filespaces wasting space in your library.

Bocada's good feature is that it works across several products: TSM,
Veritas, etc.
But it doesn't seem to go very deep.

Please let me know if I'm wrong about this. Thanks.

-
Mr. Lindsay Morris
Lead Architect, Servergraph
www.servergraph.com http://www.servergraph.com
859-253-8000 ofc
425-988-8478 fax


 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of
 Tony Morgan
 Sent: Monday, December 02, 2002 11:03 AM
 To: [EMAIL PROTECTED]
 Subject: Bocada .v. Servograph/TSM .or. alternatives??


 Hi World,

 I am looking to buy a TSM reporting tool, with Billing capabilities.

 I am aware of Bocada and Servograph/TSM.

 Does anyone have knowledge of either, or something better???

 Any information will be gratefully received.

 Best Regards  Seasons Greetings (we all have something to celebrate!!)

 Tony Morgan
 Fortis Bank
 London

 

 This e-mail and any files transmitted with it, are confidential and
 intended solely for the use of the addressee. The content of this
 e-mail may have been changed without the consent of the originator.
 The information supplied must be viewed in this context. If you have
 received this e-mail in error please notify our Helpdesk by
 telephone on +44 (0) 20-7444-8444. Any use, dissemination,
 forwarding, printing, or copying of this e-mail or its attachments is
 strictly prohibited.




Re: eventlogging after installing Servergraph

2002-11-29 Thread Mr. Lindsay Morris
Niklas, Servergraph makes use of the ANE events - for instance, you can see
ALL the missed files (like NTUSER.DAT, etc) in one place, on the Unexpected
Events page - but not if you disable the events, of course.

Contact [EMAIL PROTECTED], and we'll get things working the way you
want them.
Thanks.

-
Mr. Lindsay Morris
Lead Architect, Servergraph
www.servergraph.com
859-253-8000 ofc
425-988-8478 fax


-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of
Niklas Lundstrom
Sent: Friday, November 29, 2002 6:13 AM
To: [EMAIL PROTECTED]
Subject: eventlogging after installing Servergraph


Hello

I did a test installation of Servergraph yesterday and today I get ANE events
in my console.
It seems that Servergraph enables a lot of events to be logged. How do I get
rid of the ANE events from my console?

MVH
Niklas Lundström
Swedbank



Re: Activity Log

2002-11-20 Thread Mr. Lindsay Morris
If you're concerned about your activity log eating up too much space in your
TSM database, don't be.  It uses relatively tiny amounts compared to the
contents table.  But there's probably no good reason to keep more than 30
days.

-
Mr. Lindsay Morris
Lead Architect, Servergraph
www.servergraph.com
859-253-8000 ofc
425-988-8478 fax


 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of
 Nelson Kane
 Sent: Wednesday, November 20, 2002 9:44 AM
 To: [EMAIL PROTECTED]
 Subject: Activity Log


 Good Morning TSM'ers,
 Can someone tell me where the activity log maps to on the AIX side.
 I would like to expand the duration of the log, but I need to
 know how much
 is being utilized currently before I do that.
 Thanks in advance
 -kane




Re: dsmadmc with options

2002-11-20 Thread Mr. Lindsay Morris
Add another server stanza in dsm.opt:
servername other
tcpserveraddress other.dns.name.or.ip
then you can say
dsmadmc -se=other ...
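Spelled out a little more (a sketch: server names and addresses here are placeholders, and 1500 is the default TSM TCP port):

```
* dsm.opt - one stanza per TSM server; names and addresses are examples only
SErvername        prod
  TCPServeraddress  tsm-prod.example.com
  TCPPort           1500

SErvername        other
  TCPServeraddress  other.dns.name.or.ip
  TCPPort           1500
```

Then "dsmadmc -se=prod" or "dsmadmc -se=other" picks the matching stanza.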


-
Mr. Lindsay Morris
Lead Architect, Servergraph
www.servergraph.com
859-253-8000 ofc
425-988-8478 fax


 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of
 Jin Bae Chi
 Sent: Wednesday, November 20, 2002 3:24 PM
 To: [EMAIL PROTECTED]
 Subject: dsmadmc with options


 Hi, TSMers,

 I'm managing 2 TSM servers on AIX now. When I installed TSM client on
 my desktop WindowsXP, I could install also command line admin. But, I
 can only use it for one server. Is there anyway that I can specify 2
 different servername or TCPIPaddress for servers so that I can start
 'dsmadmc' with option flag for each server? I checked admin
 documentation and couldn't find dsmadmc command option for this purpose.
 Thanks for your help as always!



 Jin Bae Chi (Gus)
 System Admin/Tivoli
 Data Center
 614-287-5270
 614-287-5488 Fax
 [EMAIL PROTECTED]




Re: kindly help - dsmsched.log: is it only fro local client inTS M?

2002-11-19 Thread Mr. Lindsay Morris
Murali, be aware that some clients/situations (like on a restarted restore)
will put incorrect data in these statistics records.  We got burned by that
so often developing Servergraph that we went to a completely different
method of reporting - use q filespace for the daily status (is it backed
up or not), and accounting logs to detect how many bytes it sent.

You can look at the summary table, but that is sometimes erroneous, too (or
so I hear).

Not the answer you were looking for, I know, but I hope it saves you some
wasted effort.

-
Mr. Lindsay Morris
Lead Architect, Servergraph
www.servergraph.com
859-253-8000 ofc
425-988-8478 fax


 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of
 murali ramaswamy
 Sent: Tuesday, November 19, 2002 5:04 PM
 To: [EMAIL PROTECTED]
 Subject: Re: kindly help - dsmsched.log: is it only fro local client
 inTS M?


 Hi,
 Thanks.
 See, the log file gives neatly the backup details for each schedule along
 with exceptions that occurred as given at end.  How do I get in one query
 all this information along with Exceptions if any for a schedule at a given
 time?  Could you please provide me with a query that will work even if the
 ACTLOG gets bigger (as I have seen when ACTLOG becomes bigger in size the
 query does not give full results as it breaks in the middle with
 the message
 'communication session ended. reconnect to get the results'.)
 Thanks
 -murali
 11/08/2002 14:02:34 Total number of objects inspected:9,996
 11/08/2002 14:02:34 Total number of objects backed up:   74
 11/08/2002 14:02:34 Total number of objects updated:  0
 11/08/2002 14:02:34 Total number of objects rebound:  0
 11/08/2002 14:02:34 Total number of objects deleted:  0
 11/08/2002 14:02:34 Total number of objects expired:372
 11/08/2002 14:02:34 Total number of objects failed:  18
 11/08/2002 14:02:34 Total number of bytes transferred: 9.61 MB
 11/08/2002 14:02:34 Data transfer time:1.58 sec
 11/08/2002 14:02:34 Network data transfer rate:6,225.75 KB/sec
 11/08/2002 14:02:34 Aggregate data transfer rate:289.51 KB/sec
 11/08/2002 14:02:34 Objects compressed by:0%
 11/08/2002 14:02:34 Elapsed processing time:   00:00:34






 From: Damron, Nancy E. [EMAIL PROTECTED]
 Reply-To: ADSM: Dist Stor Manager [EMAIL PROTECTED]
 To: [EMAIL PROTECTED]
 Subject: Re: kindly help - dsmsched.log: is it only fro local
 client  inTS
 M?
 Date: Tue, 19 Nov 2002 13:49:33 -0800
 
 Hi
 The act log is the only centralize location that will give you
 information
 on all your backups. The dsmsched.log is just that...a log of
 the schedule
 that ran on that box.
 
 -Original Message-
 From: murali ramaswamy [mailto:[EMAIL PROTECTED]]
 Sent: Tuesday, November 19, 2002 1:42 PM
 To: [EMAIL PROTECTED]
 Subject: kindly help - dsmsched.log: is it only fro local client inTSM?
 
 
 Hi,
Can you please tell if the dsmsched.log in Tivoli Storage Manager the
 location of which is configured through the TSM client GUI is only for
 displaying log information for the local client ( the client installed in
 the same box) only? I have on my machine(Windows2000) installed both
 TSM
 server and client and there is another machine in network mc-A
 which is also Windows2000, on which I installed TSM client.  In the
 dsmsched.log, after scheduling backups on my machine and mc-A, I see only
 information about my machine backups and not about mc-A.  But on querying
 ACTLOG I could see information of backup about mc-A also.  So is this log
 only for client which is installed on my box and is there not any
 centralized server log that could give all backups from all other clients
 for this TSM server?
 Thanks
 -murali
 
 
 
 
  From: David Longo [EMAIL PROTECTED]
  Reply-To: ADSM: Dist Stor Manager [EMAIL PROTECTED]
  To: [EMAIL PROTECTED]
  Subject: Re: Please help - ANS1311E Server out of data storage space
 inTSM
  Date: Tue, 19 Nov 2002 15:54:08 -0500
  
  For the 4.2 Web Admin drill down with:
  
  Object view
  Server Storage
Disk Storage Pools
  
  Then select your disk pool, then Update it and set
  Cache Migrated Files to Yes.
  
  From command line it is the Update STG command.
  
  David Longo
  
[EMAIL PROTECTED] 11/19/02 02:36PM 
  Hi,
 How do I do the disk caching etc through Web Admin GUI page on the
  server?
 I did not set anything manually (I dont know where to do that).  Is
  it
  disabled by default?  How do I move data of volumes through GUI or any
  tool?
  Thanks
  -murali
  
  
  
  
  
  
   From: J D Gable [EMAIL PROTECTED]
   Reply-To: ADSM: Dist Stor Manager [EMAIL PROTECTED]
   To: [EMAIL PROTECTED]
   Subject: Re: Please help - ANS1311E Server out of data storage space
  in
   TSM
   Date: Tue, 19 Nov 2002 13:25:11 -0600
   
   Do you have disk caching enabled

Re: SQL 2.2.1

2002-11-13 Thread Mr. Lindsay Morris
Bill's right (as usual).  The lack of backup start-end times on individual
filespaces is pretty crucial.

The way I'd like to see reporting of databases (if I were king) would be
similar to client nodes:
a client node has filespaces
a database has tablespaces
both of those things have (in concept at least)
   - a last-backup end time
   - a size in GB
   - a percent-full reading

Presently, TDP for XXX agents don't tell you this at the filespace level.
But there's a fairly easy way around it



(OK, the below MIGHT be construed as advertising - be warned and hang up now
if you're offended)
sure seems like a problem-solver to me though...



But there's a fairly easy way around it, if you're using Servergraph anyway:
-- run a daily SQL script on the database clients, to generate a table like
this:
database-name    tablespace-name    size    percent-full    backup-end (maybe)
   You can probably get the backup-end date by scanning the TDP client
logs.
-- use sgdwrite to pump that into the Servergraph database over the
network.
   (We'll be glad to do this if somebody will write the Oracle/Informix/etc
SQL.)

Now Servergraph will treat the database and its tablespaces as just another
client node with filespaces, and use the same logic as for client nodes to
determine whether the DB was currently backed up.

A great incidental benefit: since we keep trending history on all
measurements, the percent-full reading will generate  predictions and/or
alerts on when each individual tablespace INSIDE the database will fill up!

Your DBAs ought to love that.
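As a sketch of such a record (the awk helper and the numbers are invented, not Servergraph's; only the sgdwrite step mentioned above is product-specific, and its invocation isn't shown):

```shell
# Build one 'databasename tablespacename size percent-full' record, in the
# shape proposed above. Sizes are hard-coded stand-ins for what a daily
# DBA query would return.
emit_record() {   # args: db tablespace size_mb used_mb
  awk -v db="$1" -v ts="$2" -v sz="$3" -v used="$4" \
      'BEGIN { printf "%s %s %d %.1f\n", db, ts, sz, 100 * used / sz }'
}

emit_record PAYROLL USERS 2048 1536    # -> PAYROLL USERS 2048 75.0
# ...then pipe the records into sgdwrite over the network, as described above.
```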


 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:ADSM-L;VM.MARIST.EDU]On Behalf Of
 Bill Boyer
 Sent: Wednesday, November 13, 2002 10:13 AM
 To: [EMAIL PROTECTED]
 Subject: Re: SQL 2.2.1


 Del,
 The filespace information only shows me the backup
 start/end for the
 INSTANCE. I could go in and back up a single database and the filespace
 information will change? It doesn't tell me that there is a database in an
 instance that hasn't backed up in xx-days, like I could get that
 information
 from B/A client filespaces. I believe Mr. Lindsay Morris says that
 ServerGraph uses the filespace information to list filespaces that haven't
 been backed up in x-days, or even last night. He uses this information
 instead of the client completion or event records for
 successful backups.

 Take DB2 for example...there is a filespace for each
 database backed up.
 Now the backup start/end times aren't maintained, but if they
 were this type
 of information would be of more benefit in tracking backups, cleanup,...

 Now I love the strides taken in the TDP agent reporting,
 but there are
 still some things I would like to see. Makes central monitoring a bit
 easier...

 Bill Boyer
 DSS, Inc.
 There are 10 kinds of people in the world: those who understand binary and
 those who don't. - ??


 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:ADSM-L;VM.MARIST.EDU]On Behalf Of
 Del Hoobler
 Sent: Wednesday, November 13, 2002 9:28 AM
 To: [EMAIL PROTECTED]
 Subject: Re: SQL 2.2.1


 Mehdi,

 What filespace information are you referring to?

 DP for SQL will not update:
Capacity (MB)
Pct Util
 because it is ambiguous in the context of DP for SQL.
 Remember, those numbers correlate back to a volume
 capacity and how much of it is utilized.

 but DP for SQL will update:
Node Name
Filespace Name
FSID
Platform
Filespace Type
Last Backup Start Date/Time
Last Backup Completion Date/Time

 Thanks,

 Del

 

 Del Hoobler
 IBM Corporation
 [EMAIL PROTECTED]

 - Never cut what can be untied.
 - Commit yourself to constant improvement.

 

  Can someone email me a good example of DSM.OPT, SQLFULL.CMD for
 SQL 2.2.1
 as
  well as the batch file they use to start SQL Scheduler Service.
  I do run
 my
  SQLFULL.CMD successfully and the log shows me Number of bytes sent but
 TSM
  Server does not update its FileSpace information.




Re: Problems with 5.1.5.2

2002-11-13 Thread Mr. Lindsay Morris
Accounting is broken?  Hard to believe...it's always been reliable.
Could it be that you're not looking at the storage agent's accounting logs
(if you use any LAN-free storage agents)?
They live on the client, and all that data is indeed missing from the
server's accounting logs. You have to combine them manually.

 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:ADSM-L;VM.MARIST.EDU]On Behalf Of
 Mahesh Tailor
 Sent: Wednesday, November 13, 2002 2:49 PM
 To: [EMAIL PROTECTED]
 Subject: Re: Problems with 5.1.5.2


 Also [on 5.1.5.2], you will see that the summary table is broken [yes,
 again!] and, if you use accounting, it does not work either!

 I have a PMR open for the summary table issue and I have alerted
 support of the accounting issue also.

 Mahesh

  [EMAIL PROTECTED] 11/13/02 02:19PM 
 Are these NT/W2K clients?  What level of TSM did you upgrade from?  We
 are seeing a similar problem at tsm 5.1.1.0 on AIX 5.1.0.0 with both
 server and aix in 64 bit mode.  I have an open issue with Tivoli  (PMR
 68765,082) and I'm waiting on a client to get back from vacation so we
 can run a trace during backup.

 David

  [EMAIL PROTECTED] 11/13/02 01:41PM 
 I am running tsm 5.1.5.2 on AIX 5.1.0.2 both are running in 64bit
 mode.
 After I upgraded to 5.1.5.2 every morning I have a few sessions that
 are
 running extremely slow at 20k/s.  They start out fine at night running
 at
 40MB/s on the private gig network setup for the two servers.  Before
 they
 would complete backup at 3:20am now it just seems to hang and stay in
 a
 recw
 mode receiving data very slowly.

 Any body experiencing anything like this?

 Thanks

 ***EMAIL DISCLAIMER***
 This
 email and any files transmitted with it may be confidential and are
 intended
 solely for the use of the individual or entity to whom they are
 addressed.
 If you are not the intended recipient or the individual responsible
 for
 delivering the e-mail to the intended recipient, any disclosure,
 copying,
 distribution or any action taken or omitted to be taken in reliance on
 it,
 is strictly prohibited.  If you have received this e-mail in error,
 please
 delete it and notify the sender or contact Health Information
 Management
 312.996.3941.




Re: ITSM Vs. Veritas NBU

2002-11-09 Thread Mr. Lindsay Morris
This is a great report!
It's past time somebody took the time to just DO THE MEASUREMENTS!
Everybody should have this handy for when the high-powered Veritas salesmen
approach your CIO.
Thanks for putting it on your site, Mark!

 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:ADSM-L;VM.MARIST.EDU]On Behalf Of
 Mark D. Rodriguez
 Sent: Saturday, November 09, 2002 8:44 PM
 To: [EMAIL PROTECTED]
 Subject: ITSM Vs. Veritas NBU


 Hi,

 Sorry I am reposting this but the first post didn't format correctly.

 Attached below is a news release that has just come out or will be
 coming out next week.  In addition, I have the detailed PDF document of
 the independent study that was done comparing the 2 products.  I
 wouldn't call it a slam dunk win for ITSM but it was a very clear victory.
 I am looking forward to some discussion on this list about the report. I
 think there are a couple of areas where the test was not fair (favored
 Veritas), but ITSM won anyway; I will bring that up after some people
 have looked over the report.

 If anyone is interested I can send them copies of the PDF file.  Also, I
 will be putting it on my web site shortly (by this evening),
 http://mdrconsult.com and you will be able to get it from there as well.

 In addition to the press release below and the PDF report there are two
 other files, a Power Point presentation and a FAQ about the study.

 --
 Regards,
 Mark D. Rodriguez
 President MDR Consulting, Inc.

 ==
 =
 MDR Consulting
 The very best in Technical Training and Consulting.
 IBM Advanced Business Partner
 SAIR Linux and GNU Authorized Center for Education
 IBM Certified Advanced Technical Expert, CATE
 AIX Support and Performance Tuning, RS6000 SP, TSM/ADSM and Linux
 Red Hat Certified Engineer, RHCE
 ==
 =


 IBM Tivoli Storage Manager Demonstrates Critical Advantages Over NetBackup

 NEW YORK CITY, NY, October XX, 2002 - Results of recently completed
 benchmark tests administered by analyst firm Progressive Strategies show
 IBM Tivoli Storage Manager offering distinct advantages in backup and
 restore performance over VERITAS NetBackup.

 "Tivoli has always maintained that IBM Tivoli Storage Manager's
 advantages in performance and tape resource conservation accrue over
 time," explains Barry Cohen, Project Director at Progressive Strategies.
 "Our test scenario bears that out." "This product is much better suited to
 the realities that companies encounter today and will encounter in the
 future in their data management and storage environments," said Craig
 Norris, analyst with Progressive Strategies. "NetBackup is
 unquestionably a solid product; however, limitations integral to its
 architecture and its traditional approach to backup and restore
 processes become problematic as the amount of data in a business grows,
 and as the IT infrastructure evolves to manage it all."

 This ambitious benchmark test sought to establish a realistic business
 data management scenario by regularly backing up data from multiple
 client computers over a month long period. The test results reveal that
 over this more realistic time period, IBM Tivoli Storage Manager
 exhibits significantly better performance in overall backup and restore
 in comparison to Veritas NetBackup.  In these tests, the complete backup
 cycle was 36% faster for IBM Tivoli Storage Manager, and it was 66%
 faster than NetBackup in restore tests. NetBackup sent considerably more
 data over the network and used four times more tape media to hold that
 data over the length of the testing period.

 Descriptions of the benchmark and results of the tests are presented as
 an appendix to an in-depth research report that examines features and
 functions of IBM Tivoli Storage Manager and compares them with
 NetBackup. The report examines how the products address several key
 areas critical to backup and restore solutions for business data. Its
 primary focus is on product architecture and its impact on cost of
 ownership and return on investment, the flexibility to adapt to rapidly
 evolving business requirements, and the cost benefits derived from
 implementing standards-based end-to-end solutions. The report concludes
 that IBM Tivoli Storage Manager offers the data protection solution more
 appropriate to what businesses require today.

 Both VERITAS and IBM Tivoli demonstrate a firm grasp of the challenges
 facing modern businesses. These challenges include burgeoning data and
 complex, ever-expanding storage infrastructures. However, the
 Progressive Strategies report illustrates how IBM Tivoli Storage
 Manager's architecture is better suited to handle these challenges in
 managing business data.

 "We found that IBM Tivoli Storage Manager provides a superior solution
 to all the most critical challenges we considered," says Norris. The
 foresight IBM Tivoli 

Re: check on allowed scratch tapes!

2002-11-08 Thread Mr. Lindsay Morris
One of our Servergraph monitors does, essentially, this:
dsmadmc ... q libvol | grep Scratch | wc -l
to spit out how many scratch tapes are in use.
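Fleshed out slightly (a sketch only: the q libvol sample below is invented, and MAXSCRATCH is hard-coded here — on a live server you could fetch it with something like "select stgpool_name, maxscratch from stgpools", assuming that column name):

```shell
# Warn when the count of in-use scratch volumes nears the pool's MAXSCRATCH.
# Sample 'q libvol'-style output; a real run would pipe from dsmadmc instead.
cat > /tmp/libvol.txt <<'EOF'
LIB3494   A00001   Scratch
LIB3494   A00002   Private
LIB3494   A00003   Scratch
EOF

used=$(grep -c 'Scratch' /tmp/libvol.txt)
max=250          # the pool's MAXSCRATCH value, typed in by hand for the example
if [ "$used" -ge $((max - 5)) ]; then
  echo "WARNING: $used of $max scratch volumes in use"   # mail this instead
else
  echo "OK: $used of $max scratch volumes in use"
fi
```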

-
Mr. Lindsay Morris
Lead Architect, Servergraph
www.servergraph.com
859-253-8000 ofc
425-988-8478 fax


 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:ADSM-L;VM.MARIST.EDU]On Behalf Of
 Chetan H. Ravnikar
 Sent: Friday, November 08, 2002 1:23 AM
 To: [EMAIL PROTECTED]
 Subject: check on allowed scratch tapes!


 Hello
 I am often caught with my customers complaining that clients are
 sometimes failing with the error server out of storage space!

 What I am looking for is the value which is set in a storage pool:
   *Max allowed scratch tapes*.
 We have 250 Maximum Scratch Volumes Allowed on OS_TAPEPOOL_SERVER (onsite
 pool); when the number starts getting close to 245, a mail gets sent out!

 Any select statement which gives that value!? I can script
 something.. quickly

 thanks for the help
 Chetan




Re: Multiple sessions per tape

2002-10-31 Thread Mr. Lindsay Morris
1. Hundreds of clients can simultaneously back up to a disk pool.  Only four
clients can back up simultaneously to a library with four tape drives.

2. Tape drives perform horribly if they stop and start frequently.  If you
can stream the data to them fast enough to keep them busy, they perform very
well.  One client going over a network cannot keep up with a tape drive, so
the drive stops and starts and performs badly.  But when the TSM server
migrates the disk pool to tape, it WILL feed the tape drive fast enough to
get good performance out of it.

3. If you cache your disk pool, next-day restores can come directly from
disk - much faster than tape.

Hope this helps.
-
Mr. Lindsay Morris
Lead Architect, Servergraph
www.servergraph.com
859-253-8000 ofc
425-988-8478 fax


 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:ADSM-L;VM.MARIST.EDU]On Behalf Of
 Conko, Steven
 Sent: Thursday, October 31, 2002 8:34 AM
 To: [EMAIL PROTECTED]
 Subject: Re: Multiple sessions per tape


 why does having the diskpool in between improve performance?

 -Original Message-
 From: Alexander Lazarevich [mailto:alazarev;HERA.ITG.UIUC.EDU]
 Sent: Wednesday, October 30, 2002 4:35 PM
 To: [EMAIL PROTECTED]
 Subject: Re: Multiple sessions per tape


 Totally. We used to backup to tape. But since putting a disk spool
 inbetween, the performance has significantly improved.

 Alex
 ------
Alex Lazarevich | Systems | Imaging Technology Group
[EMAIL PROTECTED] | (217)244-1565 | www.itg.uiuc.edu
 ------


 On Wed, 30 Oct 2002, Joshua Bassi wrote:

  It is restricted to one.  The best way to backup your data is to a disk
  pool first, then bleed the diskpool to tape and you will get much better
  performance out of your single session than you would if you backed up
  directly to tape.
 
 
  --
  Joshua S. Bassi
  IBM Certified - AIX 4/5L, SAN, Shark
  Tivoli Certified Consultant - ADSM/TSM
  eServer Systems Expert -pSeries HACMP
 
  AIX, HACMP, Storage, TSM Consultant
  Cell (831) 595-3962
  [EMAIL PROTECTED]
 
 
  -Original Message-
  From: ADSM: Dist Stor Manager [mailto:ADSM-L;VM.MARIST.EDU] On Behalf Of
  Conko, Steven
  Sent: Wednesday, October 30, 2002 10:16 AM
  To: [EMAIL PROTECTED]
  Subject: Multiple sessions per tape
 
  We're running ADSM 4.2 on AIX 4.3.3, 3494 library, 3590 drives.
 
  Is there any way to control the number of sessions per tape or is it
  restricted to one?
 
  Steven A. Conko
  Senior Unix Systems Administrator
  ADT Security Services, Inc.
 




Re: Computer Associates - Brightstore Resource Manager for TSM

2002-10-29 Thread Mr. Lindsay Morris
CA seems to buy tools / companies and market them like crazy, but not put
any energy into improving or supporting them.

I'd be interested in hearing your take on what features this product has,
and what it costs.
-
Mr. Lindsay Morris
Lead Architect, Servergraph
www.servergraph.com
859-253-8000 ofc
425-988-8478 fax


 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:ADSM-L;VM.MARIST.EDU]On Behalf Of
 Jim Taylor
 Sent: Tuesday, October 29, 2002 1:27 PM
 To: [EMAIL PROTECTED]
 Subject: Computer Associates - Brightstore Resource Manager for TSM


 Anyone out there heard of this CA product and have any experience with it.
 We have just been bought out by a large company that has bought into CA
 tools, big time.  They want me to look at this CA component that
 was written
 for TSM.  My question is ' will it provide me with anything of value'?

 Skeptically yours,

 Jim Taylor
 Technology Services
 Enlogix
 *  E-mail: [EMAIL PROTECTED]
 *  Office: (416) 496-5266
 * Cell:  (416)458-6802
 *   Fax: (416) 496-5245




Re: Very long backup/So many files

2002-10-25 Thread Mr. Lindsay Morris
Check the accounting logs - use our tool if you want at
http://www.servergraph.com/techtip3.htm.

I bet you see a huge amount of idle wait. What may be happening is that the
 directory structure on the client is very deep, and the client is taking a
very long time to walk the tree - 4 million objects inspected! - and
IDLETIMEOUT is set too short (default 60 sec I think - most people set this
longer - on the TSM server as I recall) so that's why the node drops out a
couple of times a night.

-
Mr. Lindsay Morris
Lead Architect, Servergraph
www.servergraph.com
859-253-8000 ofc
425-988-8478 fax


 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:ADSM-L;VM.MARIST.EDU]On Behalf Of
 Gill, Geoffrey L.
 Sent: Friday, October 25, 2002 10:05 AM
 To: [EMAIL PROTECTED]
 Subject: Very long backup/So many files


 Does anyone think a computer that has this many files on it, that only
 backed up 12,222 files, should take over 10 hours to complete?
 I'm having a
 problem with this node dropping out a couple of times a night for
 being idle
 for more than 60 minutes too.

 We've double checked the NIC and switch port for the proper
 settings. When I
 looked at it last night the CPU wasn't doing anything.

 10/25/02 06:13:04 ANE4952I (Session: 3357, Node:
 CP-ITS-DCMECPD)  Total

number of objects inspected: 4,151,721

 10/25/02 06:13:04 ANE4954I (Session: 3357, Node:
 CP-ITS-DCMECPD)  Total

number of objects backed up:   12,222

 10/25/02 06:13:04 ANE4958I (Session: 3357, Node:
 CP-ITS-DCMECPD)  Total

number of objects updated:  0

 10/25/02 06:13:04 ANE4960I (Session: 3357, Node:
 CP-ITS-DCMECPD)  Total

number of objects rebound:  0

 10/25/02 06:13:04 ANE4957I (Session: 3357, Node:
 CP-ITS-DCMECPD)  Total

number of objects deleted:  0

 10/25/02 06:13:04 ANE4970I (Session: 3357, Node:
 CP-ITS-DCMECPD)  Total

number of objects expired:386

 10/25/02 06:13:04 ANE4959I (Session: 3357, Node:
 CP-ITS-DCMECPD)  Total

number of objects failed:   0

 10/25/02 06:13:04 ANE4961I (Session: 3357, Node:
 CP-ITS-DCMECPD)  Total

number of bytes transferred: 1.15 GB

 10/25/02 06:13:04 ANE4963I (Session: 3357, Node: CP-ITS-DCMECPD)  Data

transfer time:   73.02 sec

 10/25/02 06:13:04 ANE4966I (Session: 3357, Node: CP-ITS-DCMECPD)
 Network
data transfer rate:16,571.01 KB/sec

 10/25/02 06:13:04 ANE4967I (Session: 3357, Node: CP-ITS-DCMECPD)
 Aggregate
data transfer rate: 32.91 KB/sec

 10/25/02 06:13:04 ANE4968I (Session: 3357, Node: CP-ITS-DCMECPD)
 Objects
compressed by:   19%

 10/25/02 06:13:04 ANE4964I (Session: 3357, Node: CP-ITS-DCMECPD)
 Elapsed
processing time:10:12:51


 Geoff Gill
 TSM Administrator
 NT Systems Support Engineer
 SAIC
 E-Mail:mailto:gillg;saic.com [EMAIL PROTECTED]
 Phone:  (858) 826-4062
 Pager:   (877) 905-7154




Re: Very long backup/So many files

2002-10-25 Thread Mr. Lindsay Morris
No, this is the recent (well, six months old) Journal-based backup
feature - it only works on Windows clients so far.
See your new client manual.

-
Mr. Lindsay Morris
Lead Architect, Servergraph
www.servergraph.com
859-253-8000 ofc
425-988-8478 fax


 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:ADSM-L;VM.MARIST.EDU]On Behalf Of
 Justin Case
 Sent: Friday, October 25, 2002 10:41 AM
 To: [EMAIL PROTECTED]
 Subject: Re: Very long backup/So many files


 How do we turn on Journaling service ?

 Is this the storage agents ?

 Thanks
 Justin Case
 Duke University
 Durham NC





 Miller, Ryan [EMAIL PROTECTED]@VM.MARIST.EDU on 10/25/2002
 10:16:52 AM

 Please respond to ADSM: Dist Stor Manager [EMAIL PROTECTED]

 Sent by:ADSM: Dist Stor Manager [EMAIL PROTECTED]


 To:[EMAIL PROTECTED]
 cc:

 Subject:Re: Very long backup/So many files


 Very easily!  The problem is not that you are only backing up
 12,222 files,
 but unless you are using the Journaling service, TSM has to look at each
 one of those 4,000,000+ files to see if it needs to back it up or
 not.  The
 10 hours is being spent doing this.  We have many clients in this
 situation, when we implement the Journaling service, we take this 10 - 12
 hour back ups and make them 5 - 10 minutes!  I would be concerned with
 losing connection though, it could just be that the client is taking so
 long to process at its end that it will not talk to the TSM server often
 enough.  I would try out Journaling and see how that helps.

 Ryan Miller

 Principal Financial Group

 Tivoli Certified Consultant
 Tivoli Storage Manager v4.1


 -Original Message-
 From: Gill, Geoffrey L. [mailto:GEOFFREY.L.GILL;SAIC.COM]
 Sent: Friday, October 25, 2002 9:05 AM
 To: [EMAIL PROTECTED]
 Subject: Very long backup/So many files


 Does anyone think a computer that has this many files on it, that only
 backed up 12,222 files, should take over 10 hours to complete?
 I'm having a
 problem with this node dropping out a couple of times a night for being
 idle
 for more than 60 minutes too.

 We've double checked the NIC and switch port for the proper settings. When
 I
 looked at it last night the CPU wasn't doing anything.

 10/25/02 06:13:04 ANE4952I (Session: 3357, Node:
 CP-ITS-DCMECPD)  Total

number of objects inspected: 4,151,721

 10/25/02 06:13:04 ANE4954I (Session: 3357, Node:
 CP-ITS-DCMECPD)  Total

number of objects backed up:   12,222

 10/25/02 06:13:04 ANE4958I (Session: 3357, Node:
 CP-ITS-DCMECPD)  Total

number of objects updated:  0

 10/25/02 06:13:04 ANE4960I (Session: 3357, Node:
 CP-ITS-DCMECPD)  Total

number of objects rebound:  0

 10/25/02 06:13:04 ANE4957I (Session: 3357, Node:
 CP-ITS-DCMECPD)  Total

number of objects deleted:  0

 10/25/02 06:13:04 ANE4970I (Session: 3357, Node:
 CP-ITS-DCMECPD)  Total

number of objects expired:386

 10/25/02 06:13:04 ANE4959I (Session: 3357, Node:
 CP-ITS-DCMECPD)  Total

number of objects failed:   0

 10/25/02 06:13:04 ANE4961I (Session: 3357, Node:
 CP-ITS-DCMECPD)  Total

number of bytes transferred: 1.15 GB

 10/25/02 06:13:04 ANE4963I (Session: 3357, Node: CP-ITS-DCMECPD)  Data

transfer time:   73.02 sec

 10/25/02 06:13:04 ANE4966I (Session: 3357, Node: CP-ITS-DCMECPD)
 Network
data transfer rate:16,571.01 KB/sec

 10/25/02 06:13:04 ANE4967I (Session: 3357, Node: CP-ITS-DCMECPD)
 Aggregate
data transfer rate: 32.91 KB/sec

 10/25/02 06:13:04 ANE4968I (Session: 3357, Node: CP-ITS-DCMECPD)
 Objects
compressed by:   19%

 10/25/02 06:13:04 ANE4964I (Session: 3357, Node: CP-ITS-DCMECPD)
 Elapsed
processing time:10:12:51


 Geoff Gill
 TSM Administrator
 NT Systems Support Engineer
 SAIC
 E-Mail:mailto:gillg;saic.com [EMAIL PROTECTED]
 Phone:  (858) 826-4062
  Pager:   (877) 905-7154




Re: Very long backup/So many files

2002-10-25 Thread Mr. Lindsay Morris
Gee, I'd disagree...
this example has a small percentage: 12,000 files backed up out of 4
million - but it still took a long time because it had to walk through 4
million files to FIND those 12,000 that needed to be backed up.

The walk-through-4-million is exactly what the Journal-based backup
prevents.
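A toy shell illustration of the difference (paths invented; this only mimics the idea, not the actual TSM journal service):

```shell
# Full-scan incremental must inspect every file to find the few that changed;
# a journal-based run reads only the paths a change journal recorded.
dir=$(mktemp -d)
mkdir -p "$dir/deep/tree"
printf 'x' > "$dir/deep/tree/a.txt"
printf 'x' > "$dir/deep/tree/b.txt"

# Full scan: walk the whole tree (cheap here, painful with 4 million files)
find "$dir" -type f | wc -l

# Journal-based: only the one changed path was logged, so only it is inspected
echo "$dir/deep/tree/a.txt" > "$dir.journal"
wc -l < "$dir.journal"
```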

-
Mr. Lindsay Morris
Lead Architect, Servergraph
www.servergraph.com
859-253-8000 ofc
425-988-8478 fax


 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:ADSM-L;VM.MARIST.EDU]On Behalf Of
 Gianluca Mariani1
 Sent: Friday, October 25, 2002 11:23 AM
 To: [EMAIL PROTECTED]
 Subject: Re: Very long backup/So many files


 if the percentage of changed files to be backed up from a client is under 5%,
 then traditional incremental is preferable.
 No other big discriminating points.

 Cordiali saluti
 Gianluca Mariani
 Tivoli TSM Global Response Team, Roma
 Via Sciangai 53, Roma
  phones : +39(0)659664598
+393351270554 (mobile)
 [EMAIL PROTECTED]
 --
 --

 The people of Krikkit,are, well, you know, they're just a bunch of real
 sweet guys, you know, who just happen to want to kill everybody. Hell, I
 feel the same way some mornings...



 From: Jon Evans <Jon.Evans@HALLIBURTON.COM>
 Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>
 To: [EMAIL PROTECTED]
 Date: 25/10/2002 16:54
 Subject: Re: Very long backup/So many files
 (Please respond to "ADSM: Dist Stor Manager")

 Can anyone please explain the disadvantages of using journal-based backup?

 -Original Message-
 From: Gianluca Mariani1 [mailto:gianluca_mariani;IT.IBM.COM]
 Sent: 25 October 2002 15:47
 To: [EMAIL PROTECTED]
 Subject: Re: Very long backup/So many files

 what platform are you running(OS on client and server)?
 code levels(client  server)?
 is journaled backup in use?
 how is the client connected to the server (what network)?

 Cordiali saluti
 Gianluca Mariani
 Tivoli TSM Global Response Team, Roma
 Via Sciangai 53, Roma
  phones : +39(0)659664598
+393351270554 (mobile)
 [EMAIL PROTECTED]
 --
 --

 

 The people of Krikkit,are, well, you know, they're just a bunch of real
 sweet guys, you know, who just happen to want to kill everybody. Hell, I
 feel the same way some mornings...



 From: Gill, Geoffrey L. <GEOFFREY.L.GIL[EMAIL PROTECTED]>
 To: [EMAIL PROTECTED]
 Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>
 Date: 25/10/2002 16:04
 Subject: Very long backup/So many files
 (Please respond to "ADSM: Dist Stor Manager")

 Does anyone think a computer that has this many files on it, that only
 backed up 12,222 files, should take over 10 hours to complete?
 I'm having a
 problem with this node dropping out a couple of times a night for being
 idle
 for more than 60 minutes too.

 We've double checked the NIC and switch port for the proper settings. When
 I
 looked at it last night the CPU wasn't doing anything.

 10/25/02 06:13:04 ANE4952I (Session: 3357, Node: CP-ITS-DCMECPD)  Total number of objects inspected: 4,151,721
 10/25/02 06:13:04 ANE4954I (Session: 3357, Node: CP-ITS-DCMECPD)  Total number of objects backed up:   12,222
 10/25/02 06:13:04 ANE4958I (Session: 3357, Node: CP-ITS-DCMECPD)  Total number of objects updated:  0
 10/25/02 06:13:04 ANE4960I (Session: 3357, Node: CP-ITS-DCMECPD)  Total number of objects rebound:  0
 10/25/02 06:13:04 ANE4957I (Session: 3357, Node: CP-ITS-DCMECPD)  Total number of objects deleted:  0
 10/25/02 06:13:04 ANE4970I (Session: 3357, Node: CP-ITS-DCMECPD)  Total number of objects expired:    386
 10/25/02 06:13:04 ANE4959I (Session: 3357, Node: CP-ITS-DCMECPD)  Total number of objects failed:   0
 10/25/02 06

Re: drm operator scripts

2002-10-25 Thread Mr. Lindsay Morris
You know, it occurs to me that the AutoVault product may do what you want
here;
check the adsm.org banners for their site (maybe www.coderelief.com ?)

-
Mr. Lindsay Morris
Lead Architect, Servergraph
www.servergraph.com
859-253-8000 ofc
425-988-8478 fax


 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:ADSM-L;VM.MARIST.EDU]On Behalf Of
 Matt Simpson
 Sent: Friday, October 25, 2002 12:03 PM
 To: [EMAIL PROTECTED]
 Subject: Re: drm operator scripts


 At 11:32 AM -0400 10/25/02, Nelson, Doug wrote:
 We move tapes from VaultRetrieve to CourierRetrieve and then move
 them (individually) from CourierRetrieve to OnSiteRetrieve once we
 have verified that we have received them. It only takes a minute
 (literally).

 I don't like the process of individually moving them to OnSiteRetrieve,
 with or without an intermediate stop at CourierRetrieve.  Computers
 are supposed to eliminate manual processes.

 I guess I'm spoiled by coming from a world where Tape Management
 Systems remember that a tape exists unless they're specifically told
 that it no longer exists.  TSM should be smart enough to check in a
 tape that's in CourierRetrieve status, and say "Oh, I guess it's
 back now."  If it could do that, then if a tape got moved to
 CourierRetrieve and did not get checked back in,  the fact that it
 was remaining in CourierRetrieve status would be a red flag.  Moving
 them to OnsiteRetrieve, which disappears them, just seems like an
 unnecessary and dangerous step.

 --


 Matt Simpson --  OS/390 Support
 219 McVey Hall  -- (859) 257-2900 x300
 University Of Kentucky, Lexington, KY 40506
 mailto:msimpson;uky.edu
 mainframe --   An obsolete device still used by thousands of obsolete
 companies serving billions of obsolete customers and making huge obsolete
 profits for their obsolete shareholders.  And this year's run
 twice as fast
 as last year's.




Re: Client Statistics In Act Log

2002-10-24 Thread Mr. Lindsay Morris
Take those client statistics with a grain of salt, Rod.
If you can use the accounting log data, that's a lot more reliable.
We have a helper script for you here:
http://www.servergraph.com/techtip3.htm

-
Mr. Lindsay Morris
Lead Architect, Servergraph
www.servergraph.com
859-253-8000 ofc
425-988-8478 fax


 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:ADSM-L;VM.MARIST.EDU]On Behalf Of
 rh
 Sent: Thursday, October 24, 2002 1:40 PM
 To: [EMAIL PROTECTED]
 Subject: Client Statistics In Act Log


 I've noticed that I'm not seeing backup statistics for
 my clients (ie. messages ANE4958I-ANE4964I) recorded
 in the activity log at event completion. Some clients
 seem to do it, others do not. Some of the
 backups/archives are run from the TSM scheduler, some
 are run from an external scheduler, some are run
 manually from the command and GUI interfaces. Should I
 always get these summary messages? I don't have any
 messages disabled for the console. Is there some
 option that controls this logging? Anyone know if
 there is a known problem concerning this? I'd like to
 assure consistency in showing these statistics so they
 can be extracted from the activity log for reporting.
 My server is 4.2.2 and my clients are 4.2.1.0 and
 higher.
 Thanks for your help!

 Rod Hroblak
 ADP

 __
 Do you Yahoo!?
 Y! Web Hosting - Let the expert host your web site
 http://webhosting.yahoo.com/




Re: dismiss dsmadmc header output

2002-10-22 Thread Mr. Lindsay Morris
See http://www.servergraph.com/techtip.shtml
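Two approaches, sketched under assumptions: newer admin clients accept -dataonly=yes (and -commadelimited) to suppress the banner, so check whether your level has them. Failing that, a filter like the one below works; it assumes the useful rows sit between the ANS8000I command echo and the ANS8002I return-code line, as in the transcript quoted below.

```shell
#!/bin/sh
# Keep only the result rows of a dsmadmc session transcript.
# Assumption: the interesting output is bracketed by the ANS8000I
# command echo and the ANS8002I return-code message.
strip_dsmadmc() {
  sed -n '/^ANS8000I/,/^ANS8002I/p' |  # keep the bracketed region
    sed '1d;$d' |                      # drop the bracket lines themselves
    sed '/^[[:space:]]*$/d'            # drop blank lines
}
```

Pipe `dsmadmc -id=... -pa=... "q filespace node_name *"` through strip_dsmadmc, then cut or awk out the first column to get just the filespace names.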

-
Mr. Lindsay Morris
Lead Architect, Servergraph
www.servergraph.com
859-253-8000 ofc
425-988-8478 fax


 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:ADSM-L;VM.MARIST.EDU]On Behalf Of
 Michael Kindermann
 Sent: Tuesday, October 22, 2002 6:33 AM
 To: [EMAIL PROTECTED]
 Subject: dismiss dsmadmc header output


 Hello,
 I found this question once in the list, but didn't find any answer.
 Is there a way, something like a switch or an option, to influence the
 dsmadmc output so that it gives only the interesting result and no overhead?

 I'm trying to script some tasks in a shell script, and I'm a little annoyed,
 because it's not very difficult to get some output from the dsmserver, but it
 is difficult to reuse the information in the script.
 For example:
 I want to remove a node, so I first have to delete the filespaces. I also
 have to delete the association. I am afraid to use wildcards like
 'del filespace node_name *' in a script, so I need the filespace names.
 I run dsmadmc -id=... -pa=... q filespace node_name *, or
 select filespace_name from filespaces.
 All I need is the name, but I get a lot of server information:

 Tivoli Storage Manager
 Command Line Administrative Interface - Version 4, Release 1, Level 2.0
 (C) Copyright IBM Corporation, 1990, 1999, All Rights Reserved.

 Session established with server ADSM: AIX-RS/6000
   Server Version 4, Release 2, Level 2.7
   Server date/time: 10/22/2002 11:34:44  Last access: 10/22/2002 11:26:01

 ANS8000I Server command: 'q node TSTW2K'

 Node Name   Platform   Policy Domain   Days Since    Days Since   Locked?
                        Name            Last Access   Password
                                                      Set
 ---------   --------   -------------   -----------   ----------   -------
 TSTW2K      WinNT      STANDARD        277           278          No

 ANS8002I Highest return code was 0.

 Greetings

 Michael Kindermann
 Wuerzburg / Germany



 --
 +++ GMX - Mail, Messaging  more  http://www.gmx.net +++
 NEU: Mit GMX ins Internet. Rund um die Uhr f|r 1 ct/ Min. surfen!




Re: SMF Records

2002-10-21 Thread Mr. Lindsay Morris
I thought it only wrote SMF records (type 41) for session accounting, that
is, at end-of-node-session.
We have some JCL that should work, perhaps with some tweaking, to get you
those records.

If you DO get some SMF output, would you please send me a few hundred
records of it, so that we can test our convert-to-normal-dsmaccnt.log
program?

Thanks.

-
Mr. Lindsay Morris
Lead Architect, Servergraph
www.servergraph.com
859-253-8000 ofc
425-988-8478 fax

 Here's the JCL - USE AT OWN RISK
===
Creating the Statistics Event Adapter JCL
To create the Statistics Event Adapter JCL, copy and modify the SMF TECAD
JCL in
the sample data set. Save the modified JCL as the TECAD PROC member name in
the
PROCLIB data set. Here is an example of the SMF TECAD JCL:
//TECADZ   PROC CFG=TAMQZCFG MEMBER NAME FOR CONFIG
//*
//* LICENSED MATERIALS - PROPERTY OF IBM
//* 5698-MQS (C) COPYRIGHT TIVOLI SYSTEMS 2001
//* ALL RIGHTS RESERVED.
//*
//* US GOVERNMENT USERS RESTRICTED RIGHTS
//* - USE, DUPLICATION OR DISCLOSURE RESTRICTED BY
//*   GSA ADP SCHEDULE CONTRACT WITH IBM CORPORATION.
//*
//* TIVOLI EVENT ADAPTER FOR MQSERIES STATISTICS AND
//* ACCOUNTING
//*
//TECADAPT EXEC PGM=IHSMTECZ,
// ACCT=ACCT,
// TIME=1439,
// REGION=64M
//*
//STEPLIB  DD DISP=SHR,DSN=hlq.v2r4m0.SIHSMODM
// DD DISP=SHR,DSN=your.le370.SEDCLINK
//SYSUDUMP DD SYSOUT=*
//SYSPRINT DD SYSOUT=*
//SYSTERM  DD SYSOUT=*
//SYSLOG   DD SYSOUT=*
//EVTRACE  DD SYSOUT=*
//TECLOG   DD SYSOUT=*
//SMFDD   DD DISP=SHR,DSN=YOUR.SMF.DATASET
//TECADCFG DD DISP=SHR,DSN=hlq.v2r4m0.SIHSMSAM(CFG.)
//SYSTCPD  DD DISP=SHR,DSN=your.tcp.INIT(TCPDATA)
//*

To modify the SMF TECAD JCL, follow these steps:
  Specify the TCP/IP product in a member of the partitioned data set on the
  SYSTCPD DD statement. For IBM TCP/IP, set the TCP=systemname parameter to
the
  started task name of the IBM TCP/IP product, such as TCPIP. The value of
  systemname is the name of a TCP/IP stack.
  In the STEPLIB statements, replace hlq.v2r4m0.SIHSMODM with the data set
name
  of the load library that Tivoli Manager for MQSeries for OS/390 provides.
  In the TECADCFG statement, replace hlq.v2r4m0.SIHSMSAM with the data set
name
  of the data set that contains the Statistics Event Adapter configuration
file.

  Before you start this job, review the dump, trace, and log settings:
SYSUDUMP
Displays a system-invoked dump
SYSPRINT
Displays a job log of messages
SYSTERM
Compiles a log of runtime messages
EVTRACE
Compiles a log of events for diagnostic purposes when TestMode is set to
YES

TECLOG
Compiles a log of Statistics Event Adapter trace messages


 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:ADSM-L;VM.MARIST.EDU]On Behalf Of
 Joshua Bassi
 Sent: Monday, October 21, 2002 7:10 PM
 To: [EMAIL PROTECTED]
 Subject: SMF Records


 All,

 Does TSM write SMF records during startup on OS/390?  If so, which
 SMF records does it write to?

 --
 Joshua S. Bassi
 IBM Certified - AIX 4/5L, SAN, Shark
 Tivoli Certified Consultant - ADSM/TSM
 eServer Systems Expert -pSeries HACMP
 AIX, HACMP, Storage, TSM Consultant
 Cell (831) 595-3962
 [EMAIL PROTECTED]




Re: Veritas Enterprise Netbackup 4.5

2002-10-18 Thread Mr. Lindsay Morris
1.  Not 100% sure, but Veritas dominates the market because they give it
away, right?
I mean, don't they bundle it with Solaris or some other OS, so when they
count their installed base, they count all the Sun boxes whether people are
using it on those boxes or not?

Hope I'm not just disseminating rumor...

2. No other backup product is as robust and well-designed as TSM.  But the
features and efficient use of resources make it complex, so some people may
prefer simplicity and the fun point-and-click interface that come with
Veritas.

3. On a day-one comparison, Veritas does perform better than TSM, because
they BOTH have to do a full backup on day one.  But after a week or a month
has passed, TSM's efficiencies come to light.

TSM resellers have access to a lot of competitive whitepapers - ask your TSM
reseller to dig into the Tivoli site for you.

-
Mr. Lindsay Morris
Lead Architect, Servergraph
www.servergraph.com
859-253-8000 ofc
425-988-8478 fax


 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:ADSM-L;VM.MARIST.EDU]On Behalf Of
 Kelvin Tan
 Sent: Friday, October 18, 2002 1:29 AM
 To: [EMAIL PROTECTED]
 Subject: Veritas Enterprise Netbackup 4.5


 Hi Gurus

 At the moment we are using TSM 4.2 but the management is looking
 at Veritas Enterprise Netbackup and I have to convince them about
 staying put with TSM.  Has anybody moved from TSM to Veritas
 Enterprise Netbackup 4.5 running on Unix?  Any advantages
 Veritas has over TSM?  If TSM is that good how come it does not
 dominate the backup market?  I have read the report of Veritas vs
 TSM but that is on Intel and related to version 3.

 Hope somebody can enlighten me.

 Thanks.

 Kelvin Tan


 This message and any attachment to it is intended for the use of
 the individual or entity to whom it is addressed by the first
 sender and contains information which may be confidential and/or
 privileged.

 If you receive this message and any attachment in error, please
 delete it immediately and notify the sender by electronic mail or
 telephone (61 2) 9211 0188. Unless you have been expressly
 authorised by the sender, you are prohibited from copying,
 distributing or using the information contained in this message
 and any attachment.

 Tab Limited (ABN 17 081 765 308) is not responsible for any
 changes made to this message or any attachment other than those
 made by Tab Limited, or for the effect of changes made by others
 on the meaning of this message and any attachment.

 Tab Limited does not represent that any attachment is free from
 computer viruses or defects and the user assumes all
 responsibility for any loss, damage or consequence resulting
 directly or indirectly from the use of any attachment.




Re: how many active session?

2002-10-17 Thread Mr. Lindsay Morris
One way to do this is to take periodic query session readings and count
them.
A better way (ours) is to analyze the activity log for session-start and
session-end messages (not as easy as it sounds, but do-able).
This is better because:
1. It doesn't hammer your TSM server with queries
2. It sees all the fine detail of session activity.  When a TSM schedule
fires, you may have 50 nodes in session - but only for a minute.  60 seconds
later 8 of them are done; two minutes later 25 of them are done.  So there's
a spike of activity that you'd miss if you only took a reading once every
five minutes.

What good is this detail?  Well, when you look at a night's worth, you might
see that there's a spike at 8PM, which is mostly done by 8:30.  Then there's
another spike at 10PM.  But from 8:30 to 10:00 PM, nothing much is
happening.  So you can see that it might be smart to move your 10 PM job up
to 8:30, and squeeze your backups into a smaller window.
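A rough, self-contained sketch of the counting idea: replay the session start/end messages in time order and track the running count. (The ANR0406I/ANR0403I session-started/session-ended message numbers are my assumption; check which messages your server level actually logs before relying on them.)

```shell
#!/bin/sh
# Track peak concurrent sessions by replaying start/end messages in order.
# ANR0406I = session started, ANR0403I = session ended (assumed numbers).
peak_sessions() {
  awk '
    /ANR0406I/ { cur++; if (cur > peak) peak = cur }
    /ANR0403I/ { if (cur > 0) cur-- }
    END        { print peak + 0 }
  '
}

# In practice:  dsmadmc -id=... -pa=... "q actlog begind=-1" | peak_sessions
# Canned log lines here just to show the idea:
peak_sessions <<'EOF'
20:00:01 ANR0406I Session 101 started for node A
20:00:02 ANR0406I Session 102 started for node B
20:05:00 ANR0403I Session 101 ended for node A
20:06:00 ANR0406I Session 103 started for node C
EOF
# prints 2
```

A natural extension is to print a timestamped count on every event instead of just the peak, which gives you the spike-and-gap profile described above.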

-
Mr. Lindsay Morris
Lead Architect, Servergraph
www.servergraph.com
859-253-8000 ofc
425-988-8478 fax


 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:ADSM-L;VM.MARIST.EDU]On Behalf Of
 Niklas Lundstrom
 Sent: Thursday, October 17, 2002 2:42 AM
 To: [EMAIL PROTECTED]
 Subject: Re: how many active session?


 Hello

 The TSM Manager can provide you with this and much more. There is a free
 30-day trial version to download at www.tsmmanager.com. I think the
 product is very good.

 Regards
 Niklas

 -Original Message-
 From: MC Matt Cooper (2838) [mailto:Matt.Cooper;AMGREETINGS.COM]
 Sent: den 16 oktober 2002 13:31
 To: [EMAIL PROTECTED]
 Subject: how many active session?


 Does anyone know of a way to track the number of active sessions
 overnight?  I know you can schedule recurring QUERY SESSIONs and then
 count them.  I want to just get the number of active sessions, say, every
 15, 30 or 60 minutes.  I have to find the best place to move some backups,
 but I don't have a way of tracking this.

 Would that TSM MANAGER product provide this?
 Thanks
 Matt




Re: TSM Reporting :: Forward events to Tivoli TEC console

2002-10-16 Thread Mr. Lindsay Morris

Some of our customers point out that there are about 3,000 TSM messages that
you have to decide to ignore, or write a TEC rule for.  Ouch!

You can try to get around that by just using the Severe/Error/Warning
indicators (the letter after ANRnnn) - but if you look, you'll see that a
lot of things marked Warning are really pretty serious - like:
ANR4581W Database backup/restore terminated - internal server error
detected.
ANR1023W Migration process process ID terminated for storage pool storage
pool name - excessive write errors encountered

In fact, most of the daily-job-failed messages are Warnings.
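The curated-list idea can be sketched in a few lines of shell: alert on a hand-picked set of message numbers rather than on the severity letter alone. The list below is illustrative only (the two Warnings named above), not Servergraph's actual list.

```shell
#!/bin/sh
# Alert on a hand-picked list of serious messages, regardless of the
# S/E/W severity letter.  IMPORTANT is an illustrative list, not complete.
IMPORTANT='ANR4581W|ANR1023W'

# In practice, feed "q actlog begind=-1" output in; canned lines here:
grep -E "$IMPORTANT" <<'EOF'
ANR2017I Administrator ADMIN issued command: QUERY DB
ANR4581W Database backup/restore terminated - internal server error detected.
ANR0403I Session 42 ended for node FOO
EOF
```

Only the ANR4581W line survives the filter; anything it prints can be mailed or forwarded to TEC.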

In developing Servergraph, we took a very long hard look at all the messages
and boiled them down to about 100 that really matter. So our TEC-happy
customers use our built-in alerting to feed TEC - makes it easier to set up.

And our NON-TEC-happy customers use our built-in alerting, which does (as
near as I can tell) everything TEC does.

-
Mr. Lindsay Morris
Lead Architect, Servergraph
www.servergraph.com
859-253-8000 ofc
425-988-8478 fax


 -Original Message-
 From: ADSM.ORG [mailto:[EMAIL PROTECTED]]
 Sent: Wednesday, October 16, 2002 2:44 PM
 To: [EMAIL PROTECTED]
 Subject: TSM - Tivoli Storage Manager :: TSM Reporting :: Forward events
 to Tivoli TEC console


 on ADSM.ORG forums
 TSM - Tivoli Storage Manager
 :: TSM Reporting
 ::.. Forward events to Tivoli TEC console

 fccpol wrote at Oct 16, 2002 - 02:43 PM
 -
 Anyone have any experience with forwarding events to TEC? Just
 want to know if there are any issues that I should be aware of
 before installing.
 -

 Reply to this message:
 http://my.adsm.org/modules.php?op=modload&name=phpBB_14&file=index&action=reply&topic=114&forum=11

 Browse thread:
 http://my.adsm.org/modules.php?op=modload&name=phpBB_14&file=index&action=viewtopic&topic=114


 You are receiving this Email because you are subscribed to be
 notified of events in http://my.adsm.org/ forums.




Re: Web errors

2002-10-16 Thread Mr. Lindsay Morris

Many of our customers see them and ignore them.

-
Mr. Lindsay Morris
Lead Architect, Servergraph
859-253-8000 ofc
425-988-8478 fax


 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of
 J D Gable
 Sent: Wednesday, October 16, 2002 2:11 PM
 To: [EMAIL PROTECTED]
 Subject: Web errors


 Hello all,

 Just curious if anyone else has seen the following errors, and if they
 have what they did to resolve them.  We are running 5.1.1.6 on AIX
 4.3.3.  Any assistance would be greatly appreciated.

 Thanks,
 Josh

 ANR4706W Unable to open file WebConsoleBeanInfo.class to satisfy web
 session
 58.
 ANR4706W Unable to open file WebConsole$COMClassObject.class to satisfy
 web
 session 59.
 ANR4706W Unable to open file CommandLine$COMClassObject.class to satisfy
 web
 session 60.





Re: Event Status mail

2002-10-03 Thread Mr. Lindsay Morris

I just have to say in response to all this interest in query events ... it
has problems.

You can have an event that says "Completed", but you didn't get a backup
(the TDP script started, but then failed later).
You can have an event that says "Missed", but you DID get a backup
(operations or the user kicked it off manually).
You can have lots of nodes that don't even HAVE events (they're scheduled by
cron, or Autosys, etc., not by TSM)

So we use the filespace's backup-complete date to see whether all filespaces
are up to date on their backups.
This is harder to do, but it has a VERY USEFUL side effect: it finds wasted
space.

If you look at all your backup-complete dates, you'll find some filespaces
that are VERY OLD, i.e., haven't been backed up in a year.
These are usually junk - like, the Oracle database you moved from node A to
node B a year ago.

YOU deleted the 200 GB of old filespace on node A...
...but TSM never did, and it's still sitting on the last version of that
200GB worth of files.

We had one customer find almost FOUR TERABYTES of wasted space in their
overloaded library, that they didn't know they were sitting on.
I bet other people have this situation too.   If you're running out of space
in your library, take a look at this possibility.
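A sketch of the check, assuming you have dumped `select node_name, filespace_name, backup_end from filespaces` to whitespace-separated columns with an ISO-style date in the third field (the column layout will vary with your client settings, so adjust the field numbers):

```shell
#!/bin/sh
# Print filespaces whose last completed backup predates a cutoff date.
# ISO dates (YYYY-MM-DD) compare correctly as plain strings in awk.
stale_filespaces() {
  awk -v cutoff="$1" '$3 != "" && $3 < cutoff { print $1, $2, $3 }'
}

# Canned rows standing in for the select output:
stale_filespaces 2002-01-01 <<'EOF'
NODEA /oracle 2001-10-02
NODEA /home   2002-10-20
NODEB /data   2000-05-15
EOF
```

Each line it prints is a candidate for the "moved a year ago, never deleted on the server" situation described above, and worth a look before you buy more tape.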


Mr. Lindsay Morris
Lead Architect, Servergraph
www.servergraph.com
859-253-8000 ofc
425-988-8478 fax



Re: Servergraph TSM on solaris-2.8

2002-09-24 Thread Mr. Lindsay Morris

For the record - we're innocent!
dsmadmc -console sometimes crashes this release of TSM on Solaris.
We are unrepentant in our use of dsmadmc -console.
We have Servergraph installed at maybe 100 other sites, and have never seen
this happen before.

TSM says it's a known problem: I quote from Tivoli support:
 It appears that one of these console sessions was starting, but
encountered an error
 causing it to be closed. Somehow when we attempt to close the session
 during this sequence of events, an invalid memory reference is made
 causing the server to core dump.

 This appears to be a defect in the TSM Server, and I will be pursuing
 this with development.

Just for the record.
-
Mr. Lindsay Morris
Lead Architect, Servergraph
859-253-8000 ofc
425-988-8478 fax


 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of
 Chetan H. Ravnikar
 Sent: Tuesday, September 24, 2002 1:01 PM
 To: [EMAIL PROTECTED]
 Subject: Servergraph  TSM on solaris-2.8


 Hi all,

 just an Fyi.

 Not sure if anyone else has experienced this. We have TSM server 4.2.2.0
 running on sol-2.8, and we were trying to eval Servergraph.

 the servergraph process started killing the TSM server leading to core
 dumps! we have a PMR open with Tivoli and have updated servergraph with
 the issue as well.

 Apparently IBM said that this is a bug in 4.2.2.0: contacting the server
 with multiple *dsmadmc -console* processes has led it to go
 down.

 Both the parties seem to be looking into this

 -C




Re: how can i clear up the content of a diskpool?

2002-09-20 Thread Mr. Lindsay Morris

Yes, but setting hi=0 causes migration to stop and start repeatedly if a
user does a manual backup (say).
We do
update stgpool stgpool_name hi=100 lo=0
update stgpool stgpool_name hi=5 lo=0
setting it to 100 first, then down to 5, forces migration more reliably in
some cases.

Or you could set hi=0 momentarily, then hi=90 - migration will be forced by
the first one and continue even though you immediately set hi=90.
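The two-step threshold change can be scripted macro-style: generate the commands and feed them to the admin client. A minimal sketch (the pool name is a placeholder, and piping into dsmadmc is one way to run the generated commands; verify the invocation against your client level):

```shell
#!/bin/sh
# Emit the two-step threshold change that reliably kicks off migration.
kick_migration() {
  pool="$1"
  printf 'update stgpool %s hi=100 lo=0\n' "$pool"
  printf 'update stgpool %s hi=5 lo=0\n' "$pool"
}

# e.g.:  kick_migration DISKPOOL | dsmadmc -id=... -pa=... -itemcommit
kick_migration DISKPOOL
```

Remember to put the thresholds back afterwards, or the pool will sit at hi=5 and migrate on every trickle of new data.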

-
Mr. Lindsay Morris
Lead Architect, Servergraph
859-253-8000 ofc
425-988-8478 fax


 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of
 Seay, Paul
 Sent: Friday, September 20, 2002 10:07 AM
 To: [EMAIL PROTECTED]
 Subject: Re: how can i clear up the content of a diskpool?


 Update stgpool stgpool_name hi=0 lo=0

 This will get the whole pool.

 Migration will automatically start and empty the volume.

 A move data command will also do it if you just want the data
 from that one
 volume.

 Move data volume_name

 Yes, it works against primary storage pool volumes.  But unless
 you set the
 volume to readonly it will start filling back up again.

 Paul D. Seay, Jr.
 Technical Specialist
 Naptheon Inc.
 757-688-8180


 -Original Message-
 From: fenglimian [mailto:[EMAIL PROTECTED]]
 Sent: Thursday, September 19, 2002 11:54 PM
 To: [EMAIL PROTECTED]
 Subject: Re: how can i clear up the content of a diskpool?


 Migrate it to zero,
 because I don't want to delete the diskpool volume, but the content in the
 diskpool is useless. Thanks.


 When you say clear it up, AUDIT it or Migrate it to Zero?
 
 Paul D. Seay, Jr.
 Technical Specialist
 Naptheon Inc.
 757-688-8180
 
 
 -Original Message-
 From: fenglimian [mailto:[EMAIL PROTECTED]]
 Sent: Thursday, September 19, 2002 10:24 PM
 To: [EMAIL PROTECTED]
 Subject: how can i clear up the content of a diskpool?
 
 
 what is the operation?
 thanks




Re: Backup reporting: SUMMARY TABLE ISSUE IS FOUND

2002-09-20 Thread Mr. Lindsay Morris

For the record, Servergraph/TSM doesn't use the summary table, so problems
there won't make it give incorrect results.
It uses the accounting logs, which have always been reliable.

-
Mr. Lindsay Morris
Lead Architect, Servergraph
859-253-8000 ofc
425-988-8478 fax


 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of
 Bill Boyer
 Sent: Friday, September 20, 2002 10:17 AM
 To: [EMAIL PROTECTED]
 Subject: Re: Backup reporting: SUMMARY TABLE ISSUE IS FOUND


 Yeah, but this used to work just fine. If we're assuming that the
 problem is
 that the multiple backup sessions (RESOURCEUTILIZATION) is causing the
 error, then why did it work after this came out in the V3.7? It
 only broke
 sometime in the late 4.1/4.2 and 5.1 versions of the server. If it used to
  work, and now doesn't, why is it so hard to fix it?  So many of us
  developed scripts/code (and, in the case of Lindsay Morris, commercial
  software, Servergraph/TSM) that is now not working, giving incorrect results,
  and making us look bad in the eyes of our management.

 Bill

 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of
 Joel Fuhrman
 Sent: Friday, September 20, 2002 5:13 AM
 To: [EMAIL PROTECTED]
 Subject: Re: Backup reporting: SUMMARY TABLE ISSUE IS FOUND


 To me it seems as though the worker sessions should be informing
 the control
 session as to the amount of work they have done.  Each time the worker
 reports in, the control session would send a heart beat to the server thus
 avoiding this annoying timeout.

 On Wed, 18 Sep 2002, Seay, Paul wrote:

  The problem is the summary table is broken right now.  I just
 got off the
  phone with Level 2.  We think we have identified the cause of
 the problem.
  As it turns out the SUMMARY table records are written by the control
  session, not the actual backup sessions.  What is happening is
 the control
  sessions are timing out and a new control session is being
 formed to send
  the statistics information.  If you look at the start and end timestamps
  with the following select and then look at the activity log you will see
  what I am talking about.  This select lists all sessions that
 have a byte
  count of zero.  You will notice the start and end timestamps are usually
 the
  same or only a few seconds apart.  The reason is the wrong numbers are
 being
  recorded.
 
  select entity, number, start_time, end_time from summary where bytes=0 and
  start_time > current_timestamp - 5 days and activity = 'BACKUP'
  order by 1,3
 
  The issue is I can envision this is going to be very difficult to fix in
 the
  current design.
 
  It will be interesting.
 
 
 
  Paul D. Seay, Jr.
  Technical Specialist
  Naptheon Inc.
  757-688-8180
 
 
  -Original Message-
  From: Mark Bertrand [mailto:[EMAIL PROTECTED]]
  Sent: Wednesday, September 18, 2002 2:39 PM
  To: [EMAIL PROTECTED]
  Subject: Re: Backup reporting
 
 
  Paul and all,
 
  When I attempt to use any of the following select statements my
 Total MB
  returned is always 0. I get my list of nodes but there is never any
 numbers
  for size.
 
  Since this is my first attempt at select statements, I am sure I'm doing
  something wrong. I have tried from the command line and through macros.
 
  I am trying this on a W2K TSM v4.2.2 server.
 
  Thanks,
  Mark B.
 
  -Original Message-
  From: Seay, Paul [mailto:[EMAIL PROTECTED]]
  Sent: Monday, September 16, 2002 11:43 PM
  To: [EMAIL PROTECTED]
  Subject: Re: Backup reporting
 
 
  See if these will help:
 
  /* SQL Script:   */
  /*   */
  /* backup_volume_last_24hours.sql*/
  /* Date   Description*/
  /* 2002-06-10 PDS Created*/
 
  /* Create Report of total MBs per each session */
 
  select entity as "Node Name", cast(bytes/1024/1024 as decimal(10,3))
  as "Total MB", cast(substr(cast(end_time-start_time as char(17)),3,8) as
  char(8)) as "Elapsed", substr(cast(start_time as char(26)),1,19) as
  "Date/Time", case when cast((end_time-start_time) seconds as
  decimal) > 0 then cast(bytes/cast((end_time-start_time) seconds as
  decimal)/1024/1024 as decimal(6,3)) else cast(0 as decimal(6,3)) end as
  "MB/Sec" from summary where start_time >= current_timestamp - 1 day and
  activity='BACKUP'
 
  /* Create Report of total MBs and length of backup for each node */
 
  select entity as "Node Name", cast(sum(bytes/1024/1024) as
  decimal(10,3)) as "Total MB", substr(cast(min(start_time) as
  char(26)),1,19) as "Date/Time",
  cast(substr(cast(max(end_time)-min(start_time) as char(20)),3,8) as
  char(8)) as "Length" from summary where start_time >= current_timestamp -
  22 hours and activity='BACKUP' group

Re: SQL to find # of mounts for tapes?

2002-09-20 Thread Mr. Lindsay Morris

We do something similar, but we analyze all of the 50-or-so messages that
pertain to drive mounts, and come up with the percent of time mounted during
the day.  That feeds into capacity planning - you can predict, with a week
or two of history, when your library is going to be 100% utilized.


-
Mr. Lindsay Morris
Lead Architect, Servergraph/TSM
www.servergraph.com
859-253-8000 ofc
425-988-8478 fax


 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of
 Mahesh Tailor
 Sent: Friday, September 20, 2002 2:11 PM
 To: [EMAIL PROTECTED]
 Subject: Re: SQL to find # of mounts for tapes?


 Here's what I do from a ksh script . . .

  /usr/bin/dsmadmc -id=myuid -pass=mypasswd "q actlog msgno=8337 begint=00:00 begind=-1 endd=-1 endt=23:59" | \
     /usr/bin/grep -c ANR8337

 This gives me a count of the number of tape mounts yesterday.  I
 execute this script, log the output to a file, and then generate a graph
 using gnuplot to show the drive mounts over time.

 Mahesh

  [EMAIL PROTECTED] 09/20/02 01:36PM 
 The VOLUMES table has the information so long as the tape does not go
 scratch.  Once it goes scratch and reenters, the counters are all reset.
 Do this select to see what is there.  The accounting log may have
 information that is permanently kept, but you will have to write your own
 code to process it.

 SELECT * from volumes where volume_name='vv'

 Paul D. Seay, Jr.
 Technical Specialist
 Naptheon Inc.
 757-688-8180


 -Original Message-
 From: Walker, Thomas [mailto:[EMAIL PROTECTED]]
 Sent: Friday, September 20, 2002 12:31 PM
 To: [EMAIL PROTECTED]
 Subject: SQL to find # of mounts for tapes?


 Hi there,

 I need the gurus help on this. I know my 3494 library keeps track of
 how
 many mounts each 3590 tape endures, but I was wondering if there was a
 way
 through tsm to find that out as well, most likely I assume through
 TSM/SQL.
 I'm looking for:

 Number of mounts for each tape
  Number of writing sessions to each tape (i.e. a backup or archive session,
  or reclamation write)
  Number of reading sessions from each tape (i.e. a restore session or a
  reclamation read)


 Any help would be greatly appreciated!

 I'm currently running Server Version  4.1.5 on AIX 4.3.3 however I will
 soon
 be migrating to 5.1.x


 Thanks in advance,


 Thomas Walker
 212-408-8311
 Unix Admin/TSM Admin
 EMI Recorded Music, NA

 This e-mail including any attachments is confidential and may be
 legally
 privileged. If you have received it in error please advise the sender
 immediately by return email and then delete it from your system. The
 unauthorized use, distribution, copying or alteration of this email is
 strictly forbidden.

 This email is from a unit or subsidiary of EMI Recorded Music, North
 America




Re: 4.2.x selects from DRIVE table

2002-09-19 Thread Mr. Lindsay Morris

We saw the same thing.  You have to select device, device_type, ..., not
just select device.
It has to do with some database changes made in TSM - we're not sure what.

-
Mr. Lindsay Morris
Lead Architect, Servergraph
859-253-8000 ofc
425-988-8478 fax


 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of
 Jolliff, Dale
 Sent: Thursday, September 19, 2002 9:14 AM
 To: [EMAIL PROTECTED]
 Subject: 4.2.x selects from DRIVE table


 Something odd I noticed this morning about 4.2.x servers on AIX
 and Solaris
 -

 query drive f=d
 and
 select * from drives

 Displays values in the Device field, whereas a

 select device from drives

 displays a blank device field.

 This holds for 4.2.1.15 on Solaris and 4.2.2.{patch of the day} on AIX.

 I went back to an older server (3.x) on AIX, and a select device from
 drives  shows the devices, as I expected.

 Anyone else able to reproduce this?  Is this WAD or a new 'feature'?




Re: How can I tell when an archive was performed?

2002-09-19 Thread Mr. Lindsay Morris

If you have the accounting log (...server/bin/dsmaccnt.log) reaching that
far back (and it does not prune itself automatically, so you probably do, if
you have accounting turned on at all),
then you can awk your way through that looking for the KB-archived field
being non-zero.

Or maybe, from the client, you could just say q arch...?
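A minimal sketch of the awk approach, assuming a comma-separated dsmaccnt.log and an invented field position (18) for the archived-KB counter — verify the position against your server's accounting-record documentation before relying on it:

```shell
# Two invented accounting records; $ARCH_FIELD is an ASSUMED field position
# for "KB archived" -- check your server's accounting-record layout.
ARCH_FIELD=18
cat > /tmp/dsmaccnt.sample <<'EOF'
4,0,ADSM,09/19/2002,22:10:05,NODEA,,1,5,0,0,0,0,0,0,0,0,120,...
4,0,ADSM,09/19/2002,23:02:17,NODEB,,1,5,0,0,0,0,0,0,0,0,0,...
EOF
# Print date, time and node for every session that archived data.
awk -F, -v f="$ARCH_FIELD" '$f > 0 {print $4, $5, $6}' /tmp/dsmaccnt.sample
```

Here only NODEA's session (archived KB = 120) is printed; NODEB's record has a zero in the assumed field and is skipped.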

-
Mr. Lindsay Morris
Lead Architect, Servergraph
859-253-8000 ofc
425-988-8478 fax


 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of
 Michael Moore
 Sent: Thursday, September 19, 2002 1:06 PM
 To: [EMAIL PROTECTED]
 Subject: How can I tell when an archive was performed?


 We do little to no archiving, we tested with it in the past, but our
 infrastructure cannot support full archives on the nodes required.  But,
 someone has done some archiving, other that what we tested with.

 I cannot tell by looking at the filespace, due to the fact that
 the node is
 backed up daily.

 How can I tell when this took place?

 Thanks!!

 Michael Moore
 VF Services Inc.
 121 Smith Street
 Greensboro,  NC  27420-1488

 Voice: 336-332-4423
 Fax: 336-332-4544




Re: Backup reporting

2002-09-18 Thread Mr. Lindsay Morris

In developing Servergraph/TSM, we've tried various techniques to see bytes
transferred.
The summary table has been unreliable;
The ANE49xx messages from the client that appear in the activity log have
been unreliable;
but the accounting log HAS been reliable.

One gotcha is that the total bytes field apparently includes overhead.
That is to say, there are six bytes-transmitted fields in the accounting log
record:
KB backed-up
KB restored
KB archived
KB retrieved
KB HSM-migrated
KB HSM-recalled
and then there's a seventh field, total KB transferred.

The seventh total-bytes field is always MORE than the sum of the first six.
What's the difference?

I'd have to ask Andy to be sure, but we think it's
the TSM server sending the metadata down to the client before backup starts,
and/or
files being retransmitted because the client detected that they changed
during backup.

Our current graphs show this quantity over time, averaged across all nodes.
The overhead typically runs 10-40%.
Does this really mean that there's 10-40% wasted network traffic?
And could this go down (making a site more efficient) if the site used
journal-based backup?
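The sum-versus-total comparison can be sketched like this, on a made-up record with invented field positions (fields 2-7 as the six per-operation KB counters, field 8 as total KB transferred):

```shell
# Made-up record: 1000 KB backed up, 250 KB archived, 1400 KB total transferred.
# Field positions are ASSUMPTIONS for illustration, not the real record layout.
cat > /tmp/acct.sample <<'EOF'
...,1000,0,250,0,0,0,1400,...
EOF
awk -F, '{
  sum = $2 + $3 + $4 + $5 + $6 + $7    # six per-operation KB fields (assumed)
  total = $8                           # total KB transferred (assumed)
  printf "overhead: %d KB (%.1f%%)\n", total - sum, 100 * (total - sum) / sum
}' /tmp/acct.sample                    # -> overhead: 150 KB (12.0%)
```

Tracking that percentage per node over time is exactly the quantity discussed above.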

-
Mr. Lindsay Morris
Lead Architect, Servergraph
859-253-8000 ofc
425-988-8478 fax


 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of
 Bill Boyer
 Sent: Wednesday, September 18, 2002 3:10 PM
 To: [EMAIL PROTECTED]
 Subject: Re: Backup reporting


 There was a problem where the bytes transferred in the summary table as
 zero. It has been fixed in later patch levels. I'm not sure what the APAR
 number is or the level where it was fixed.

 If you need this data, turn on the accounting records. There is an
 additional field, "Amount of backup files, in kilobytes, sent by the client
 to the server", in addition to the "Amount of data, in kilobytes,
 communicated between the client node and the server during the session". The
 bytes communicated is the total bytes transferred and includes any
 re-transmissions/retries. I believe the "Amount of backup files, in
 kilobytes, sent by the client to the server" is just what was sent AND
 stored in TSM.

 I haven't fully looked into this, but if I'm trying to get a total for the
 amount of data backed up I would be using this field as opposed
 to the bytes
 transmitted field. Something for me to add to my Honey-Do list..:-)

 Bill Boyer
 DSS, Inc.

 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of
 Mark Bertrand
 Sent: Wednesday, September 18, 2002 2:39 PM
 To: [EMAIL PROTECTED]
 Subject: Re: Backup reporting


 Paul and all,

 When I attempt to use any of the following select statements, my "Total MB"
 returned is always 0. I get my list of nodes, but there are never
 any numbers
 for size.

 Since this is my first attempt at select statements, I am sure I'm doing
 something wrong. I have tried from the command line and through macros.

 I am trying this on a W2K TSM v4.2.2 server.

 Thanks,
 Mark B.

 -Original Message-
 From: Seay, Paul [mailto:[EMAIL PROTECTED]]
 Sent: Monday, September 16, 2002 11:43 PM
 To: [EMAIL PROTECTED]
 Subject: Re: Backup reporting


 See if these will help:

 /* SQL Script:   */
 /*   */
 /* backup_volume_last_24hours.sql*/
 /* Date   Description*/
 /* 2002-06-10 PDS Created*/

 /* Create Report of total MBs per each session */

 select entity as "Node Name", cast(bytes/1024/1024 as decimal(10,3))
 as "Total MB", cast(substr(cast(end_time-start_time as char(17)),3,8) as
 char(8)) as "Elapsed", substr(cast(start_time as char(26)),1,19) as
 "Date/Time", case when cast((end_time-start_time) seconds as
 decimal) > 0 then cast(bytes/cast((end_time-start_time) seconds as
 decimal)/1024/1024 as decimal(6,3)) else cast(0 as decimal(6,3)) end as
 "MB/Sec" from summary where start_time>=current_timestamp - 1 day and
 activity='BACKUP'

 /* Create Report of total MBs and length of backup for each node */

 select entity as "Node Name", cast(sum(bytes/1024/1024) as
 decimal(10,3)) as "Total MB", substr(cast(min(start_time) as
 char(26)),1,19) as "Date/Time",
 cast(substr(cast(max(end_time)-min(start_time) as char(20)),3,8) as
 char(8)) as "Length" from summary where start_time>=current_timestamp -
 22 hours and activity='BACKUP' group by entity

 /* Create Report of total backed up*/

 select sum(cast(bytes/1024/1024/1024 as decimal(6,3))) as "Total GB Backup"
 from summary where start_time>=current_timestamp - 1 day and
 activity='BACKUP'




Re: Q OCC

2002-09-09 Thread Mr. Lindsay Morris

And the next step is: you can squeeze out the empty space within
aggregates by using reclamation or (at least on recent versions of TSM)
MOVE DATA.


-
Mr. Lindsay Morris
CEO, Servergraph
www.servergraph.com
859-253-8000 ofc
425-988-8478 fax


 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of
 Cook, Dwight E
 Sent: Monday, September 09, 2002 10:07 AM
 To: [EMAIL PROTECTED]
 Subject: Re: Q OCC


 You have to go down a ways in the help to find this but here it is...

 Physical Space Occupied (MB)
  The amount of physical space occupied by the file space.
 Physical space
  includes empty space within aggregate files, from which
 files may have
  been deleted or expired.

 Logical Space Occupied (MB)
  The amount of space occupied by logical files in the file space.
  Logical space is the space actually used to store files, excluding
  empty space within aggregates.

 Dwight E. Cook
 Software Application Engineer III
 Science Applications International Corporation
 509 S. Boston Ave.  Suite 220
 Tulsa, Oklahoma 74103-4606
 Office (918) 732-7109



 -Original Message-
 From: David E Ehresman [mailto:[EMAIL PROTECTED]]
 Sent: Monday, September 09, 2002 8:33 AM
 To: [EMAIL PROTECTED]
 Subject: Q OCC


 On the output from a Query Occ command, what is the difference between
 the Physical Space Occupied and the Logical Space Occupied figures?

 David Ehresman




Re: 3494 Utilization

2002-09-09 Thread Mr. Lindsay Morris

TSMManager doesn't do predictions of when you're going to run out of space.
Servergraph/TSM is better for serious capacity planning.

Should some impartial third party review the various TSM monitoring products
and send the list a write-up?
Paul Seay? Dwight Cook? Wanda Prather? Somebody?

We, of course, have detailed feature-by-feature comparisons, but can't
really claim to be impartial  ;-}
-
Mr. Lindsay Morris
CEO, Servergraph
www.servergraph.com
859-253-8000 ofc
425-988-8478 fax


 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of
 Cahill, Ricky
 Sent: Monday, September 09, 2002 10:01 AM
 To: [EMAIL PROTECTED]
 Subject: Re: 3494 Utilization


 Take a look at www.tsmmanager.com this will do exactly what you
 want and in
 a graphical format, you can download it and use the evaluation license to
 get the info you need. To be honest after now using this for a couple of
 months I can't see how anyone could do without it. It's especially good at
 doing nice pretty reports for the management to get them off your back and
 give them more paper to pass around in meetings ;)

.Rikk

 -Original Message-
 From: Jolliff, Dale [mailto:[EMAIL PROTECTED]]
 Sent: 09 September 2002 13:44
 To: [EMAIL PROTECTED]
 Subject: Re: 3494 Utilization


 In the long run, we are attempting to quantify exactly how busy our
 ATLs/drives are over time for a number of reasons -- capacity planning,
 adjusting schedules to better utilize resources, and possibly even justify
 the purchase of new tape drives.

 At this point I have been asked to simply come up with a minutes or hours
 per 24 hour period any particular drive is in use.

 A query mount every minute might work, but it just isn't a good solution
 for two reasons -- for clients writing directly to tape, the mounted tape
 won't show up in query mount, and most of these servers already have an
 extensive number of scripts accessing them periodically for various
 monitoring and reporting functions - I hesitate to add any more to them.

 My last resort is going to be to extract the activity log once every 24
 hours and examine the logs and match the mount/dismounts by drive and
 attempt to calculate usage that way if there isn't something better.  With
 the difficulty in matching mounts to dismounts, I'm not entirely convinced
 it's worth the trouble.


 -Original Message-
 From: Mr. Lindsay Morris [mailto:[EMAIL PROTECTED]]
 Sent: Friday, September 06, 2002 4:55 PM
 To: [EMAIL PROTECTED]
 Subject: Re: 3494 Utilization


 If you use the atape device driver, you (supposedly) can turn on logging
 within it.  Then every dismount writes a record of how many bytes were
 read/written during that mount.
 Never tried it ... if you can get it working, let me know how,
 please! We'd
 love to be able to do that.

 Right now we CAN show you library-as-a-whole data rates, just by layering
 all the tape-drive-writing tasks (migration, backup stgpool,
 backup DB, etc)
 one atop the other minute by minute.  Maybe that's enough - why
 do you need
 drive-by-drive data rates?

 -
 Mr. Lindsay Morris
 CEO, Servergraph
 www.servergraph.com
 859-253-8000 ofc
 425-988-8478 fax


  -Original Message-
  From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of
  Jolliff, Dale
  Sent: Friday, September 06, 2002 2:16 PM
  To: [EMAIL PROTECTED]
  Subject: Re: 3494 Utilization
 
 
  Paul said that Servergraph has this functionality - According to our
  hardware guys, the 3494 library has some rudimentary mount statistics
  available.
 
  I'm going to be looking into both of those options.
 
  Surely someone has already invented this wheel when trying to
 justify more
  tape drives - other than pointing to the smoke coming from the
 drives and
  suggesting that they are slightly overused
 
 
 
 
 
  -Original Message-
  From: Cook, Dwight E [mailto:[EMAIL PROTECTED]]
  Sent: Friday, September 06, 2002 6:45 AM
  To: [EMAIL PROTECTED]
  Subject: Re: 3494 Utilization
 
 
  Good question, never actually thought about it...
  I would think that the sum of the differences between mount and
  dismount times
  for each drive...
  OH THANKS. now I won't be able to sleep until I code some select
  statement to do this :-(
  if I figure it out, I'll pass it along
 
  Dwight
 
 
 
  -Original Message-
  From: Jolliff, Dale [mailto:[EMAIL PROTECTED]]
  Sent: Thursday, September 05, 2002 10:04 AM
  To: [EMAIL PROTECTED]
  Subject: 3494 Utilization
 
 
  I saw this topic out on ADSM, and I could not locate any type of
  functional
  resolution ...
 
  What is everyone using to calculate the wall time of your tape drive
  utilization?
 


 **
 **
 Equitas Limited, 33 St Mary Axe, London EC3A 8LL, UK
 NOTICE: This message is intended only for use

Re: 3494 Utilization, and other Reporting Concerns

2002-09-09 Thread Mr. Lindsay Morris

Some people on the list DO object to vendor self-interested posts - so there
is an advertising-related place at www.adsm.org. Refresh it a few times (5?)
and you should see all the banner ads.

It's a dilemma for us, certainly - we're big fans of TSM, and want to
support the TSM community with our reporting product.  But NOBODY wants to
see this excellent list degenerate into nothing-but-ads.

So we sometimes offer tips that we've learned while building Servergraph/TSM
(like, the summary table is unreliable and not real-time), but try to avoid
blatant self-promotion.  If we see that other vendors ARE using the list
inappropriately... well, we just hope people will do enough research to find
the products that work best.

-
Mr. Lindsay Morris
CEO, Servergraph
www.servergraph.com
859-253-8000 ofc
425-988-8478 fax


 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of
 Richard Cowen
 Sent: Monday, September 09, 2002 10:52 AM
 To: [EMAIL PROTECTED]
 Subject: Re: 3494 Utilization, and other Reporting Concerns


 If the List doesn't mind occasional vendor self-interested posts, I would
 like to offer one for CNT's TSM Reporting Service.  This is just
 one aspect
 of a more encompassing offering of TSM consulting and implementation
 services, but it may be germane to this thread.  If we get a volunteer
 reviewer, I would be happy to send sample HTML's



 -Original Message-
 From: Mr. Lindsay Morris [mailto:[EMAIL PROTECTED]]
 Sent: Monday, September 09, 2002 10:45 AM
 To: [EMAIL PROTECTED]
 Subject: Re: 3494 Utilization


 TSMManager doesn't do predictions of when you're going to run out
 of space.
 Servergraph/TSM is better for serious capacity planning.

 Should some impartial third party review the various TSM
 monitoring products
 and send the list a write-up?
 Paul Seay? Dwight Cook? Wanda Prather? Somebody?

 We, of course, have detailed feature-by-feature comparisons, but can't
 really claim to be impartial  ;-}
 -
 Mr. Lindsay Morris
 CEO, Servergraph
 www.servergraph.com
 859-253-8000 ofc
 425-988-8478 fax


  -Original Message-
  From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of
  Cahill, Ricky
  Sent: Monday, September 09, 2002 10:01 AM
  To: [EMAIL PROTECTED]
  Subject: Re: 3494 Utilization
 
 
  Take a look at www.tsmmanager.com this will do exactly what you
  want and in
  a graphical format, you can download it and use the evaluation
 license to
  get the info you need. To be honest after now using this for a couple of
  months I can't see how anyone could do without it. It's
 especially good at
  doing nice pretty reports for the management to get them off
 your back and
  give them more paper to pass around in meetings ;)
 
 .Rikk
 
  -Original Message-
  From: Jolliff, Dale [mailto:[EMAIL PROTECTED]]
  Sent: 09 September 2002 13:44
  To: [EMAIL PROTECTED]
  Subject: Re: 3494 Utilization
 
 
  In the long run, we are attempting to quantify exactly how busy our
  ATLs/drives are over time for a number of reasons -- capacity planning,
  adjusting schedules to better utilize resources, and possibly
 even justify
  the purchase of new tape drives.
 
  At this point I have been asked to simply come up with a
 minutes or hours
  per 24 hour period any particular drive is in use.
 
  A query mount every minute might work, but it just isn't a
 good solution
  for two reasons -- for clients writing directly to tape, the
 mounted tape
  won't show up in query mount, and most of these servers
 already have an
  extensive number of scripts accessing them periodically for various
  monitoring and reporting functions - I hesitate to add any more to them.
 
  My last resort is going to be to extract the activity log once every 24
  hours and examine the logs and match the mount/dismounts by drive and
  attempt to calculate usage that way if there isn't something
 better.  With
  the difficulty in matching mounts to dismounts, I'm not
 entirely convinced
  it's worth the trouble.
 
 
  -Original Message-
  From: Mr. Lindsay Morris [mailto:[EMAIL PROTECTED]]
  Sent: Friday, September 06, 2002 4:55 PM
  To: [EMAIL PROTECTED]
  Subject: Re: 3494 Utilization
 
 
  If you use the atape device driver, you (supposedly) can turn on logging
  within it.  Then every dismount writes a record of how many bytes were
  read/written during that mount.
  Never tried it ... if you can get it working, let me know how,
  please! We'd
  love to be able to do that.
 
  Right now we CAN show you library-as-a-whole data rates, just
 by layering
  all the tape-drive-writing tasks (migration, backup stgpool,
  backup DB, etc)
  one atop the other minute by minute.  Maybe that's enough - why
  do you need
  drive-by-drive data rates?
 
  -
  Mr. Lindsay Morris
  CEO, Servergraph
  www.servergraph.com
  859-253-8000 ofc

Re: BareMetalRestore

2002-09-09 Thread Mr. Lindsay Morris

AIX has NIM: Network Install Manager, which lets you keep bootable images on
a remote network drive.
Other OSes may have similar functionality (does HP Ignite let you keep the
bootable image on a remote network drive?)

So, make that bootable image monthly (say), then back up that remote network
drive with TSM, and you have bootable images within TSM.  TSM cannot
directly manage tape volumes written by mksysb or Ignite, etc. - a shame,
because TSM does the off-site vaulting, etc.  But TSM *CAN* manage mksysb
images built like this.

I think some people are using this technique today with good success.  It's
simple enough - you should be able to automate it without too much effort.
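A sketch of the monthly cycle (paths and the NFS mount point are illustrative; mksysb is AIX-specific and dsmc is the TSM backup-archive client CLI, so this is not runnable everywhere):

```shell
# Illustrative paths; assumes an NFS share mounted at /nfs/bootimages and a
# TSM client configured on the machine that owns the share.
mksysb -i /nfs/bootimages/$(hostname).mksysb   # AIX: build the bootable image
dsmc incremental /nfs/bootimages/              # sweep the images into TSM
```

Run from cron once a month, this keeps a TSM-managed, versioned history of boot images.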

-
Mr. Lindsay Morris
CEO, Servergraph
www.servergraph.com
859-253-8000 ofc
425-988-8478 fax


 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of
 McDonald, Rick LDB:EX
 Sent: Monday, September 09, 2002 11:54 AM
 To: [EMAIL PROTECTED]
 Subject: FW: BareMetalRestore


 Ignite has the capability to do this, if you have more than 1
 HPUX machine.
 A Golden Image can be created and stored on an Ignite server.
 This server
 can then be used to push the image to the target machine.  Alternatively,
 the target machine can be booted using a network reboot(Interrupt the boot
 process, and choose the network option for the boot source.)
 Then from the
 server machine complete the install using the Ignite GUI.


 -Original Message-
 From: Don France [mailto:[EMAIL PROTECTED]]
 Sent: Saturday, September 07, 2002 9:51 AM
 To: [EMAIL PROTECTED]
 Subject: Re: BareMetalRestore


 If you can figure a way to put Ignite tape to a file (Hmm, re-directed
 output?), then figure out how to move that file to 4mm/8mm tape
 when/if you
 need to recover the HP box, the daily backups could be swept during normal
 incrementals -- We use a central NFS server for all the
 mksysb/Ignite/jumpstart images, then when a server needs
 recovery, the most
 recent version is still on disk -- just gotta solve the reverse of the
 re-direct to put the image back on a tape for the HP system boot.
 Similar approach for Win2K...  run NTbackup for System State to a file,
 backup the file (weekly mgmt class used, to avoid doing 250 MB for every
 server every day).

 Don France
 Technical Architect -- Tivoli Certified Consultant
 Tivoli Storage Manager, WinNT/2K, AIX/Unix, OS/390
 San Jose, Ca
 (408) 257-3037
 mailto:[EMAIL PROTECTED]

 Professional Association of Contract Employees
 (P.A.C.E. -- www.pacepros.com)



 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of
 Robin Sharpe
 Sent: Friday, September 06, 2002 12:58 PM
 To: [EMAIL PROTECTED]
 Subject: Re: BareMetalRestore


 I'd be happy if TSM provided a means of managing the media
 containing my HP
 Ignite-UX images...  currently, that has to be done manually.

 Robin Sharpe
 Berlex Labs




Re: re-evaluate tsm

2002-09-06 Thread Mr. Lindsay Morris

Michelle, you can get snapshots of your server's behavior with our
Servergraph/TSM product, and a GUI that shows the setup pretty clearly.  The
30-day free trial might get you what you need.

-
Mr. Lindsay Morris
CEO, Servergraph
www.servergraph.com
859-253-8000 ofc
425-988-8478 fax


 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of
 Michelle Wiedeman
 Sent: Friday, September 06, 2002 8:18 AM
 To: [EMAIL PROTECTED]
 Subject: re-evaluate tsm


 Hi,
 I've started working at a new company, who have asked me to re-evaluate
 their system.
 There is no documentation available in the company on how the system has been
 set up, nor is there a procedure on how to handle clients and backups.
 My question is whether there are people who are experienced in this, and if they
 have some sort of procedure and/or checklist on how to document a
 tsm server
 in order to perfect it.
 thnx,
 )hell




Re: How to force a migrate

2002-09-06 Thread Mr. Lindsay Morris

If you set highmigration=0, a bad thing happens when some user tries to do a
manual backup midday.  The first byte that lands in the pool triggers a
migration.  That ends very quickly, then the second byte (OK, I exaggerate a
bit) starts a new migration...

The effect is that the TSM server thrashes madly, starting and stopping
migrations.
So we suggest highmigration=5, lowmigration=0.
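In command form, the suggestion amounts to something like this (pool name and credentials are placeholders):

```shell
# Placeholder pool name and credentials; highmig=5 keeps migration responsive
# without the start/stop thrash that highmig=0 causes on midday backups.
dsmadmc -id=admin -password=xxxxx "update stgpool DISKPOOL highmig=5 lowmig=0"
```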

-
Mr. Lindsay Morris
CEO, Servergraph
www.servergraph.com
859-253-8000 ofc
425-988-8478 fax


 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of
 Farren Minns
 Sent: Friday, September 06, 2002 8:48 AM
 To: [EMAIL PROTECTED]
 Subject: Re: How to force a migrate


 Thanks to everyone who replied.

 I knew there must be a simple explanation.

 Farren :)







 [EMAIL PROTECTED] [EMAIL PROTECTED]@VM.MARIST.EDU on
 06/09/2002 12:31:32

 Please respond to ADSM: Dist Stor Manager [EMAIL PROTECTED]

 Sent by:ADSM: Dist Stor Manager [EMAIL PROTECTED]


 To:[EMAIL PROTECTED]
 cc:
 Subject:Re: How to force a migrate


 You can force the migration with:
 High Migration Threshold: 0
 Low Migration Threshold: 0

 Regards
 Paolo Nasca
 Datasys Informatica SpA
 Via Edilio Raggio, 4
 16124 Genova - Italy
 010 24858.11
 010 24858.879
 [EMAIL PROTECTED]
 [EMAIL PROTECTED]


  Hello TSMers
 
  Can anyone tell me how to force a migrate of our disk backuppool to th
 e on
  site tapepool.
 
  I know migration kicks in when the disk pool becomes 70% utilised, but
  I
  need to make it happen in preparation for a final backup to tape befor
 e
  performing an upgrade of our server code.
 
  Thanks
 
  Farren Minns - Trainee TSM and Solaris system admin
 
  **
 
 
  Our Chichester based offices are amalgamating and relocating to a new
  address
  from 1st September 2002
 
  John Wiley  Sons Ltd
  The Atrium
  Southern Gate
  Chichester
  West Sussex
  PO19 8SQ
 
  Main phone and fax numbers remain the same:
  Phone +44 (0)1243 779777
  Fax   +44 (0)1243 775878
  Direct dial numbers are unchanged
 
  Address, phone and fax nos. for all other Wiley UK locations are uncha
 nged
  **
 
 







Re: 3494 Utilization

2002-09-06 Thread Mr. Lindsay Morris

If you use the atape device driver, you (supposedly) can turn on logging
within it.  Then every dismount writes a record of how many bytes were
read/written during that mount.
Never tried it ... if you can get it working, let me know how, please! We'd
love to be able to do that.

Right now we CAN show you library-as-a-whole data rates, just by layering
all the tape-drive-writing tasks (migration, backup stgpool, backup DB, etc)
one atop the other minute by minute.  Maybe that's enough - why do you need
drive-by-drive data rates?

-
Mr. Lindsay Morris
CEO, Servergraph
www.servergraph.com
859-253-8000 ofc
425-988-8478 fax


 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of
 Jolliff, Dale
 Sent: Friday, September 06, 2002 2:16 PM
 To: [EMAIL PROTECTED]
 Subject: Re: 3494 Utilization


 Paul said that Servergraph has this functionality - According to our
 hardware guys, the 3494 library has some rudimentary mount statistics
 available.

 I'm going to be looking into both of those options.

 Surely someone has already invented this wheel when trying to justify more
 tape drives - other than pointing to the smoke coming from the drives and
 suggesting that they are slightly overused





 -Original Message-
 From: Cook, Dwight E [mailto:[EMAIL PROTECTED]]
 Sent: Friday, September 06, 2002 6:45 AM
 To: [EMAIL PROTECTED]
 Subject: Re: 3494 Utilization


 Good question, never actually thought about it...
  I would think that the sum of the differences between mount and
  dismount times
  for each drive...
 OH THANKS. now I won't be able to sleep until I code some select
 statement to do this :-(
 if I figure it out, I'll pass it along

 Dwight



 -Original Message-
 From: Jolliff, Dale [mailto:[EMAIL PROTECTED]]
 Sent: Thursday, September 05, 2002 10:04 AM
 To: [EMAIL PROTECTED]
 Subject: 3494 Utilization


 I saw this topic out on ADSM, and I could not locate any type of
 functional
 resolution ...

 What is everyone using to calculate the wall time of your tape drive
 utilization?




Re: TSM and HSM

2002-08-29 Thread Mr. Lindsay Morris

OTG Disk Extender has big problems, I hear.
It seems to work, but has to move huge amounts of data when recalling a
file - or something like that.

I hate to go around bashing products just on hearsay, but I've heard that
twice from experienced people.
Anybody have any GOOD experiences with OTG DiskXTender?

-
Mr. Lindsay Morris
CEO, Servergraph
www.servergraph.com
859-253-8000 ofc
425-988-8478 fax


 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of
 Nast, Jeff
 Sent: Thursday, August 29, 2002 5:15 PM
 To: [EMAIL PROTECTED]
 Subject: Re: TSM and HSM


 The TSM Space Manager (HSM) only supports AIX. You would need to
 use the OTG
 Disk Extender for a Windows environment.

 see:
 http://www.tivoli.com/products/resources/storage-mgr/related-products.html

 -Jeff Nast



 -Original Message-
 From: Stewart, Curtis [mailto:[EMAIL PROTECTED]]
 Sent: Thursday, August 29, 2002 3:58 PM
 To: [EMAIL PROTECTED]
 Subject: TSM and HSM


 I'm investigating using TSM for Hierarchical Storage Management. I'm in a
 shop with Solaris and Windows 2000 file servers. Does anyone reading this
 list have experience with Tivoli Storage Manager and HSM?

 The only documentation I've found is for Unix clients. Does it
 even support
 Windows file servers? Any general thoughts about the subject are greatly
 appreciated. Links would be welcome too.

 Curtis Stewart




Re: DB2 restore takes long

2002-08-15 Thread Mr. Lindsay Morris

If you have accounting turned on, look at the accounting log to see whether
the time was spent mostly in:
--idle wait (client creating files is slower than client reading them!)
--media wait (was client data scattered over 30 tapes?)
--comm wait (was network slow then?)
It's in dsmaccnt.log, in your server bin directory.
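A sketch of that breakdown, using an invented record layout (elapsed, idle-wait, comm-wait and media-wait seconds at assumed positions — map the real positions from your server's accounting-record documentation):

```shell
# Invented restore-session record: node, operation, elapsed secs, idle-wait
# secs, comm-wait secs, media-wait secs. Field positions are ASSUMPTIONS.
rec='NODEA,restore,46080,18000,1200,9000'
echo "$rec" | awk -F, '{
  printf "idle %d%%, comm %d%%, media %d%%\n", 100*$4/$3, 100*$5/$3, 100*$6/$3
}'   # -> idle 39%, comm 2%, media 19%
```

A restore dominated by idle wait points at the client writing files slowly; media wait points at data scattered over many tapes.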

We have some detailed info on how to do this at
http://www.servergraph.com/techtip3.htm

Hope this helps.
-
Mr. Lindsay Morris
CEO, Servergraph
www.servergraph.com
859-253-8000 ofc
425-988-8478 fax


 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of
 Crawford, Lindy
 Sent: Thursday, August 15, 2002 3:26 AM
 To: [EMAIL PROTECTED]
 Subject: DB2 restore takes long


 Hi TSMers,
 Could some please assist me. I did a restore yesterday, started it 8:13am
 and it only completed at 21:04 pm. This for the environment that we work
 with it is unacceptable in terms of response. My client  - node is as
 follows:_
 TSM 4.2
 DB2 7.2
 SAP R3 4.6c
 Database size is : 48gigs
 What can I do to make this process faster. The reason why I am
 concerned is
 that my backup only takes 1h30mins, so the most time it took I would have
 expected it to be 2 or 3hours.

 Please help!!!



  Lindy Crawford
  Business Solutions: IT
  BoE Corporate
 
  *  +27-31-3642185
  ...OLE_Obj...  +27-31-3642946
  [EMAIL PROTECTED] mailto:[EMAIL PROTECTED]





 WARNING:
 Any unauthorised use or interception of this email is illegal. If
 this email
 is not intended for you, you may not copy, distribute nor disclose the
 contents to anyone. Save for bona fide company matters, BoE Ltd does not
 accept any responsibility for the opinions expressed in this email.
 For further details please see: http://www.boe.co.za/emaildisclaimer.htm




Re: Antwort: Web Admin Interface - grrr

2002-08-14 Thread Mr. Lindsay Morris

Kai, we have a configurable GUI that we're beta testing.

It's web-based.

You tell a configuration file what objects (ie stgpools, drives, policysets,
etc) you want,
and what fields to display for each one of those;

Then it builds (for each object) a table of all (let's say) stgpools, one
stgpool per line, with those fields shown.
The data for the fields is editable, so you just look, change what you want,
and hit update.

A very cool thing is that in addition to the "update" action on a record,
you can define custom actions.
For example, for stgpools, you can define a "migrate" action, which runs:
upd stgp pool-name hi=100; upd stgp pool-name hi=5 lo=0;
   that is, it sets the high-migration point up, then back down, to reliably
kick off migration.

The end result is that with 2 or 3 clicks, you can do the common jobs - and
customize them to your taste.

Let me know if you want to test this out.
-
Mr. Lindsay Morris
CEO, Servergraph
www.servergraph.com
859-253-8000 ofc
425-988-8478 fax


 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of
 Kai Hintze
 Sent: Wednesday, August 14, 2002 12:51 PM
 To: [EMAIL PROTECTED]
 Subject: Re: Antwort: Web Admin Interface - grrr


 But it is precisely because I _do_ have boatloads of clients that I prefer
 the CLI. I can type 2 or 3 lines of parameters as someone said a few
 messages back much more quickly than I can click wait click wait
 click wait 

 More, I seldom type long parameters. I have samples of anything I have
 defined/updated/deleted that I can quickly copy to a text editor,
 change the
 few characters that change, and dump into a CLI session. I specify
 everything so that if the defaults change they don't take me by surprise,
 and I seldom have to look back to the manuals find a syntax. The
 only trick
 is when I change versions I do have to go through the list and compare to
 the new Admin Reference.

 - Kai.

  -Original Message-
  From: Kauffman, Tom [mailto:[EMAIL PROTECTED]]
  Sent: Tuesday, 13 August 2002 8:38 AM
  To: [EMAIL PROTECTED]
  Subject: Re: Antwort: Web Admin Interface - grrr
 
 
  I'm afraid I'm another vote for the CLI.

 snip . . .
 
  But I don't have boatloads of clients to administer, so I'm
  probably not
  seeing the issues the rest of you are fighting. :-)
 
  Tom Kauffman
  NIBCO, Inc
 




Re: Recovering from a disaster ....

2002-04-30 Thread Mr. Lindsay Morris

I wonder if AutoVault might do you some good.  It's a DRM replacement - it does
clever things like vaulting primary pools (sounds dumb, but they might be
7-year archives just taking up valuable slots), and it vaults backup sets, etc.
www.coderelief.com

-
Mr. Lindsay Morris
CEO, Servergraph
www.servergraph.com
859-253-8000 ofc
425-988-8478 fax


 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of
 Bill Mansfield
 Sent: Tuesday, April 30, 2002 9:40 AM
 To: [EMAIL PROTECTED]
 Subject: Re: Recovering from a disaster 


 No copy storage pools...  how do you handle damaged tapes?



 _
 William Mansfield
 Senior Consultant
 Solution Technology, Inc





 Cook, Dwight E [EMAIL PROTECTED]
 Sent by: ADSM: Dist Stor Manager [EMAIL PROTECTED]
 04/30/2002 07:48 AM
 Please respond to ADSM: Dist Stor Manager


 To: [EMAIL PROTECTED]
 cc:
 Subject:Re: Recovering from a disaster 


 OK, I've been working with TSM (ADSM) for about 6 years now and you can
 call
 me cheap but I never (personally) though DRM was worth the money.  We do
 operate in a unique environment here so I shouldn't say that DRM has no
 place in the market, it is just that I was doing DRM before DRM came out
 and
 once it did I couldn't justify the cost just to replace all that I had
 done
 over the years.
 We don't really run with copy storage pools... our TSM servers are located
 offsite to the production boxes that they backup so backups are
 effectively
 offsite as soon as they are created.  We also deal with so much data
 across our 10 TSM servers on a daily basis that we would have to make them
 20 if we were to copy all the data on a daily basis, and that just isn't
 going to happen.
 Now what sort of disaster am I protecting against ?
 Total loss of environment due to hardware failure.  Not really counting
 fire, flood, water, etc...
 If my actual server goes dead, AS LONG AS I HAVE MY ATL, well at least the
 tapes, I'm OK.
 So to answer your question, almost yes.
 You need your db backup (from tape, disk, somewhere), you need definitions
 of your data base  log files (I always allocate them the same size as
 they
 were), the device configuration file is nice (just about necessary).
 With that info you can get back your environment as long as you have your
 tapes.

 Dwight E. Cook
 Software Application Engineer III
 Science Applications International Corporation
 509 S. Boston Ave.  Suit 220
 Tulsa, Oklahoma 74103-4606
 Office (918) 732-7109



 -Original Message-
 From: Sandra Ghaoui [mailto:[EMAIL PROTECTED]]
 Sent: Tuesday, April 30, 2002 5:43 AM
 To: [EMAIL PROTECTED]
 Subject: Recovering from a disaster 


 Hello everybody,

 I have one more question ...
 is it possible to recover from a disaster just by
 having the TSM database backup and our data backup on
 tapes?
 I've been reading about the Disaster Recovery Manager
 and if I got it right, I would need to have copy
 storage pools to recover from disaster?

 thx for helping ...

 Sandra


 __
 Do You Yahoo!?
 Yahoo! Health - your guide to health and wellness
 http://health.yahoo.com




Re: ADITOCC table update

2002-04-26 Thread Mr. Lindsay Morris

BTW, everything in the AUDITOCC table is also in the OCCUPANCY table.
If you're writing scripts to boil that down, you may want to just use
OCCUPANCY - it's updated all the time, not just when you run AUDIT LICENSE.
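As a sketch, the per-node totals that AUDIT LICENSE refreshes in AUDITOCC can be computed live from OCCUPANCY with a select like this (column names per the standard TSM schema; adjust to taste):

```sql
select node_name, sum(logical_mb) as total_mb from occupancy group by node_name
```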

-
Mr. Lindsay Morris
CEO, Servergraph
www.servergraph.com
859-253-8000 ofc
425-988-8478 fax


 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of
 Lawrence Clark
 Sent: Friday, April 26, 2002 9:28 AM
 To: [EMAIL PROTECTED]
 Subject: ADITOCC table update


 Does anyone know the circumstances under which the AUDITOCC table is
 updated? The reason I ask is that I noticed two archive storage
 pools were not included in the copypool, and added a script to
 include them. Checking the AUDITOCC table after several days'
 runs, and an increase in the number of copypool tapes produced, I
 noticed the ARCHIVE_COPY_MB was still at 0. Then I ran the same
 commands directly from the console; no additional files were
 copied over, and the figures were updated.

 Anyone know why this would be so?

 NODE_NAME  BACKUP_MB  BACKUP_COPY_MB  ARCHIVE_MB  ARCHIVE_COPY_MB  SPACEMG_MB  SPACEMG_COPY_MB  TOTAL_MB
 TOLLCOLL   210898     210513          267898      267155           0           0                956464
 TOLLDBMS   123110     122936          83058       83060            0           0                412164

 Larry Clark
 NYS Thruway Authority
 (518)-471-4202
 Certified:
 Aix 4.3 System Administration
 Aix 4.3 System Support
 Tivoli ADSM/TSM V 3 Consultant




Re: Network Tuning

2002-04-26 Thread Mr. Lindsay Morris

You'll have a hard time distinguishing TSM-based network activity from all
the other network activity on that NIC.
One way, though, is to download the Servergraph/TSM demo
(www.servergraph.com) - it will process the accounting log, add up all the
data sent/received BY TSM SESSIONS every hour and show you THAT throughput
over time  - as well as direct readings of the NIC, showing ALL throughput.

So that should give you a pretty good picture of what you want.

-
Mr. Lindsay Morris
CEO, Servergraph
www.servergraph.com
859-253-8000 ofc
425-988-8478 fax


 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of
 Paul Ward
 Sent: Friday, April 26, 2002 2:55 PM
 To: [EMAIL PROTECTED]
 Subject: Network Tuning


 Hi all

 I am fairly new to TSM and I am not sure how network tuning is to be done
 in the TSM 4.2.1 environment. My current problem is that I want to be able
 to ensure that TSM does not use any more than, say, 30 percent of the total
 bandwidth. Is there anyone that could give me some help or an
 idea of where
 to look through the manuals or what settings need to be changed.

 thanks in advance

 Paul




Re: TSM network problem

2002-04-15 Thread Mr. Lindsay Morris

Check to see that the switch port is also set to 100/full.

-
Mr. Lindsay Morris
CEO, Servergraph
www.servergraph.com
859-253-8000 ofc
425-988-8478 fax


 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of
 Jim Healy
 Sent: Monday, April 15, 2002 2:25 PM
 To: [EMAIL PROTECTED]
 Subject: TSM network problem


 Can any of you network gurus help me out with a TSM problem?

 We currently have an isolated 100Mb Ethernet network for TSM.
 We have three NICs in the TSM server, each attached to a separate VLAN.
 We spread the servers being backed up across the three VLANs.

 We have clients on one of the VLANs that intermittently get
 "session lost; re-initializing" messages in the dsmsched.log.

 When we ping the clients from the TSM server we get no session or packet
 loss.

 When we ping the TSM nic from the client we get intermittent packet losses

 We replaced the NIC in the TSM server
 We replaced the cable from the TSM server to the switch
 We replaced the cable from the client NIC to the switch

 We've ensured that both NICs are set to 100/full

 My network guys are out of ideas any body have any suggestions?




Re: TSM network problem

2002-04-15 Thread Mr. Lindsay Morris

What I meant, earlier, was that while you may be using 100/full on the TSM
server, and 100/full on the client, the cables plug into a switch in the
middle, and each port on that switch may be configured differently.  So you
have to telnet to the switch's IP address, log in somehow, and poke around
to see that the SWITCH ports are ALSO 100/full.


-
Mr. Lindsay Morris
CEO, Servergraph
www.servergraph.com
859-253-8000 ofc
425-988-8478 fax


 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of
 PINNI, BALANAND (SBCSI)
 Sent: Monday, April 15, 2002 4:03 PM
 To: [EMAIL PROTECTED]
 Subject: Re: TSM network problem


 Hi
 Look at how many hops it takes to reach the other end.
 Use traceroute (tracert on Windows) to see exactly at which
 hop you are losing packets.
 Balanand

 -Original Message-
 From: Jim Healy [mailto:[EMAIL PROTECTED]]
 Sent: Monday, April 15, 2002 2:36 PM
 To: [EMAIL PROTECTED]
 Subject: Re: TSM network problem


 Both ends are set to 100/full




 Alex Paschal [EMAIL PROTECTED]@VM.MARIST.EDU on 04/15/2002
 03:14:54 PM

 Please respond to ADSM: Dist Stor Manager [EMAIL PROTECTED]

 Sent by:  ADSM: Dist Stor Manager [EMAIL PROTECTED]


 To:   [EMAIL PROTECTED]
 cc:

 Subject:  Re: TSM network problem


 Sorry about that, Lindsay.  Replies go directly to you.  Heh, heh.

 Additionally, you might check MTU sizes.  I've seen situations where
 switches/routers were set to one size, the clients were set to another
 (larger, if I remember correctly), and a "do not split packet" setting
 caused all sorts of havoc.  But really, if you're losing packets on ping,
 your networking guys
 should be able to analyze the packets and tell you what the problem is.

 Alex Paschal
 Storage Administrator
 Freightliner, LLC
 (503) 745-6850 phone/vmail


 -Original Message-
 From: Mr. Lindsay Morris [mailto:[EMAIL PROTECTED]]
 Sent: Monday, April 15, 2002 12:04 PM
 To: [EMAIL PROTECTED]
 Subject: Re: TSM network problem


 Check to see that the switch port is also set to 100/full.

 -
 Mr. Lindsay Morris
 CEO, Servergraph
 www.servergraph.com
 859-253-8000 ofc
 425-988-8478 fax


  -Original Message-
  From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of
  Jim Healy
  Sent: Monday, April 15, 2002 2:25 PM
  To: [EMAIL PROTECTED]
  Subject: TSM network problem
 
 
  Can any of you network gurus help me out with a TSM problem?
 
  We currently have an isolated 100Mb Ethernet network for TSM.
  We have three NICs in the TSM server, each attached to a separate VLAN.
  We spread the servers being backed up across the three VLANs.

  We have clients on one of the VLANs that intermittently get
  "session lost; re-initializing" messages in the dsmsched.log.

  When we ping the clients from the TSM server we get no session or packet
  loss.

  When we ping the TSM NIC from the client we get intermittent packet
  losses.
 
  We replaced the NIC in the TSM server
  We replaced the cable from the TSM server to the switch
  We replaced the cable from the client NIC to the switch
 
  We've ensured that both NICs are set to 100/full
 
  My network guys are out of ideas any body have any suggestions?
 




Re: Redirecting Commands in Scripts

2002-04-12 Thread Mr. Lindsay Morris

In some clients, a space before the ">" makes a difference.
So try: upd script test "QUERY SYSTEM > DSM.OUTPUT"


-
Mr. Lindsay Morris
CEO, Servergraph
www.servergraph.com
859-253-8000 ofc
425-988-8478 fax


 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of
 Gerhard Wolkerstorfer
 Sent: Friday, April 12, 2002 6:26 AM
 To: [EMAIL PROTECTED]
 Subject: Redirecting Commands in Scripts


 Hello all,
 I found questions like this in the archives, but no answers..

 One more try =
 I want to run a script, where one line should look like this:
 QUERY SYSTEM > DSM.OUTPUT.QSYSTEM
 where DSM.OUTPUT.QSYSTEM is a S390 Filename I want to take to the OFFSITE
 Location.
 (This Command works great on the command line, but I want to have it in a
 script!)

 However - I tried to do it like this:
 def script test desc='Test'
 upd script test "QUERY SYSTEM > DSM.OUTPUT"

 Result:
 ANR1454I DEFINE SCRIPT: Command script TEST defined.
 ANR2002E Missing closing quote character.
 ANS8001I Return code 3.

 Any hints how to route an output to a file in a script  ?
 I guess, the problem is, that TSM wants to direct the Output of
 the Update Stmt
 to a file - and when doing this, one quote is missing, of course

 We are running TSM 3.7.5 on S390 (I know, not supported, but I
 guess this should
 work on all versions)

 Regards
 Gerhard Wolkerstorfer




Re: 3494 library swap out.

2002-04-12 Thread Mr. Lindsay Morris

Gosh, Bill, that's even better!
Now if we could just cut it down to zero steps...

-
Mr. Lindsay Morris
CEO, Servergraph
www.servergraph.com
859-253-8000 ofc
425-988-8478 fax


 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of
 Bill Boyer
 Sent: Friday, April 12, 2002 10:29 AM
 To: [EMAIL PROTECTED]
 Subject: Re: 3494 library swap out.


 Maybe more of a generic way that doesn't depend on any specific
 OSuse a
 SQL command:

 select 'checkout libv ' || trim(library_name) || ' ' ||
 trim(volume_name) ||
 ' checkl=no rem=no' from libvolumes

 and pipe this to a file. You can then run it as a MACRO like Lindsay
 suggests. You may have to SET SQLDISPLAYMODE WIDE to get it all
 on one line.

 This doesn't rely on cat...awk...MS Word...or any utility. Just a TSM
 admin client.

 There's more than one way to skin a cat! :-) Whatever you're comfortable
 with. Myself, with going to different client sites with different
 systems, I
 just don't like to come to rely on a solution on a particular platform. No
 offense intended!

 Just my $.02 worth.
 Bill Boyer
 DSS, Inc.

 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of
 Mr. Lindsay Morris
 Sent: Thursday, April 11, 2002 9:45 PM
 To: [EMAIL PROTECTED]
 Subject: Re: 3494 library swap out.


 I guess different people like different tools...I'm a command-line bigot.

 The idea of turning a list of volumes into a script is a very good idea.
 Here's a unix-ified way to do it for those who will, maybe faster:

 1. Obtain a list of volumes (or stgpools, or...) into a file, say,
 /tmp/vols
 2. cat /tmp/vols | awk '{print "checkout libv library_name " $1 " checkl=no rem=no"}' > /tmp/macro
    (the volume name is $1; substitute your real library name for library_name)
 3. dsmadmc -id=... -pas=... -itemcommit macro /tmp/macro
    (-itemcommit means if one line fails, don't roll back the whole script)

 Not trying to one-up you, Tab - people more comfortable with MS
 Word should
 use it.
 The idea is great.  I often see people doing stuff like this manually, and
 it makes me cringe.
 Thanks for the post.

 -
 Mr. Lindsay Morris
 CEO, Servergraph
 www.servergraph.com
 859-253-8000 ofc
 425-988-8478 fax


  -Original Message-
  From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of
  Tab Trepagnier
  Sent: Thursday, April 11, 2002 6:56 PM
  To: [EMAIL PROTECTED]
  Subject: Re: 3494 library swap out.
 
 
  Geoffrey,
 
  Whenever I have to do similar operations on lots of volumes, I
 use my word
  processor's (MS Word in this case) mail merge feature to help.  The
  sequence goes like this.
 
  1. Obtain a list of volumes to be processed.  If the volume name is the
  only thing that will change, you can save to a text file.  But
 text files
  only accommodate one variable.  If more variables (fields) are
 needed, use
  the native WP doc format.
 
  2. Select the file from step 1 as the data source.
 
  3. Create the master document with the desired command line like:
  checkout  libv  library_name  [volume]  checkl=no  remove=no
 
  where volume is the merge field to be used for the final document.
 
  4. Merge the data.  You will end up with a document that has a line from
  step 3 for each volume.
 
  5. Because that document will be in the native WP format, do a Save
  As...text of the document.
 
  6. Reopen the text version of the document.  You will have a script.
 
  7. Paste that text document's contents into the text entry window of the
  Create Script page of the TSM web client.  The final result will be a
  server script consisting of the command from step 3 for each volume.
 
  8. Execute the server script in TSM.
 
  I know this sounds a bit complicated, but once you've done it a time or
  two, you can create in less that five minutes scripts that process
  thousands of volumes.
 
  This technique was vital when I was doing our data reorg after
 the library
  upgrades and I had to Move Data the contents of hundreds of tapes.
 
  Good luck.
 
  Tab Trepagnier
  TSM Administrator
  Laitram Corporation
 
 
 
 
 
 
 
 
 
  Gill, Geoffrey L. [EMAIL PROTECTED]@VM.MARIST.EDU on
  04/11/2002
  12:34:16 PM
 
  Please respond to ADSM: Dist Stor Manager [EMAIL PROTECTED]
 
  Sent by:  ADSM: Dist Stor Manager [EMAIL PROTECTED]
 
 
  To:   [EMAIL PROTECTED]
  cc:
  Subject:  3494 library swap out.
 
 
  On Monday IBM will be removing a leased 3494 and installing a
 replacement
  3494. The library is shared with MVS but MVS has it's own set
 of tapes and
  exits ignore my tapes(TSM 4.2.1.9 on AIX 4.3.3) when they are inserted.
 
  It's been suggested I check out all of my tapes, some 800 or
 so, probably
  with a remove=no since a 10-slot I/O port would slow things
 down. IBM says
  they need to remove all the tapes so they can re-level the library once
  they
  put it together.
 
  So now questions: (I am

Re: 3494 library swap out

2002-04-12 Thread Mr. Lindsay Morris

Don't use step 3 if you use step 2; they do the same thing in different
ways.

The output of step2 looks like a perfectly valid macro. Just run step 4.
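Putting Bill's pieces together, the whole job reduces to two commands. A minimal sketch follows; the dsmadmc calls are commented out since they need a live server, and the admin ID, password, and sample volume names are hypothetical:

```shell
# Step A: have the server itself emit ready-to-run checkout commands.
# dsmadmc -id=admin -pa=secret -dataonly=yes \
#   "select 'checkout libv ' || trim(library_name) || ' ' || trim(volume_name) || ' checkl=no rem=no' from libvolumes" \
#   > /tmp/macro

# For illustration, fake that server output with two volumes:
printf '%s\n' \
  'checkout libv 3494LIB U00026 checkl=no rem=no' \
  'checkout libv 3494LIB U00244 checkl=no rem=no' > /tmp/macro

# Step B: run the file as a macro; -itemcommit keeps one bad line
# from rolling back everything that already worked.
# dsmadmc -id=admin -pa=secret -itemcommit "macro /tmp/macro"

wc -l < /tmp/macro
```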

-
Mr. Lindsay Morris
CEO, Servergraph
www.servergraph.com
859-253-8000 ofc
425-988-8478 fax


 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of
 Gill, Geoffrey L.
 Sent: Friday, April 12, 2002 12:38 PM
 To: [EMAIL PROTECTED]
 Subject: 3494 library swap out


 Ok, lots of help here from Bill, Lindsay and Tab,  and I
 appreciate it very
 much. Now here is a stab at the steps I need to take. Since I'm
 not much of
 a script/programmer I'd appreciate any help in correcting mistakes I've
 made.

 Note: This will be done from an AIX console on the TSM server.

 Step1:  SET SQLDISPLAYMODE WIDE

 Step2:  select 'checkout libv ' || trim(library_name) || ' ' ||
 trim(volume_name) || ' checkl=no rem=no' from libvolumes > /tmp/vols

 Step3:  cat /tmp/vols | awk '{print "checkout libv library_name " $1 " checkl=no rem=no"}' > /tmp/macro

 Step4:  dsmadmc -id=... -pas=... -itemcommit macro /tmp/macro

 The output of the file in step2 goes like this:

 checkout libv 3494LIB U8 checkl=no rem=no
 checkout libv 3494LIB U00026 checkl=no rem=no
 checkout libv 3494LIB U00244 checkl=no rem=no
 checkout libv 3494LIB U00302 checkl=no rem=no
 And so on and so on.

 I'm sure my step3, which I took direct from the email Lindsay sent, needs
 work.

 A little help..again please.
 Thanks,

 Geoff Gill
 TSM Administrator
 NT Systems Support Engineer
 SAIC
 E-Mail:   [EMAIL PROTECTED] mailto:[EMAIL PROTECTED]
 Phone:  (858) 826-4062
 Pager:   (877) 905-7154




Re: Performance bottleneck tips

2002-04-11 Thread Mr. Lindsay Morris

Tom, we recently published a "viewacct" script that tells you quickly whether
the problem is in the client, the network, or the server.
See http://www.servergraph.com/techtip3.htm.

Our full-bore product also tells you whether your ENTIRE SITE (not just one
node) is suffering most from client slowdowns, network slowdowns, or server
delays.   Download the demo, and it will probably help you find other
problems -- oops, opportunities.

Hope this helps.

-
Mr. Lindsay Morris
CEO, Servergraph
www.servergraph.com
859-253-8000 ofc
425-988-8478 fax


 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On
 Behalf Of Tomáš Hrouda
 Sent: Thursday, April 11, 2002 6:38 AM
 To: [EMAIL PROTECTED]
 Subject: Performance bottleneck tips


 Hi TSMers,

 I need to detect the bottleneck in backup performance of a TSM W2K
 client (2 x PII 450, 640MB RAM). It works as a file server (thousands of
 files, with a total size of about 45GB). Many files are expiring, and many
 are rebound during backup, which means increased server load and lower
 backup performance (I noticed this while monitoring the backup). Some
 other symptoms perhaps suggest a problem with the network card and switch,
 and possibly bad auto-detection of the connection speed (10/100Mbit), but
 the problem could be on the TSM client side too. Can I read something out
 of the performance statistics at the end of a backup (network transfer
 rate, aggregate rate, data transfer time, etc.), or from the trend of
 server/client processor load during backup?

 I mean tips like: "If the transfer rate is much higher than the aggregate
 rate, the problem is on the server side" (that's only an example). Simply:
 how can characteristics like TSM server load, connection speed, client
 communication parameter settings, file rebinding, etc. affect the backup
 statistic values?

 Any suggestion will be appreciated (drowning man plucks at a straw :-)) ).

 Many thanks.
 Tom




Re: 3494 library swap out.

2002-04-11 Thread Mr. Lindsay Morris

I guess different people like different tools...I'm a command-line bigot.

The idea of turning a list of volumes into a script is a very good idea.
Here's a unix-ified way to do it for those who will, maybe faster:

1. Obtain a list of volumes (or stgpools, or...) into a file, say,
/tmp/vols
2. cat /tmp/vols | awk '{print "checkout libv library_name " $1 " checkl=no rem=no"}' > /tmp/macro
   (the volume name is $1; substitute your real library name for library_name)
3. dsmadmc -id=... -pas=... -itemcommit macro /tmp/macro
   (-itemcommit means if one line fails, don't roll back the whole script)

Not trying to one-up you, Tab - people more comfortable with MS Word should
use it.
The idea is great.  I often see people doing stuff like this manually, and
it makes me cringe.
Thanks for the post.

-
Mr. Lindsay Morris
CEO, Servergraph
www.servergraph.com
859-253-8000 ofc
425-988-8478 fax


 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of
 Tab Trepagnier
 Sent: Thursday, April 11, 2002 6:56 PM
 To: [EMAIL PROTECTED]
 Subject: Re: 3494 library swap out.


 Geoffrey,

 Whenever I have to do similar operations on lots of volumes, I use my word
 processor's (MS Word in this case) mail merge feature to help.  The
 sequence goes like this.

 1. Obtain a list of volumes to be processed.  If the volume name is the
 only thing that will change, you can save to a text file.  But text files
 only accommodate one variable.  If more variables (fields) are needed, use
 the native WP doc format.

 2. Select the file from step 1 as the data source.

 3. Create the master document with the desired command line like:
 checkout  libv  library_name  [volume]  checkl=no  remove=no

 where volume is the merge field to be used for the final document.

 4. Merge the data.  You will end up with a document that has a line from
 step 3 for each volume.

 5. Because that document will be in the native WP format, do a Save
 As...text of the document.

 6. Reopen the text version of the document.  You will have a script.

 7. Paste that text document's contents into the text entry window of the
 Create Script page of the TSM web client.  The final result will be a
 server script consisting of the command from step 3 for each volume.

 8. Execute the server script in TSM.

 I know this sounds a bit complicated, but once you've done it a time or
 two, you can create in less that five minutes scripts that process
 thousands of volumes.

 This technique was vital when I was doing our data reorg after the library
 upgrades and I had to Move Data the contents of hundreds of tapes.

 Good luck.

 Tab Trepagnier
 TSM Administrator
 Laitram Corporation









 Gill, Geoffrey L. [EMAIL PROTECTED]@VM.MARIST.EDU on
 04/11/2002
 12:34:16 PM

 Please respond to ADSM: Dist Stor Manager [EMAIL PROTECTED]

 Sent by:  ADSM: Dist Stor Manager [EMAIL PROTECTED]


 To:   [EMAIL PROTECTED]
 cc:
 Subject:  3494 library swap out.


 On Monday IBM will be removing a leased 3494 and installing a replacement
 3494. The library is shared with MVS but MVS has it's own set of tapes and
 exits ignore my tapes(TSM 4.2.1.9 on AIX 4.3.3) when they are inserted.

 It's been suggested I check out all of my tapes, some 800 or so, probably
 with a remove=no since a 10-slot I/O port would slow things down. IBM says
 they need to remove all the tapes so they can re-level the library once
 they
 put it together.

 So now questions: (I am not concerned with the MVS side of things since
 someone else takes care of that.)

 Do I really need to do this?

 If yes, is there an easy way to do this, or am I going to have to struggle
 through it?

 How about getting them back in/available?

 Should I be doing any TSM inventory?

 If anyone has gone through this or has suggestions, please, please send
 them.

 Thanks,

 Geoff Gill
 TSM Administrator
 NT Systems Support Engineer
 SAIC
 E-Mail:   [EMAIL PROTECTED] mailto:[EMAIL PROTECTED]
 Phone:  (858) 826-4062
 Pager:   (877) 905-7154




Re: error status in scheduled netware clients

2002-04-08 Thread Mr. Lindsay Morris

You could look at each filespace (query filespace) and see what its last
successful backup date is.
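A sketch of that check as a server select (the FILESPACES table carries a BACKUP_END column; the one-day threshold and the exact date arithmetic are examples and may need adjusting for your server version):

```sql
select node_name, filespace_name, backup_end from filespaces
  where days(current_date) - days(backup_end) > 1
```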

query event may have problems both as you describe - the event says error
when the client really completed - and also the opposite - the event says
completed but nothing happened.  The latter may happen if the client event
action was "command", not "incremental" - the command did indeed get kicked
off, so the scheduler reports a success - but then the command fails and the
scheduler doesn't tell you that.

You also may want to think about how many missed files are allowed before
an incremental backup is marked "failed".  A few are allowed. (Does anybody
have the formula?)

-
Mr. Lindsay Morris
CEO, Servergraph
www.servergraph.com
859-253-8000 ofc
425-988-8478 fax


 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of
 Tim Brown
 Sent: Monday, April 08, 2002 7:45 AM
 To: [EMAIL PROTECTED]
 Subject: error status in scheduled netware clients


 I recently saw a posting relating to NT client scheduled events
 appearing as errors when the backup really did complete. I am seeing the
 same situation with NetWare clients. The NT resolution was a PTF level.
 Is there anything I can do so that these NetWare backups will show as
 completed?


 Tim Brown
 Systems Specialist
 Information Systems
 Central Hudson Gas  Electric
 tel: 845-486-5643
 fax: 845-586-5921




Re: AIX Sys Backup

2002-04-08 Thread Mr. Lindsay Morris

We used to run a shell script that took one of the drives offline temporarily,
let AIX use it for mksysb, then set it back online for TSM.  We didn't have
a good way to have that tape volume be managed by TSM - we just put it in
the same box with the rest of the vault tapes, with a manual label on it.

Sorry I no longer have the script around, or I'd list it here.

Seems like we had to say mkdev -l rmt0 and rmdev -l rmt0 in there
somewhere ... but that was the only funny thing about it.
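A rough reconstruction of such a script might look like the sketch below. Every name in it (admin ID, password, library, drive, rmt0) is hypothetical, and the AIX/TSM commands are only logged, not executed; replace the run() body with "$@" to do it for real:

```shell
# Dry-run sketch: borrow a tape drive from TSM, run mksysb, give it back.
LOG=/tmp/mksysb_dryrun.log
: > "$LOG"
run() { echo "WOULD RUN: $*" >> "$LOG"; }   # replace body with "$@" to execute

run dsmadmc -id=admin -pa=secret "upd drive LIB1 DRIVE1 online=no"   # free the drive from TSM
run rmdev -l rmt0                                                    # put the device in Defined state
run mkdev -l rmt0                                                    # make it available as a plain AIX tape device
run mksysb -i /dev/rmt0                                              # take the system backup
run dsmadmc -id=admin -pa=secret "upd drive LIB1 DRIVE1 online=yes"  # hand the drive back to TSM

cat "$LOG"
```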

-
Mr. Lindsay Morris
CEO, Servergraph
www.servergraph.com
859-253-8000 ofc
425-988-8478 fax


 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of
 Petur Eyþórsson
 Sent: Monday, April 08, 2002 11:58 AM
 To: [EMAIL PROTECTED]
 Subject: Re: AIX Sys Backup


 If your library supports partitioning (e.g. if you have an IBM 3584 library),
 you can partition it so that AIX has the use of one drive and some tapes.

 That's one idea.


 Kvedja/Regards
 Petur Eythorsson
 Taeknimadur/Technician
 IBM Certified Specialist - AIX
 Tivoli Storage Manager Certified Professional
 Microsoft Certified System Engineer

 [EMAIL PROTECTED]

  Nyherji Hf  Simi TEL: +354-569-7700
  Borgartun 37105 Iceland
  URL:http://www.nyherji.is


 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of
 Gene Greenberg
 Sent: 8. april 2002 15:49
 To: [EMAIL PROTECTED]
 Subject: AIX Sys Backup


 Does anyone have any ideas, other than Sysback, for getting an AIX mksysb or
 savevg into a TSM library?  It's frustrating trying to automate all backups
 and put them in a single basket.

 Thanks for help,

 Gene




Re: reporting on backup successes/failures

2002-04-08 Thread Mr. Lindsay Morris

I guess they do compression or something -
I saw one yesterday that reported 125 MB backed up in the ANE4991 message,
but the TSM server's accounting log record said only 65 MB.

-
Mr. Lindsay Morris
CEO, Servergraph
www.servergraph.com
859-253-8000 ofc
425-988-8478 fax


 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of
 Remeta, Mark
 Sent: Monday, April 08, 2002 9:48 AM
 To: [EMAIL PROTECTED]
 Subject: Re: reporting on backup successes/failures


 Well the q event, like the level 2 tech mentioned, only reports on whether
 the batch command started successfully, not on whether the actual tdp
 commands within the batch were successful. If it is tdp for
 exchange you are
 trying to monitor, you may want to try and do a 'q act begint=whenever
 msg=4991'. This will show you when the tdp sessions start and end and also
 how much data was backed up. Ps: change the whenever to whatever time you
 want to start looking from, I usually put 00:00 to start from midnight.

 Mark


 -Original Message-
 From: Mark Bertrand [mailto:[EMAIL PROTECTED]]
 Sent: Monday, April 08, 2002 9:03 AM
 To: [EMAIL PROTECTED]
 Subject: Re: reporting on backup successes/failures


 Let me begin by saying that this rant is in no way meant to be directed to
 Mark Stapleton, his advice has helped me many times and he is a
 great asset
 to the group, he never hesitates to help.

 Be aware: "q event" ONLY reports on the success or failure of the backup
 script being run, NOT the actual job.

 Example: Last week the method below was used, as it is every day, to check
 the status of scheduled jobs, and it reported the following.

 04/05/2002 01:30:00 04/05/2002 02:01:54 130AM_EXCHANGE03_DB- EXCHANGE03_DB
 Completed

 But when you actually look into the job you still see it running as a
 session.

    Sess  Comm.   Sess    Wait   Bytes  Bytes   Sess  Platform   Client Name
  Number  Method  State   Time   Sent   Recvd   Type
  ------  ------  ------  -----  -----  ------  ----  ---------  -------------
   3,102  Tcp/Ip  RecvW   0 S    8.0 K  11.6 G  Node  TDP MSExc- EXCHANGE03_DB
                                                      hgV2 NT

 And the activity log only shows that the Directory completed, the
 Information Store is still being backed up.

 I spoke to level 2 support and I understand that q event only
 reports on the
 script, he suggested that I need to put some kind of wait statement in the
 script to not let it complete until the job actually completes.

 I am not very happy with his suggestion, I am querying the event, I am not
 running a q script!!! I don't want a Band-Aid, I just want a q event that
 works!!!

 Is there another solution within Tivoli to query the actual events?

 BTW, we are running TSM 4.1.3 on W2K server and the example is
 for Exchange
 TDP 2.2 on Exchange 5.5.

 Thanks all,
 Mark B.

 -Original Message-
 From: Mark Stapleton [mailto:[EMAIL PROTECTED]]
 Sent: Sunday, April 07, 2002 10:39 PM
 To: [EMAIL PROTECTED]
 Subject: Re: reporting on backup successes/failures


 On Tue, 5 Mar 2002 10:26:18 -0600, Glass, Peter
 [EMAIL PROTECTED] wrote:

 What would be the best way to produce a report that shows the number of
 client backup successes vs. failures, for a given day?

 This is not as hard as some folks seem to want to make it:

   q event * * begind=start_date endd=end_date

 If you want it in script form:

   def script backup_check 'q event * * $1 $2 > /tmp/backup_check'

 You run it by inputting

   run backup_check 04/01/2002 04/03/2002

 --
 Mark Stapleton ([EMAIL PROTECTED])

 Confidentiality Note: The information transmitted is intended only for the
 person or entity to whom or which it is addressed and may contain
 confidential and/or privileged material. Any review, retransmission,
 dissemination or other use of this information by persons or
 entities other
 than the intended recipient is prohibited. If you receive this in error,
 please delete this material immediately.




Re: Script information

2002-04-08 Thread Mr. Lindsay Morris

We get pretty darn close to what you want with the viewacct script.
(Great minds think alike?)

We get the information from the accounting log.
See http://www.servergraph.com/techtip3.htm


-
Mr. Lindsay Morris
CEO, Servergraph
www.servergraph.com
859-253-8000 ofc
425-988-8478 fax


 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of
 Ochs, Duane
 Sent: Monday, April 08, 2002 4:35 PM
 To: [EMAIL PROTECTED]
 Subject: Script information


 Does anyone have a script out there that produces the following output ?

 Nodename:  Schedule:  Start Time:  End Time:  Bytes Transferred:  Transfer
 Time:  Transfer Rate:  Elapsed Time:  Aggregate Rate:

 Based on an input time. This is kind of a merge between a q
 event and a q
 actlog.


 Duane Ochs
 Systems Administration
 Quad/Graphics Inc.
 414.566.2375




Re: switches for DSMADMC command

2002-04-04 Thread Mr. Lindsay Morris

I looked in the client manuals first, yup...
then I looked in the admin guide, then the admin reference...
I was looking in the index for:
dsmadmc
parameter
command line
so maybe an index entry would have done it for me.

-
Mr. Lindsay Morris
CEO, Servergraph
www.servergraph.com
859-253-8000 ofc
425-988-8478 fax


 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of
 Andrew Raibeck
 Sent: Wednesday, April 03, 2002 11:42 PM
 To: [EMAIL PROTECTED]
 Subject: Re: switches for DSMADMC command


 Hmmm...

 OK, that's at least two people who are pretty savvy, who couldn't find
 this info easily.

 Just out of curiosity, where did you try to look? I know the client does
 have a man page (which would help on UNIX only anyway)... did you try
 using the Admin CLI's online HELP faciltiy (enter HELP from the command
 line)? Which books did you check?

 The reason I am asking is to see whether there is any common ground for
 where you did go to look, and maybe we can address the issue. For example,
 if you tried the client books, then maybe it would be helpful if we put a
 pointer in the client books to use the Admin CLI's online HELP or to see
 the Admin Reference.

 Regards,

 Andy

 Andy Raibeck
 IBM Software Group
 Tivoli Storage Manager Client Development
 Internal Notes e-mail: Andrew Raibeck/Tucson/IBM@IBMUS
 Internet e-mail: [EMAIL PROTECTED]

 The only dumb question is the one that goes unasked.
 The command line is your friend.
 Good enough is the enemy of excellence.




 Malbrough, Demetrius [EMAIL PROTECTED]
 Sent by: ADSM: Dist Stor Manager [EMAIL PROTECTED]
 04/03/2002 13:38
 Please respond to ADSM: Dist Stor Manager


 To: [EMAIL PROTECTED]
 cc:
 Subject:Re: switches for DSMADMC command



 There are some in the beginning of the Admin Ref book, e.g. -id,
 -password,
 -quiet, -consolemode etc...

 -Original Message-
 From: Joe Cascanette [mailto:[EMAIL PROTECTED]]
 Sent: Wednesday, April 03, 2002 1:56 PM
 To: [EMAIL PROTECTED]
 Subject: switches for DSMADMC command


 Where can I find more switches for this command?

 Thanks

 Joe

 -Original Message-
 From: MC Matt Cooper (2838) [mailto:[EMAIL PROTECTED]]
 Sent: Monday, April 01, 2002 10:08 AM
 To: [EMAIL PROTECTED]
 Subject: Re: NEED HELP CONTROLLING SCRIPT OUTPUT, FIELD LENGTHS


 Ike,
 I got it to work with some help from Dennis Glover.  He sent me an
 example of something that worked.  The big difference was that the -COMMA
 option MUST BE LISTED BEFORE THE MACRO option.   So
 DSMADMC -ID=x -PA=x -COMMA MACRO DD:D2    works fine, but
 DSMADMC -ID=x -PA=x MACRO DD:D2 -COMMA    does not!
 Matt




Re: Monthly Backups, ...again!

2002-04-04 Thread Mr. Lindsay Morris

This keeps coming up.  It's the hardest thing about TSM, to sell users on
the way it works.

Tivoli's Storage Vision whitepaper has a comparison of the benefits you get
by NOT using this grandfather-father-son technique, but I wish somebody at
Tivoli would come up with better material to help us sell the
incremental-forever - oops, progressive backup - methodology to non-techie
users.  (Maybe it's there and I just don't know where to find it...?)

I think Kelly Lipp has a good article on archiving and when it's sensible -
maybe he'll post that link here again.

Also, maybe some users have specific oddball scenarios they have run into
that require surprising policy settings. It would be interesting to hear
about those.  For example: a user goes on vacation for two weeks and manages
to trash her email file the day she leaves. She doesn't notice; Lotus
touches the damaged file every day, so it keeps getting backed up. They
don't keep 14 versions, so by the time she gets back, the only good version
(15 days old) has rolled off (expired).

-
Mr. Lindsay Morris
CEO, Servergraph
www.servergraph.com
859-253-8000 ofc
425-988-8478 fax


 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of
 Marc Levitan
 Sent: Thursday, April 04, 2002 8:51 AM
 To: [EMAIL PROTECTED]
 Subject: Monthly Backups, ...again!


 A question was brought up while discussing retention policies.

 Currently we have the following retentions:

 PolicyPolicyMgmt  Copy  Versions Versions   Retain  Retain
 DomainSet Name  Class Group Data DataExtraOnly
 NameName  NameExists  Deleted Versions Version
 - - - -    ---
 COLD  ACTIVECOLD  STANDARD 215  30

 NOVELLACTIVEDIRMC STANDARD301  120 365
 NOVELLACTIVESTANDARD  STANDARD301  120 365

 RECON ACTIVEDIRMC STANDARD363   75 385
 RECON ACTIVEMC_RECON  STANDARD261   60 365

 STANDARD  ACTIVEDIRMC STANDARD261   60 365
 STANDARD  ACTIVESTANDARD  STANDARD261   60 365


 UNIX  ACTIVEMC_UNIX   STANDARD301   60  30


 I believe that this provides for daily backups for over a month.

 There was a request to have the following:
 1)   Daily backups for a week.
 2)   Weekly backups for a month.
 3)   Monthly backups for a year.

 I believe we are providing 1  2.  We are providing daily backups for a
 month.

 How can I provide monthly backups for a year?
 I know that I could take monthly archives, but this would exceed
 our backup
 windows and would increase our resources ( db, tapes, etc.)
 Also, I know we could lengthen our retention policies.
 Also we could create backup sets. (tons of tapes!)

 How are other people handling this?

 Thanks,


 Marc Levitan
 Storage Manager
 PFPC Global Fund Services




Re: Priming disk pools

2002-04-04 Thread Mr. Lindsay Morris

There is a planned feature - I don't know if it's in 5.1 or not - called move
nodedata, that will let you do exactly what you want.  It will probably be
something like
    move nodedata node1,node2,... from-pool to-pool
and you'll be able to parallelize that between several tape drives, or
restore only archives / only backups, or restore just a particular
filespace.

Sorry, I don't know more about the delivery date.
-
Mr. Lindsay Morris
CEO, Servergraph
www.servergraph.com
859-253-8000 ofc
425-988-8478 fax


 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of
 Kliewer, Vern
 Sent: Thursday, April 04, 2002 10:25 AM
 To: [EMAIL PROTECTED]
 Subject: DR: Priming disk pools


 I am preparing our DR plan and procedures. I have the bulk of it done and
 tested (still have to restore more kinds of clients).

 One of the things that happens in testing, and will likely happen
 during our
 Business Recovery exercises is that I will have time between getting the
 TSM server running and when the clients will be ready to recover. Since I
 hope to have several tape drives available to the TSM recovery box, I was
 wondering if there is any way to prime the disk pools with data from the
 COPYPOOL?

 Also, after all the DRM scripts are done, the disk pools are largely
 disabled. When is it appropriate to enable them?

 Werner Kliewer
 in Winnipeg




Re: TSM Addons

2002-04-04 Thread Mr. Lindsay Morris

David, I suggest you check the banner ads at www.adsm.org (esp. ours ;-} )

-
Mr. Lindsay Morris
CEO, Servergraph
www.servergraph.com
859-253-8000 ofc
425-988-8478 fax


 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of
 David E Ehresman
 Sent: Thursday, April 04, 2002 11:07 AM
 To: [EMAIL PROTECTED]
 Subject: TSM Addons


 What TSM add-on commercial programs are available for
 managing/monitoring a TSM server running on AIX?




Re: Technical comparisons

2002-04-03 Thread Mr. Lindsay Morris

Paul Seay wrote an excellent comparison to the list back in January.
http://msgs.adsm.org/cgi-bin/get/adsm0201/770.html
and
http://msgs.adsm.org/cgi-bin/get/adsm0201/734.html

-
Mr. Lindsay Morris
CEO, Servergraph
www.servergraph.com
859-253-8000 ofc
425-988-8478 fax


 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of
 Jolliff, Dale
 Sent: Wednesday, April 03, 2002 8:12 AM
 To: [EMAIL PROTECTED]
 Subject: Technical comparisons


 Does anyone have a link to some detailed white paper sort of comparisons
 between TSM and the leading competitors in storage management?

 I have a customer specifically asking for comparisons between Veritas and
 Tivoli - and the most recent google search turned up several marketing
 pieces from Veritas and one Gartner comparison on old versions of ADSM/TSM
 (version 3.x)..

 Surely someone else has already invented this wheel?




Re: TOTAL Data on the Backup System

2002-04-03 Thread Mr. Lindsay Morris

See "query occupancy", but you'll have to run it through a script to add it
up into one number.
Servergraph/TSM does this daily, and breaks it out by primary/copy pool, and
by backup / archive / HSM data - and also shows you the trend over time for
each of these, so you can see if something's starting to explode.

If you want to write your own script for this, look at
http://www.servergraph.com/techtip.shtml, which should help you get clean
query output.
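If you do write your own, the summing step might be sketched as below. Everything here is an assumption for illustration: occ.csv stands in for comma-delimited "q occ" output, and the column layout (physical MB in field 7) will vary by server level, so check yours before relying on it.

```shell
#!/bin/sh
# Create a stand-in for comma-delimited "q occ" output.
# (Sample data only -- real output columns may differ by TSM version.)
cat > occ.csv <<'EOF'
NODE1,Bkup,/home,DISKPOOL,1,1200,5120.5
NODE1,Arch,/home,TAPEPOOL,2,300,2048.0
NODE2,Bkup,/data,TAPEPOOL,3,9000,10240.25
EOF

# Sum the physical-MB column (field 7 here) into one number, in GB.
awk -F, '{mb += $7} END {printf "Total stored: %.2f GB\n", mb/1024}' occ.csv
```

Feeding it real data is then just a matter of piping dsmadmc -COMMAdelimited output through the same awk, after stripping header and message lines.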


-
Mr. Lindsay Morris
CEO, Servergraph
www.servergraph.com
859-253-8000 ofc
425-988-8478 fax


 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of
 Pétur Eyþórsson
 Sent: Wednesday, April 03, 2002 12:28 PM
 To: [EMAIL PROTECTED]
 Subject: TOTAL Data on the Backup System


 Hi, can anyone give me a select statement which outputs the total amount of
 data on the TSM backup system?



 Kvedja/Regards
 Petur Eythorsson
 Taeknimadur/Technician
 IBM Certified Specialist - AIX
 Tivoli Storage Manager Certified Professional
 Microsoft Certified System Engineer

 [EMAIL PROTECTED]

  Nyherji Hf  Simi TEL: +354-569-7700
  Borgartun 37105 Iceland
  URL:http://www.nyherji.is




Re: domain ?

2002-04-03 Thread Mr. Lindsay Morris

I recommend using one domain where possible.  I've seen sites that have gone
crazy with domains, making them hard to manage.

I think it's NOT useful to have OS-specific domains, like one for NT and
another for AIX, though I see this done a lot.

It may be useful to have domains for different parts of the business, if you
need to give someone else limited TSM access rights to those nodes.

Another reason for breaking your site up into domains is that some nodes
need different policies than other nodes.   But consider: some FILES on node
X need different policies than other FILES on the same node X.  Use client
option sets or the dsm.opt files to set up these policies (either at the
node level, the file level, or both), not domains.

IMHO, anyway.

-
Mr. Lindsay Morris
CEO, Servergraph
www.servergraph.com
859-253-8000 ofc
425-988-8478 fax


 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of
 Wholey, Joseph (TGA\MLOL)
 Sent: Wednesday, April 03, 2002 12:08 PM
 To: [EMAIL PROTECTED]
 Subject: domain ?


 Is there any reason why I wouldn't create one huge domain and
 have production, development and QA servers backing up to it?
 What are the pros and cons of 1 domain vs. many?  Thanks in advance.

 Regards,
 Joe Wholey
 TGA Distributed Data Services
 Merrill Lynch
 Phone: 212-647-3018
 Page:  888-637-7450
 E-mail: [EMAIL PROTECTED]




Re: TSM Server Sizing

2002-04-03 Thread Mr. Lindsay Morris

I think it's better to make a rough guess, or even throw a dart, then grow
the system as needed.  I've seen sites spend literally months doing NOTHING
except trying to get the sizing just right.  That's misguided effort.

What you DO need is some way to monitor growth rates, so you can predict
when you'll outgrow this or that piece of the puzzle.

-
Mr. Lindsay Morris
CEO, Servergraph
www.servergraph.com
859-253-8000 ofc
425-988-8478 fax


 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of
 Brenda Collins
 Sent: Wednesday, April 03, 2002 11:51 AM
 To: [EMAIL PROTECTED]
 Subject: TSM Server Sizing


 Hi!

 Does anyone have any good tips on sizing a TSM server appropriately?  I do
 have the sizing tool but there is so much manual work in trying
 to populate
 that to get any information, particularly when you don't have
 documentation
 for every environment.  I would welcome some comparisons if you care to
 share.  I will have automated tape libraries at each location
 also, ranging
 from a L180, L700 or Powderhorn.

 I have 5 different sites to size:
 1.) 40 servers - 1.5 tb storage total
 2.) 50 servers - 1 tb storage total
 3.) 300 servers - 11 tb. storage
 4.) 125 servers - 6 tb. storage
 5.) Moving from Mainframe to Unix - and merging two different backup
 environments into one.
  TSM on OS/390 - Current occupancy is 18 TB, TSM database = 18gb
  Upstream - 325 clients

 Thanks in advance for any advice you can offer.

 Brenda Collins
 Storage Administrator
 ING - Americas Infrastructure Services
 (612) 342-3839  (Phone)
 (612)510-0187  (Pager)
 [EMAIL PROTECTED]




Re: switches for DSMADMC command

2002-04-03 Thread Mr. Lindsay Morris

TSM Admin Reference, Chapter 3, "Using the Command Line Interface".
I've spent a long time looking for this, too.

-
Mr. Lindsay Morris
CEO, Servergraph
www.servergraph.com
859-253-8000 ofc
425-988-8478 fax


 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of
 Joe Cascanette
 Sent: Wednesday, April 03, 2002 4:26 PM
 To: [EMAIL PROTECTED]
 Subject: Re: switches for DSMADMC command


 Thanks, I'll check again.

 Joe

 -Original Message-
 From: Andrew Raibeck [mailto:[EMAIL PROTECTED]]
 Sent: Wednesday, April 03, 2002 3:44 PM
 To: [EMAIL PROTECTED]
 Subject: Re: switches for DSMADMC command


 In the TSM Administrator's Reference.

 Regards,

 Andy

 Andy Raibeck
 IBM Software Group
 Tivoli Storage Manager Client Development
 Internal Notes e-mail: Andrew Raibeck/Tucson/IBM@IBMUS
 Internet e-mail: [EMAIL PROTECTED]

 The only dumb question is the one that goes unasked.
 The command line is your friend.
 Good enough is the enemy of excellence.




 Joe Cascanette [EMAIL PROTECTED]
 Sent by: ADSM: Dist Stor Manager [EMAIL PROTECTED]
 04/03/2002 12:56
 Please respond to ADSM: Dist Stor Manager


 To: [EMAIL PROTECTED]
 cc:
 Subject:switches for DSMADMC command



 Where can I find more switches for this command?

 Thanks

 Joe

 -Original Message-
 From: MC Matt Cooper (2838) [mailto:[EMAIL PROTECTED]]
 Sent: Monday, April 01, 2002 10:08 AM
 To: [EMAIL PROTECTED]
 Subject: Re: NEED HELP CONTROLLING SCRIPT OUTPUT, FIELD LENGTHS


 Ike,
 I got it to work with some help from Dennis Glover.  He sent me an
 example of something that worked.  The big difference was that the -COMMA
 option MUST BE LISTED BEFORE THE MACRO option.   So
 DSMADMC -ID=x -PA=x -COMMA MACRO DD:D2    works fine, but
 DSMADMC -ID=x -PA=x MACRO DD:D2 -COMMA    does not!
 Matt




Re: Anyone know how to have TSM script read an input file?

2002-04-02 Thread Mr. Lindsay Morris

We put this feature (user notification of failed backups) into our
Servergraph/TSM product a while ago.  Some gotchas you might want to think
about, if you're developing it yourself:

  --for laptops, you don't want to send this notification immediately. Maybe
the user is out of town for a few days.  So each node needs a delay-time:
for critical servers, delay zero days; desktops, 2 days; laptops, 7 days, or
something like that.

  --the notification ought to give the user a way to fix the easy problems
(e.g., is the scheduler running? etc.)

  --if the situation continues for too long, an escalation message ought to
be sent to you, the TSM administrator.

  --you want to ignore antique filespaces, like the one containing an old
retired PC that the user keeps around "just in case I need a file from it".
Just food for thought. Maybe you don't need these features.
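If you do build it yourself, the per-node delay and escalation idea above can be sketched in a few lines of awk. The node classes, thresholds, and input format here are all invented for illustration, not anything TSM provides:

```shell
#!/bin/sh
# missed.txt: node name, class, days since last good backup (made-up data).
cat > missed.txt <<'EOF'
FILESERVER1 server 1
PC-ALICE desktop 3
LT-BOB laptop 4
LT-CAROL laptop 20
EOF

# Pick a grace period and an escalation threshold per class (assumed
# values), then decide what to do for each node.
awk '
  $2 == "server"  { delay = 0; escalate = 5 }
  $2 == "desktop" { delay = 2; escalate = 7 }
  $2 == "laptop"  { delay = 7; escalate = 14 }
  {
    if ($3 > escalate)   print $1 ": escalate to TSM admin"
    else if ($3 > delay) print $1 ": mail the user"
    else                 print $1 ": wait"
  }' missed.txt > decisions.txt
cat decisions.txt
```

The real work, of course, is generating missed.txt from the activity log or the summary table every morning.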


Mr. Lindsay Morris
CEO
Applied System Design
www.servergraph.com
859-253-8000 ofc
425-988-8478 fax



 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of
 Jane Bamberger
 Sent: Tuesday, April 02, 2002 3:09 PM
 To: [EMAIL PROTECTED]
 Subject: Re: Anyone know how to have TSM script read an input file?


 - Original Message -
 From: MC Matt Cooper (2838) [EMAIL PROTECTED]
 To: [EMAIL PROTECTED]
 Sent: Tuesday, April 02, 2002 11:18 AM
 Subject: Anyone know how to have TSM script read an input file?


  Hello all,
  In the life of ever changing requirements... I must
 include in an
  e-mail sent to a PC user that they missed a backup, exactly how long it
 has
  been since they were backed up.
  I am scanning the log to see what schedules were
 MISSED.  But
  now for each node that was missed I need to know the last date of a
 backup,
  not a 'accessed by TSM' (which is in the NODE DB) .  I see that the
 SUMMARY
  DB has ALL the backups start and end dates.  But what would be
 the logical
  way to approach this?  I would think list off the nodes that
 failed and do
  some sort of select of the SUMMARY DB to find the last
 successful backup.
  1) I am not sure how to ask for JUST the last successful backup
 of 1 node.
  2) I am truly at a loss to figure out how to have this script read in a
 list
  of nodes to find out there last successful backups
 
  Has anyone done anything like this?  An example could take me a
 long way.
  I am using TSM 4.1.5 on z/OS 1.1.   If I can get all the info
 into records
 I
  am using SAS to format the e-mail messages.   My TSM system has
 about 230
  nodes in an 80% full 25GB data base.  Hopefully I can do this in an
  efficient manner...
  Thanks in advance for all your help!
  Matt
 




Re: NEED HELP CONTROLLING SCRIPT OUTPUT, FIELD LENGTHS

2002-04-01 Thread Mr. Lindsay Morris

We have a short article and a couple of scripts that will help you clean up
TSM query output so you can work on it with grep / awk / spreadsheets, etc.
See
http://www.servergraph.com/techtip.shtml
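The gist of such a cleanup is sketched below with made-up sample output. The banner text and message-number patterns are assumptions; tune them to whatever your dsmadmc version actually prints.

```shell
#!/bin/sh
# A stand-in for raw admin-CLI output captured to a file.
cat > raw.out <<'EOF'
IBM Tivoli Storage Manager
Command Line Administrative Interface - Version 4, Release 2

ANS8000I Server command: 'select node_name from nodes'

NODE1
NODE2

ANS8002I Highest return code was 0.
EOF

# Keep lines that are not blank, not ANS/ANR messages, and not banner text.
grep -v -e '^$' -e '^AN[SR][0-9]' -e 'Tivoli' -e 'Interface' raw.out > clean.out
cat clean.out
```

After this, clean.out contains only data rows, which grep / awk / a spreadsheet import can handle directly.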



Mr. Lindsay Morris
CEO
Applied System Design
www.servergraph.com
859-253-8000 ofc
425-988-8478 fax



 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of
 MC Matt Cooper (2838)
 Sent: Monday, April 01, 2002 10:08 AM
 To: [EMAIL PROTECTED]
 Subject: Re: NEED HELP CONTROLLING SCRIPT OUTPUT, FIELD LENGTHS


 Ike,
 I got it to work with some help from Dennis Glover.  He sent me an
 example of something that worked.  The big difference was that the -COMMA
 option MUST BE LISTED BEFORE THE MACRO option.   So
 DSMADMC -ID=x -PA=x -COMMA MACRO DD:D2    works fine, but
 DSMADMC -ID=x -PA=x MACRO DD:D2 -COMMA    does not!
 Matt

 -Original Message-
 From: Ike Hunley [mailto:[EMAIL PROTECTED]]
 Sent: Thursday, March 28, 2002 5:44 AM
 To: [EMAIL PROTECTED]
 Subject: Re: NEED HELP CONTROLLING SCRIPT OUTPUT, FIELD LENGTHS

 Matt,

 I have not found a way to get my data on one line either.  I
 input the data
 into REXX code to reformat it the way I want.  What would you like your
 output to look like? I could send you a REXX exec.

 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of
 MC Matt Cooper (2838)
 Sent: Wednesday, March 27, 2002 2:25 PM
 To: [EMAIL PROTECTED]
 Subject: Re: NEED HELP CONTROLLING SCRIPT OUTPUT, FIELD LENGTHS



 I have tried everyone's suggestions.  I believe that because I have an AS
 statement in the script, it ignores the attempts to fix the output into one
 line per node.  The -COMMAdelimited, -TABdelimited, and -OUT options didn't
 work.  I was able to get a one-line output, but with a title line and some
 other 'extra' lines of TSM header and messages.  The only control I seem to
 have over this is the suggestion from Paul: using an AS "big area".  This
 whole 'programming' area seems to go by a lot of undocumented rules.  Why
 don't they document them somewhere?  Or give a direct reference to which
 other product doc to look at?  I am able to directly control the length of
 a numeric output with a 'decimal(xx)' statement.  What about character
 output?  I am still disappointed that I cannot seem to get an output that
 is nothing more than what I really want: a 12-character node name, a
 delimiter, a 4-digit number of days since last access, a delimiter, and a
 20-character contact field, then the next line.  The best I came up with is
 as follows.
 SCRIPT...

 select node_name as NODE, -
   cast((current_timestamp-lastacc_time)days as decimal(4)) as DAYS, -
   contact as CONTACT from nodes where -
   cast((current_timestamp-lastacc_time)days as decimal) >= 7

 OUTPUT FILE .

 ADSTAR Distributed Storage Manager

 Command Line Administrative Interface - Version 3, Release 1, Level 0.7

 (C) Copyright IBM Corporation, 1990, 1999, All Rights Reserved.



 ANS8000I Server command: 'select node_name as NODE,
 cast((current_timestamp-


 NODE            DAYS  CONTACT
 --------------  ----  --------------------
 AG570             58  Elliott/desktop
 DEFIANT           12  R.Schulte, D.Harrison
 DSS1OLD          343  Connie Brooks
 WIN2KAD           34  Karlene Michael

 ANS8000I Server command: 'COMMIT'



 ANS8002I Highest return code was 0.


 -Original Message-
 From: Seay, Paul [mailto:[EMAIL PROTECTED]]
 Sent: Wednesday, March 27, 2002 11:13 AM
 To: [EMAIL PROTECTED]
 Subject: Re: NEED HELP CONTROLLING SCRIPT OUTPUT, FIELD LENGTHS

 That will help, but you can also do the AS [ lots of spaces ]
 to lengthen
 the output field.  What we really need is a set displaymode=fixedraw.

 This is an example of something that I do:

 select stgpool_name as "Storage Pool Name                    ",
 cast(sum(est_capacity_MB*pct_utilized/100/1024) as decimal(7,3)) as "Total
 GB in Pool", cast(avg(est_capacity_MB*pct_utilized/100/1024) as
 decimal(7,3)) as "AVG GB / Tape", cast(count(volume_name) as decimal(4,0))
 as "Tapes" from volumes where stgpool_name like 'CPY%' or stgpool_name like
 'TAPE%' group by stgpool_name

 Notice that the stgpool_name field is lengthened to prevent the wrap by
 adding the spaces.

 -Original Message-
 From: Rejean Larivee/Quebec/IBM [mailto:[EMAIL PROTECTED]]
 Sent: Wednesday, March 27, 2002 10:46 AM
 To: [EMAIL PROTECTED]
 Subject: Re: NEED HELP CONTROLLING SCRIPT OUTPUT, FIELD LENGTHS


 Hello Matt,
 remove the -TAB and use -DISPLAY=LIST instead.
 I believe this is what you are looking for.

 -
 Rejean Larivee
 IBM TSM/ADSM Level 2 Support




   MC Matt Cooper

Re: Is there a way to email notifications in TSM?

2002-03-22 Thread Mr. Lindsay Morris

Just want to set the record straight before misconceptions spread:

1. Servergraph/TSM does a GREAT DEAL MORE than send email;

2. For a single TSM server, the price is MUCH less than what John was quoted
for his large and complex site;

3. Every TSM-er has to build a monitoring system just to survive, and
indeed, there are fairly easy ways to do this.  But it's hard to show TRENDS
OVER TIME, like Servergraph does. And that's IMPORTANT. Information without
context is often useless.

For example, maybe your CPU is at 95% for the last 5 minutes. Is that a
problem, or not? You don't know. Maybe it's just a momentary spike.  You
need to see it all through last night's backups, and today's migration /
expiration / etc, to decide if you really need to buy more CPU horsepower.

And it's the same for every other measurement: NIC utilization, tape library
speed, client throughput...  You can't make decisions without some context.


So: it's not THAT expensive, and it's not something you can easily write.
We spent a lot of time building Servergraph/TSM, mainly to help promote an
excellent product, Tivoli Storage Manager.

Just to set the record straight.
Thanks.


John Bremer wrote:
 You're right there are some expensive solutions.  I believe ServerGraph
has
 the e-mail notification capability, but in our environment we were quoted
 $35,000 for the initial installation, plus 15% maintenance.  TDS for SMA
 does not have e-mail notice capablity.

 If you check the ADSM.org archive you will find historical responses to
 this query which give details on query the activity logs and generating
 your own e-mail notices.

 We use a DB2 database loaded every morning after backups, which we query
 and generate mail from.

 So I recommend home-grown as the best, least expensive solution.




Mr. Lindsay Morris
CEO
Applied System Design
www.servergraph.com
859-253-8000 ofc
425-988-8478 fax



Re: Select Title Lines: NOT

2002-02-17 Thread Mr. Lindsay Morris

We have a tech tip on this issue, with a couple of helpful scripts, at
http://www.servergraph.com/techtip.shtml.


Mr. Lindsay Morris
CEO
Applied System Design
www.servergraph.com
859-253-8000 ofc
425-988-8478 fax



 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of
 Seay, Paul
 Sent: Sunday, February 17, 2002 1:11 PM
 To: [EMAIL PROTECTED]
 Subject: Select Title Lines: NOT


 Has anyone figured out how to stop the title lines from coming out on when
 you issue a select statement?

 A common thing to do is pipe the output to a file to post process with a
 program.  It is a pain to have code around extraneous data.

 Paul D. Seay, Jr.
 Technical Specialist
 Naptheon, INC
 757-688-8180




Re: Backup Sets for Long Term Storage

2002-02-13 Thread Mr. Lindsay Morris

Some people have worried that their 7-year archive tapes might only have a
5-year shelf life.
It seems to me that reclamation would, over the years, do enough
tape-to-tape copies to detect when a tape was going bad.
Then you would presumably move data off the bad tape, discard it, and your
archive would be safe on a good tape.

But with backupsets, there's no reclamation, so this wouldn't happen.

A small concern IMHO. I just wanted to muddy the waters a bit.  ;-}


Mr. Lindsay Morris
CEO
Applied System Design
www.servergraph.com
859-253-8000 ofc
425-988-8478 fax



 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of
 Haskins, Mike
 Sent: Wednesday, February 13, 2002 3:06 PM
 To: [EMAIL PROTECTED]
 Subject: Re: Backup Sets for Long Term Storage


 Tom, your last comment is actually the reason I was considering backup
 sets as a top contender for long term storage.  Generate a backup set,
 the owner signs for the tapes, and they're gone -- reserving library
 space and volume ranges for data that is actively used or needed for DR.

 The inability to move a backup set to a new generation of media, as Bill
 noted, is something I hadn't considered!

 Mike Haskins
 Agway, Inc


 -Original Message-
 From: Kauffman, Tom [mailto:[EMAIL PROTECTED]]
 Sent: Wednesday, February 13, 2002 1:20 PM
 To: [EMAIL PROTECTED]
 Subject: Re: Backup Sets for Long Term Storage


 Mike, if I were going to do this I'd use DLT, based upon the manufacturer's
 propaganda/documentation.

 OTOH, here's what I've done:

 1) set up archive copygroups with retentions of 1 year through 7 years
 (seven groups) all pointed to the same storage pool chain (disk and
 tape).
 2) treat the storage just like everything else -- one copy on-site, and
 a
 copy pool for off-site.

 I run reclaims as required and otherwise exercise the LTO media once or
 twice a month.

 If I were to do the backup set process, I'd make bloody sure that the
 owner
 of the data had the tapes AND HAD SIGNED FOR THEM so if they got lost or
 damaged I wouldn't be in the loop.

 Tom Kauffman
 NIBCO, Inc

  -Original Message-
  From: Haskins, Mike [mailto:[EMAIL PROTECTED]]
  Sent: Tuesday, February 12, 2002 7:10 PM
  To: [EMAIL PROTECTED]
  Subject: Backup Sets for Long Term Storage
 
 
  Our TSM server has a 3494 library with 3590 tape drives.  Now
  faced with
  meeting long term storage requirements (7+ years), I am looking at
  generating backup sets to accomplish this.  Since backup sets can be
  used for stand-alone restores from a backup-archive client, I am
  thinking that a different media type would be better than
  3590.  There's
  not much chance that many of my nodes could have access to a
  3590 drive.
  DLT or 8mm seem more appropriate.  Any experiences or
  opinions would be
  appreciated.
 




Re: TSM and capacity planning

2002-01-23 Thread Mr. Lindsay Morris

Justin, take a look at this node-by-node data table:
http://www.servergraph.com/gallery/adsm.Node_Data.Sortable_Node_Data.html

The point is this:
query contents will indeed be a very expensive query to run.
query occupancy can break out TSM storage used by backup, archive, and
HSM storage.
query filespace can show you local storage used (i.e. on the node's
local disks, not TSM storage) by the node.

The two sets of data are synergistic: for example, if node X is using 5 GB
locally (from q files), but has somehow managed to amass 100GB of
TSM storage (from q occ), then it has a hog_factor of 20 - it's using 20
times more space on TSM than it owns locally!

You should do this calculation on all your nodes, and sort them by
hog_factor.  There always seem to be a couple that are exorbitant.

For such nodes, you'd immediately want to see if they are archiving
everything weekly, or if they have very aggressive backup policies.
Separating backup storage from archive storage (with q occ f=d) shows you
instantly which is which (though the table above, unfortunately, is not
configured to show that).

You may want to take advantage of this when you start writing Perl scripts.
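A toy version of that hog-factor calculation, in awk rather than Perl (the idea translates). The GB figures below are invented; in practice they would come from summing "q filespace" and "q occ" output per node:

```shell
#!/bin/sh
# local_gb.csv: node, GB used on its own disks (stand-in for "q filespace").
cat > local_gb.csv <<'EOF'
NODEX,5
NODEY,50
EOF
# tsm_gb.csv: node, GB held in TSM storage (stand-in for "q occ").
cat > tsm_gb.csv <<'EOF'
NODEX,100
NODEY,75
EOF

# hog factor = TSM storage / local storage; list worst offenders first.
awk -F, '
  NR == FNR { local[$1] = $2; next }
  { printf "%s hog_factor=%.1f\n", $1, $2 / local[$1] }
' local_gb.csv tsm_gb.csv > hog.txt
sort -t= -k2 -rn hog.txt
```

Nodes at the top of the sorted list are the ones whose backup or archive policies deserve a closer look.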

Cheers, and good luck.


Mr. Lindsay Morris
CEO
Applied System Design
www.servergraph.com
859-253-8000 ofc
425-988-8478 fax



 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of
 Justin Derrick
 Sent: Wednesday, January 23, 2002 1:36 PM
 To: [EMAIL PROTECTED]
 Subject: Re: TSM and capacity planning


 My approach is very similar, but probably a little more database
 intensive...

 dsmadmc -id=uid -password=pass -outfile=tsmrpt -tabdelimited "select
 filespace_name,sum(file_size) as \"File Size\" from contents group by
 filespace_name"

 The resulting file should be able to be parsed easily and imported into
 Excel for down-to-the-byte accuracy on how large each of your filespaces
 are.  I'm going to look at this a little more closely when I get home next
 week, and maybe write a little Perl script to help organize it into
 something meaningful.

 -JD.

 Hehe... When using SQL BackTrack, it isn't real comforting to look at all
 those filespaces (query filespace) with big goose eggs next to them.  One
 way to do it would be to not view the filespaces to calculate how much is
 being backed up on a node but to use a select statement:
 
 select stgpool_name, sum(physical_mb) from occupancy where node_name
 ='NODE' group by stgpool_name
 
 Querying the filespaces will give you a capacity based on an
 estimate, and it will take a while to compute the percentage
 utilized anyway.
 
 George Lesho
 Storage/System Admin
 AFC Enterprises
 
 
 
 
 
 
 MacMurray, Andrea (CC-ETS Ent Storage Svcs)
 [EMAIL PROTECTED]@VM.MARIST.EDU on 01/22/2002 11:40:46
 AM
 
 Please respond to ADSM: Dist Stor Manager [EMAIL PROTECTED]
 
 Sent by:  ADSM: Dist Stor Manager [EMAIL PROTECTED]
 
 
 To:   [EMAIL PROTECTED]
 cc:(bcc: George Lesho/Partners/AFC)
 Fax to:
 Subject:  TSM and capacity planning
 
 
 
 Hi everybody,
 I have been tasked with capacity planning for TSM. We are running on AIX
 4.3.3, TSM version 4.2.1.9. Most of the information is in the log; with
 some scripting you can pull all this data into a database. The problem I
 have is SQL BackTrack: all those database instances back up to TSM without
 leaving any stats. I do know where the BackTrack logs are, but they are
 not very pretty. So my question is whether anybody out there has had to do
 something like this before.
 Thanks in advance
 
 
 Andrea Mac Murray
 Sen. Systems Administrator
 ConAgra Foods, Inc.
 7300 World Communication Drive
 Omaha,NE 68122
 Tel: (402) 577-3603
 [EMAIL PROTECTED]




Re: TSM and capacity planning

2002-01-23 Thread Mr. Lindsay Morris

Well, we love to support the TSM community, and are happy to share some of
the utility scripts --
but these three depend on having our database filled with info.

You can use the demo free for 30 days though, and that may be enough to
solve your problems, at least in the short term.
Call me for more info, or see our banner ad at www.adsm.org - this is not
supposed to be a marketing list.

Thanks, Jane.

Mr. Lindsay Morris
CEO
Applied System Design
www.servergraph.com
859-253-8000 ofc
425-988-8478 fax



 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of
 Jane Bamberger
 Sent: Wednesday, January 23, 2002 3:51 PM
 To: [EMAIL PROTECTED]
 Subject: Re: TSM and capacity planning


 Hi Lindsay,

 Any chance you would share makegraph, mknode, and a sample of the
 conf file?
 This looks like something our hospital could use!

 Jane

 %%
 Jane Bamberger
 Bassett Healthcare
 TSM 4.2.1.9
 AIX 4.3.3

 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of
 Mr. Lindsay Morris
 Sent: Wednesday, January 23, 2002 2:32 PM
 To: [EMAIL PROTECTED]
 Subject: Re: TSM and capacity planning


 Justin, take a look at this node-by-node data table:
 http://www.servergraph.com/gallery/adsm.Node_Data.Sortable_Node_Data.html

 The point is this:
 query contents will indeed be a very expensive query to run.
 query occupancy can break out TSM storage used by backup,
 archive, and
 HSM storage.
 query filespace can show you local storage used (i.e. on the node's
 local disks, not TSM storage) by the node.

 The two sets of data are synergistic: for example, if node X is using 5 GB
 locally (from q files), but has somehow managed to amass 100GB of
 TSM storage (from q occ), then it has a hog_factor of 20 - it's using 20
 times more space on TSM than it owns locally!

 You should do this calculation on all your nodes, and sort them by
 hog_factor.  There always seem to be a couple that are exorbitant.

 For such nodes, you'd immediately want to see if they are archiving
 everything weekly, or if they have very aggressive backup policies.
 Separating backup storage from archive storage (with q occ f=d) shows you
 instantly which is which (though the table above, unfortunately, is not
 configured to show that).

 You may want to take advantage of this when you start writing
 Perl scripts.
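The hog_factor arithmetic above can be sketched in shell; the two input files here are hypothetical tab-delimited exports (node, GB) standing in for q files and q occ output:

```shell
# Hypothetical exports: local GB owned per node (from "q files")
# and TSM GB stored per node (from "q occ"), both sorted by node.
printf 'nodeX\t5\nnodeY\t50\n'   > local.txt
printf 'nodeX\t100\nnodeY\t60\n' > occ.txt

TAB=$(printf '\t')
# Join on node name, then print TSM-GB / local-GB, worst hogs first.
join -t "$TAB" local.txt occ.txt |
  awk -F'\t' '{ printf "%s\thog_factor=%.1f\n", $1, $3 / $2 }' |
  sort -t= -k2 -rn
```

With the sample data, nodeX shows a hog_factor of 20.0 (100 GB stored vs. 5 GB owned) and sorts to the top.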

 Cheers, and good luck.

 
 Mr. Lindsay Morris
 CEO
 Applied System Design
 www.servergraph.com
 859-253-8000 ofc
 425-988-8478 fax



  -Original Message-
  From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of
  Justin Derrick
  Sent: Wednesday, January 23, 2002 1:36 PM
  To: [EMAIL PROTECTED]
  Subject: Re: TSM and capacity planning
 
 
  My approach is very similar, but probably a little more database
  intensive...
 
   dsmadmc -id=uid -password=pass -outfile=tsmrpt -tabdelimited select
   filespace_name,sum(file_size) as "File Size" from contents group by
   filespace_name
 
  The resulting file should be able to be parsed easily and imported into
  Excel for down-to-the-byte accuracy on how large each of your filespaces
  are.  I'm going to look at this a little more closely when I
 get home next
  week, and maybe write a little Perl script to help organize it into
  something meaningful.
 
  -JD.
 
  Hehe... When using SQL BackTrack, it isn't real comforting to
 look at all
  those filespaces (query filespace) with big goose eggs next to
 them.  One
  way to do it would be to not view the filespaces to calculate
 how much is
  being backed up on a node but to use a select statement:
  
  select stgpool_name, sum(physical_mb) from occupancy where node_name
  ='NODE' group by stgpool_name
  
   Querying the file spaces will give you a capacity based on an
   estimate, and it will take a while to compute the percentage
   utilized anyway.
  
  George Lesho
  Storage/System Admin
  AFC Enterprises
  
  
  
  
  
  
  MacMurray, Andrea (CC-ETS Ent Storage Svcs)
  [EMAIL PROTECTED]@VM.MARIST.EDU on
 01/22/2002 11:40:46
  AM
  
  Please respond to ADSM: Dist Stor Manager [EMAIL PROTECTED]
  
  Sent by:  ADSM: Dist Stor Manager [EMAIL PROTECTED]
  
  
  To:   [EMAIL PROTECTED]
  cc:(bcc: George Lesho/Partners/AFC)
  Fax to:
  Subject:  TSM and capacity planning
  
  
  
  Hi everybody,
   I have been tasked with capacity planning for TSM. We are running on AIX
   4.3.3, TSM version 4.2.1.9. Most of the information is in the log; with
   some scripting you can get all this data into a database. The problem I
   have is SQL BackTrack. All those database instances back up to TSM
   without leaving any stats. I do know where the BackTrack logs are, but
   they are not very pretty. So my question is whether there is anybody out
   there who has had to do something like this before.
  Thanks in advance
  
  
  Andrea Mac Murray
  Sen. Systems Administrator
  ConAgra Foods

Re: Handling spikes in storage transfer

2002-01-14 Thread Mr. Lindsay Morris

Get the client's dsmsched.log file (best way);
or query the contents table (slow!).

Is there some reason why you can't ftp over the client's dsmsched.log, so
you can look at it?


Mr. Lindsay Morris
CEO
Applied System Design
www.servergraph.com
859-253-8000 ofc
425-988-8478 fax



 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of
 Zoltan Forray/AC/VCU
 Sent: Monday, January 14, 2002 2:56 PM
 To: [EMAIL PROTECTED]
 Subject: Re: Handling spikes in storage transfer


 Thanks, but, I am looking for file-level information.

 I have the bytes transferred per node info from the SMF ACCOUNTING
 records generated by the server. I can tell HOW MANY BYTES WERE TRANSFERRED
 for this node, back to the day it first started talking to TSM. I need to
 know which files, and how big they are.
 --
 --
 Zoltan Forray
 Virginia Commonwealth University - University Computing Center
 e-mail: [EMAIL PROTECTED]  -  voice: 804-828-4807




 Denis L'Huillier [EMAIL PROTECTED]
 Sent by: ADSM: Dist Stor Manager [EMAIL PROTECTED]
 01/14/2002 02:33 PM
 Please respond to ADSM: Dist Stor Manager


 To: [EMAIL PROTECTED]
 cc:
 Subject:Re: Handling spikes in storage transfer


 Try,
 select  sum(bytes) from summary where entity='SuspectedAbuser' and
 activity
 ='BACKUP' and start_time between '2002-01-13 00:00' and '2002-01-14 00:00'

 This will give you in bytes the amount of data  a particular node sent to
 the server in a 24 hour period.
 Adjust the start_time and entity as required.

 You can also do a
 select entity, sum(bytes) blah blah blah blah group by entity
 This will give you the bytes for all nodes.
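What the per-entity version computes is an ordinary group-by sum; a sketch with awk over a hypothetical tab-delimited extract (entity, bytes) of the summary table:

```shell
# The real query runs inside dsmadmc, roughly (credentials hypothetical):
#   dsmadmc -id=uid -password=pass -tabdelimited "select entity, sum(bytes)
#     from summary where activity='BACKUP' and start_time between
#     '2002-01-13 00:00' and '2002-01-14 00:00' group by entity"
# Sample extract: entity <TAB> bytes, one row per session.
printf 'NODEA\t100\nNODEA\t250\nNODEB\t400\n' > summary.txt

# Sum bytes per entity, like "group by entity" would.
awk -F'\t' '{ sum[$1] += $2 } END { for (e in sum) print e "\t" sum[e] }' \
  summary.txt | sort
```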

 Hope this helps.

 Regards,

 Denis L. L'Huiller
 973-360-7739
 [EMAIL PROTECTED]
 Enterprise Storage Forms -
 http://admpwb01/misc/misc/storage_forms_main.html



 Zoltan Forray/AC/VCU zforray@VCU.EDU
 Sent by: ADSM: Dist Stor Manager [EMAIL PROTECTED]
 01/14/2002 11:02 AM
 Please respond to ADSM: Dist Stor Manager

 To: [EMAIL PROTECTED]
 cc:
 Subject: Re: Handling spikes in storage transfer
 I already tried that. The information it gives isn't detailed enough. It
 just tells me about the filespaces.

 I need to know specifics, such as the names/sizes of the files/objects in
 the file spaces.

 Anybody have any sql to do such ?

 Thanks, anyway !
 --
 --

 Zoltan Forray
 Virginia Commonwealth University - University Computing Center
 e-mail: [EMAIL PROTECTED]  -  voice: 804-828-4807




 Cook, Dwight E (SAIC) [EMAIL PROTECTED]
 Sent by: ADSM: Dist Stor Manager [EMAIL PROTECTED]
 01/14/2002 10:29 AM
 Please respond to ADSM: Dist Stor Manager


 To: [EMAIL PROTECTED]
 cc:
 Subject:Re: Handling spikes in storage transfer


 Do a q occ nodename and look for what file systems are out on your
 diskpool in great quantity.
 That is, if you send all data first to a diskpool and then bleed it off to
 tape (daily).
 That will give you an idea of what file systems are sending the most data,
 currently.
 Then you may perform something like a show version nodename
 filespacename to see each specific item.
 You might note the filecount in the q occ listing to see how
 much will be displayed by the show version command.
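A sketch of the first step, filtering a hypothetical tab-delimited q occ extract (filespace, storage pool, files, MB) down to the biggest disk-pool consumers; the pool name and column layout are assumptions:

```shell
# Hypothetical "q occ nodename" extract:
# filespace <TAB> stgpool <TAB> filecount <TAB> MB
printf '/home\tDISKPOOL\t1200\t900\n/db\tDISKPOOL\t40\t36000\n/logs\tTAPEPOOL\t40\t500\n' > nodeocc.txt

# Keep only the disk-pool rows and list the largest filespaces first;
# those are the candidates for a closer "show version" look.
awk -F'\t' '$2 == "DISKPOOL" { print $4 "\t" $1 }' nodeocc.txt |
  sort -rn | head -3
```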

 hope this helps...
 later,
 Dwight

 -Original Message-
 From: Zoltan Forray/AC/VCU [mailto:[EMAIL PROTECTED]]
 Sent: Monday, January 14, 2002 8:47 AM
 To: [EMAIL PROTECTED]
 Subject: Handling spikes in storage transfer


 I have an SGI client node that, while it normally sends <= 1.5GB of data, is
 now sending 36GB+.

 Without accessing the client itself, how can I find out what is causing
 this increase in TSM traffic ?

 I have contacted the client owner, but their response is taking too long
 and this spike is wreaking havoc on TSM.

 --
 --

 
 Zoltan Forray
 Virginia Commonwealth University - University Computing Center
 e-mail: [EMAIL PROTECTED]  -  voice: 804-828-4807



