EXPORT-IMPORT clients from OS390 Server to Unix Server?

2001-07-27 Thread Glass, Peter

Hi All!
Is it possible to EXPORT client data from an OS390 Server to a Unix Server?
The Tivoli docs all seem to say yes -- provided that the target Server is
running the same or newer Server level code.
However, a hardware vendor pointed out to me that this may not be possible:
If the OS390 Server writes the EXPORT data to MVS tape devices in either
EBCDIC or ASCII format, the target Server may not be able to read the tapes
for the IMPORT. Because the target Server is configured with SCSI-attached
devices, the data on the tapes must also be in SCSI format, otherwise the
target Server will not be able to read them.
Any truth to this theory?
Thanks, in advance.

Peter Glass
Distributed Storage Management (DSM)
Wells Fargo Services Company
612-667-0086 / 866-249-8568
[EMAIL PROTECTED]



Getting accounting log out of SMF

2001-07-27 Thread Kai Hintze

Please help a hapless unix administrator thrust into managing TSM on a
mainframe. I've been running ADSM on AIX for years, but after a merger I am
suddenly trying to work outside my current knowledge base. I'm trying to
view the accounting log to get some client histories. My mainframe contact
says they are in SMF and we don't have a report for it.

1) What is SMF? Why do I need a report?

2) Does anyone have a report that would let me dump the accounting logs to a
text file where I can analyze them?

Thanks!

Kai.

"If you find a path with no obstacles, it probably doesn't lead anywhere."
-- Franklin Clark



Re: Tape volume list ?

2001-07-27 Thread Alex Paschal

This is actually a good way to do things.  Your volume use stats in TSM are
discarded when a volume is set back to scratch, but if it's defined to a
stgpool, you can track its number of usages, number of errors, etc., over
time.

Alex

-Original Message-
From: David Longo [mailto:[EMAIL PROTECTED]]
Sent: Thursday, July 26, 2001 10:29 AM
To: [EMAIL PROTECTED]
Subject: Re: Tape volume list ?


I agree with the other comments made so far on this: you can't keep
track of ALL volumes.  There is one thing you can do to help some,
though.  Instead of having just a batch of SCRATCH volumes, you
can assign empty volumes to a specific STGPOOL with the DEF VOL
command.  I haven't used this myself, but I believe they won't go back
to scratch when empty, and you may be able to query them with Q commands.

This would only work if you can reasonably know how many volumes are
needed for each of your SEQ STGPOOLS.  You would still need some
as SCRATCH for DB BACKUPS; I don't think those can be assigned.

Just another thought.

David Longo

"WorldSecure " made the following
 annotations on 07/27/01 15:36:57
--

[INFO] -- Content Manager:
The information contained in this communication is confidential and intended solely 
for the use of the individual to whom it is addressed and others authorized to receive 
it.  If you are not the intended recipient, any disclosure, copying, distribution or 
taking of any action in reliance on the contents of this information is prohibited. If 
you have received this communication in error, please immediately notify the sender by 
phone if possible or via email message.

==



Re: Residency Announcement: ST-1201 Implementing IBM LTO Tape in Linux and Windows Environments

2001-07-27 Thread Charlotte Brooks


Looks like people on this list are starting to use LTO tape drives - here's
a residency opportunity to further your experience and work with TSM plus
ARC-Serve in Intel environments (Windows and Linux).

Regards, Charlotte
Project Leader, Tape and Storage Management solutions, ITSO Almaden
Email: [EMAIL PROTECTED] Ph: (408) 927-3641 Fax: (408) 927-3616 T/L:
8-457-3641
http://ibm.com/redbooks


From: Redbooks at IBM on 07/20/2001 03:48 AM EDT

To:   ITSO Mailing List
cc:(bcc: Charlotte Brooks/Almaden/IBM)
Subject:  Residency Announcement:  ST-1201 Implementing IBM LTO Tape in
  Linux and Windows Environments

Residency Announcement:  ST-1201 Implementing IBM LTO Tape in Linux and
Windows Environments

Benefits to Resident:  The residents will gain hands-on experience in the
ITSO's SAN Lab with our Ultrium 3583 Tape Library in Linux and Windows
environments. They will implement and document use of these libraries with
standard operating system utilities as well as popular backup software such
as Tivoli Storage Manager and CA-ARCServe.

This San Jose residency begins 15 Oct 2001, ends 16 Nov 2001 (5 weeks), and
requires 4 residents.  Please submit nominations by 31 Aug 2001.



Description:  The IBM Ultrium (LTO) tape drives may be attached to Linux,
Windows NT, and Windows 2000 servers. This project will deliver an IBM
Redbook giving details of how to do this and providing practical examples.
This redbook will describe how the attachment support is provided and what
connection methods are supported, and will illustrate the text with
practical examples. It will include coverage of setting up the LTO tape
drives and libraries with popular backup applications, including Tivoli
Storage Manager and CA-ARCServe. This is a companion residency to
ST-1200-R, Implementing LTO in UNIX environments (AIX, Solaris, HP-UX). In
addition to the redbook, the output of this and the ST-1200 residency will
be used to create an IBM Learning Services course on Implementing LTO
drives and libraries in Windows, Linux, and other UNIX environments.
Residents will produce material for this course in addition to the redbook.

Objectives:  Topics to be covered include:
1. Attachment methods (SCSI LVD, HVD; SAN and FC-AL) and when to choose
them
2. Software fundamentals (device drivers, addressing, operating system
tools and utilities)
3. Specific examples for Linux
4. Specific examples for Windows NT and 2000
5. Tivoli Storage Manager examples
6. ARCServe examples




This residency is suitable for Customers, Business Partners, and IBMers.

A basic requirement for all residents is the ability to read and clearly
express concepts and procedures in common English.

Resident Prerequisites:  This project needs residents with a mixture of
skills in systems administration (for Windows and Linux), storage
implementation (especially with LTO tape) and popular backup/restore
software. All residents need to have the ability to write clearly in
English on technical subjects - please send some examples of your technical
documentation to the project leader.

Skill levels below are from 5 to 1, where 5 = expert skill and 1 = no
skill.

You personally do not have to have all of these skills to be considered a
good candidate for this residency.  The project leader will match your
skills with those of the other nominations to put together the best team
possible.

Skills Needed:

- Written and spoken English: 4
- Tape attachment for Linux (RedHat and others): 4
- Tape attachment for Windows (NT and 2000): 4
- Tivoli Storage Manager: 4
- CA-ARCServe: 4
- LTO tape implementation: 4




Documentation Skills Needed:

- Word processing packages (Word or WordPro): 4
- Graphics packages (Freelance or PowerPoint): 4
- FrameMaker: 2




These documentation skills are desirable, but not critical, for acceptance
into this residency.  The only definite requirement is a readiness to learn
and to handle your own data entry needs.



Nominees must have the appro

Re: List of commands?

2001-07-27 Thread Alex Paschal

Or from your WEB command line, type HELP.

Alex

-Original Message-
From: Matthew Large [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, July 25, 2001 2:01 AM
To: [EMAIL PROTECTED]
Subject: Re: List of commands?


Hi Francisco,

You could start with the TSM Admin Reference for your level (3.7, 4.1/2).
This will provide you with all supported commands for the TSM Server. For
the client, just type in 'help' at the tsm> command line, and you get..
help! :)

OR, there is this link http://people.bu.edu/rbs/ADSM.QuickFacts with just
about every command you could ever need.


Hope this helps!

-Original Message-
From: Francisco Reyes [mailto:[EMAIL PROTECTED]]
Sent: 25 July 2001 06:32
To: [EMAIL PROTECTED]
Subject: List of commands?


Is there a document/URL with a list of all commands?
Every time I use the web GUI I enable the command line and run some commands
I am familiar with. I was wondering where I could get a list so I can do
more commands that way, while still being able to use the web GUI for
whichever functions it is better suited to.


http://www.phoenixitgroup.com
**Internet Email Confidentiality Footer***

Phoenix IT Group Limited is registered in England and Wales under company
number 3476115.  Registered Office: Technology House, Hunsbury Hill Avenue,
Northampton, NN4 8QS

Opinions, conclusions and other information in this message that do not
relate to the official business of our firm shall be understood as neither
given nor endorsed by it.

No contracts may be concluded on behalf of our firm by means of email
communications.

Confidentiality: Confidential information may be contained in this message.
If you are not the recipient indicated (or responsible for delivery of the
message to such person), you may not take any action based on it, nor should
you copy or show this to anyone; please reply to this email and highlight
the error to the sender, then delete the message from your system.

Monitoring of Messages: Please note that we reserve the right to monitor and
intercept emails sent and received on our network.
Warning:  Internet email is not 100% secure. We ask you to understand and
observe this lack of security when emailing us. We do not accept
responsibility for changes made to this message after it was sent

Viruses: Although we have taken steps to ensure that this email and any
attachments are free from any virus, we advise that in keeping with good
computing practice the recipient should ensure they are actually virus free.




Re: Considerable difference in copy pool size and onsite ( 357pool ) si ze

2001-07-27 Thread Alex Paschal

Joe,

The q stg shows the estimated capacity of the stgpools based on estimated
maximum capacity of the media defined in the devclass and the number of
possible scratch volumes.  There's more exact data on that in the Admin
Guide.  To find the real amount of data in your stgpools, you should
probably use the q occupancy command.

Alex

-Original Message-
From: Joe Cascanette [mailto:[EMAIL PROTECTED]]
Sent: Monday, July 23, 2001 11:18 AM
To: [EMAIL PROTECTED]
Subject: Considerable difference in copy pool size and onsite (357pool)
si ze


I have been trying to figure out why there is a great difference in size
between my offsite and onsite tape copies. Right now my onsite (357pool,
main domain) has approx 2,350 GB of stored information. My copy pool tapes
(which are reclaimed daily, set to recl=55%) are sitting at
4,200 GB -- almost double! All of this information is obtained by
entering the command "q stg".


Is there a way to expire all the copy pool tapes, bring them back from
offsite (as scratches), and create a new set of offsite tapes within a
reasonable amount of time? Or is there a process that I am not performing
that would fix this problem?

Current config:

NT 4.0 sp6a
TSM v4.1.3
3575 IBM Tape Library - 2 drives - 120 max tapes.

Thanks

Joe Cascanette




non-IBM 8mm Tape Drive

2001-07-27 Thread Pearson, Dave

Hello

Does anyone use non-IBM 8mm tape drives?  If you do, what make and
model, and how is the performance?

Thanks

Dave



Re: Hand-held 3590 barcode reader?

2001-07-27 Thread Allen Barth

We're using some from Symbol, and they hook into an interface which then
uploads data to a CA product on a PC.

Al Barth
Zurich Scudder Investments



David Longo
Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>
07/27/01 02:15 PM
Please respond to "ADSM: Dist Stor Manager"

To: [EMAIL PROTECTED]
cc:
Subject: Re: Hand-held 3590 barcode reader?

Richard,

There was some discussion about this here a few months ago.
I seem to remember one or more people had one.

David Longo

>>> [EMAIL PROTECTED] 07/27/01 03:08PM >>>
Anyone have a recommendation for hand-held 3590 barcode reader?
Our Operations group is handling a lot of tapes and I thought
it would facilitate things for them to be able to simply scan
the barcode on the 3590 tapes rather than manually record numbers.
(I'd also need either accompanying interface software or specs
for writing some.)

  thanks,  Richard Sims, Boston University OIT



"MMS " made the following
 annotations on 07/27/01 15:20:28
--

This message is for the named person's use only.  It may contain
confidential, proprietary, or legally privileged information.  No
confidentiality or privilege is waived or lost by any mistransmission.  If
you receive this message in error, please immediately delete it and all
copies of it from your system, destroy any hard copies of it, and notify
the sender.  You must not, directly or indirectly, use, disclose,
distribute, print, or copy any part of this message if you are not the
intended recipient.  Health First reserves the right to monitor all e-mail
communications through its networks.  Any views or opinions expressed in
this message are solely those of the individual sender, except (1) where
the message states such views or opinions are on behalf of a particular
entity;  and (2) the sender is authorized by the entity to give such views
or opinions.

==



Re: Hand-held 3590 barcode reader?

2001-07-27 Thread David Longo

Richard,

There was some discussion about this here a few months ago.
I seem to remember one or more people had one.

David Longo

>>> [EMAIL PROTECTED] 07/27/01 03:08PM >>>
Anyone have a recommendation for hand-held 3590 barcode reader?
Our Operations group is handling a lot of tapes and I thought
it would facilitate things for them to be able to simply scan
the barcode on the 3590 tapes rather than manually record numbers.
(I'd also need either accompanying interface software or specs
for writing some.)

  thanks,  Richard Sims, Boston University OIT






Hand-held 3590 barcode reader?

2001-07-27 Thread Richard Sims

Anyone have a recommendation for hand-held 3590 barcode reader?
Our Operations group is handling a lot of tapes and I thought
it would facilitate things for them to be able to simply scan
the barcode on the 3590 tapes rather than manually record numbers.
(I'd also need either accompanying interface software or specs
for writing some.)

  thanks,  Richard Sims, Boston University OIT



Re: Admin changes in last 2 years?

2001-07-27 Thread Andrew Raibeck

I don't know about Mac, but for NT and 2000, you can use the FINDSTR
command, which includes some grep-like behavior. For example:

   grep -i fail sched.log

could be written as:

   findstr /i /c:fail sched.log

A weaker version of FINDSTR, FIND, is also available. The same command
would be written like this:

   find /i "fail" sched.log

Just as with grep, you can pipe the output to another program.

I work mainly with NT and 2000, so I usually prefer FINDSTR. I don't
believe that FINDSTR is implemented in Win9X, but I'm pretty sure that
FIND is implemented there.

Also, check out the event logging facilities on the server. You can read
about this in the Administrator's Guide, in the chapter on monitoring the
TSM server.
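For clients without grep or FINDSTR (Win9x, or a pre-OS X Mac with a Python or Perl port installed), a small portable script can do the same case-insensitive scan. This is only a sketch; the default log file name and the "fail" pattern are assumptions, not anything TSM mandates:

```python
import os
import sys

def scan_log(path, pattern="fail"):
    """Return the lines of a schedule log that contain `pattern`,
    case-insensitively -- the equivalent of `grep -i fail sched.log`
    or `findstr /i /c:fail sched.log`."""
    hits = []
    with open(path, errors="replace") as f:
        for line in f:
            if pattern.lower() in line.lower():
                hits.append(line.rstrip("\n"))
    return hits

if __name__ == "__main__":
    # Default log name is an assumption; pass the real path as an argument.
    log = sys.argv[1] if len(sys.argv) > 1 else "dsmsched.log"
    if os.path.exists(log):
        for hit in scan_log(log):
            print(hit)
```

The output could then be piped or mailed, much like Dwight's postschedcmd one-liner.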

Regards,

Andy

Andy Raibeck
IBM Tivoli Systems
Tivoli Storage Manager Client Development
e-mail: [EMAIL PROTECTED]
"The only dumb question is the one that goes unasked."
"The command line is your friend"





Francisco Reyes <[EMAIL PROTECTED]>
Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>
07/27/2001 09:39
Please respond to "ADSM: Dist Stor Manager"


To: [EMAIL PROTECTED]
cc:
Subject:Re: Admin changes in last 2 years?




On Fri, 27 Jul 2001, Cook, Dwight E wrote:

> On the failed client schedules...
> You could set a "postschedcmd" to run a script that does something like a
> grep -i fail sched.log | mail -s $(hostname)_bkup_failures
> [EMAIL PROTECTED]

How about if the clients are not Unix? In particular NT and Mac.
Except for purchasing the MKS toolkit $$$, I have not found a grep for NT,
and I have not even tried to look for one on the Mac, especially any
pre-OS X Mac. In my opinion this really should be part of the server logs.



TDP not using diskpool

2001-07-27 Thread Gisbert Schneider

Hello,

when we do a backup via the TDP, the backup does not go first to the
disk storage pool but directly to tape. We have tried it with and without
caching files on the disk pool. No success. Does someone have an idea what
the problem might be?

Another problem is that, when we set caching to NO, the value for PctUtil
never becomes zero. Can we force it, and how?

Thanks,
Gisbert



__


   Since e-mails can easily be created or manipulated under someone else's
   name, we must, for your protection and ours, exclude any legally binding
   effect of the above statements. The rules applying to Stadtsparkasse
   Koeln concerning the binding nature of legal declarations with obligating
   content remain unaffected.



Re: TDP not using diskpool - IT IS BACKUP

2001-07-27 Thread Zlatko Krastev/ACIT

THIS IS NOT TRUE. Look at the report from TDP for Informix backup:

tsm: PUMBA.ACIT.BG>q cont /dev/rtsm_bk00

Node Name          Type  Filespace     Client's Name for File
                         Name
-----------------  ----  ------------  ----------------------
PUMBA-API.ACIT.BG  Bkup  /pumba_local  /pumba_local/datadbs/0

tsm: PUMBA.ACIT.BG>q cont /dev/rtsm_bk01

Node Name          Type  Filespace     Client's Name for File
                         Name
-----------------  ----  ------------  ----------------------
PUMBA-API.ACIT.BG  Bkup  /pumba_local  /pumba_local/rootdbs/0
PUMBA-API.ACIT.BG  Bkup  /pumba_local  /pumba_local/datadbs/0
PUMBA-API.ACIT.BG  Bkup  /pumba_local  /pumba_local/0/20






remember that TDP is not going to your server as a "backup"; it is going as
an "archive". Look at where the archives are pointed to in the management
class you've specified in your init.utl file...

Dwight



Re: Magstar 3570

2001-07-27 Thread Zlatko Krastev/ACIT

We run a nightly archive of 70 GB that is taking just over 8 hours
uncompressed.   In our Magstar 3570 we have two different tape drives.   My
question follows:   Is it possible to be running two different archives at
the same time to cut down the processing time?   If it is possible how
would
you go at it?


If you are able to allocate 50-70 GB of disk space, I would recommend you
create a primary storage pool of type DISK and set its next storage
pool to the tape pool defined on the 3570. Then set its migration thresholds
fairly low (for example HighMig=30 LowMig=0).
This should speed up your backup -- you will probably finish in well under
8 hours -- and migration processes will then start for both drives, moving
data from the disk storage pool to tape.
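In TSM administrative commands, that setup might look roughly like this. A sketch only: the pool, device class, and file names are invented, the disk volume file is assumed to be pre-formatted (e.g. with dsmfmt), and "standard standard standard" are just the default policy names:

```
define stgpool diskpool disk nextstgpool=3570pool highmig=30 lowmig=0
define volume diskpool /tsm/diskvol01.dsm
update copygroup standard standard standard destination=diskpool
activate policyset standard standard
```

The copy group update is what actually redirects the nightly archive/backup data to the new disk pool; migration then drains it to the 3570 pool.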

Zlatko Krastev
IT Consultant



Re: Admin changes in last 2 years?

2001-07-27 Thread John Marquart

On Windows you can compile grep from the GNU distro.  There is Cygwin
(check, I think: http://sources.redhat.com/cygwin/download.html), which
enables you to use all of your favorite Unix tools on Windows.  I believe
there is also a full POSIX subsystem available for Windows, which will
give you access to all those tools that Windows lacks.

-j

On Fri, 27 Jul 2001, Francisco Reyes wrote:

> On Fri, 27 Jul 2001, Cook, Dwight E wrote:
>
> > On the failed client schedules...
> > You could set a "postschedcmd" to run a script that does something like a
> > grep -i fail sched.log | mail -s $(hostname)_bkup_failures
> > [EMAIL PROTECTED]
>
> How about if the clients are not Unix? In particular NT and Mac.
> Except for purchasing the MKS toolkit $$$, I have not found a grep for NT
> and I have not even tried to look for a grep on the Mac, specially any pre
> OS X Mac. In my opinion this really should be part of the server logs.
>


John "Jamie" Marquart   | This message posted 100% MS free.
Digital Library SysAdmin|  Work: 812-856-5174   Pager: 812-334-6018
Indiana University Libraries|  ICQ: 1131494 D'net Team:  6265



Re: Backup Scheduling

2001-07-27 Thread Zlatko Krastev/ACIT

The backup set IS exactly a snapshot. It bundles copies of files and
does not track the separate files (the DB tracks those as parts of stgpools),
but tracks the entire bundle/backupset as a single object.
A backupset written to media (tapes, CDs, etc.) can be used for "bare
metal/LAN-free" restore. So tracking the backupset tells you when
you created that image. If it is recent enough, you can restore the system
from the media containing the backupset and then just restore newer files.
So it also functions as something similar to an archive.
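For reference, a backupset is created and tracked with server commands along these lines (the node name, set prefix, device class, and retention value here are invented examples):

```
generate backupset pumba weekly_set devclass=3570class retention=365
query backupset pumba
```

The GENERATE command writes the node's active files to the named device class as one object; QUERY BACKUPSET is the "parameter for tracking" the server keeps -- it records that the set exists and when it expires, not the individual files inside it.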





On Fri, 27 Jul 2001, Len Boyle wrote:

> In article <[EMAIL PROTECTED]>, Steve Harris
> <[EMAIL PROTECTED]> says:
> 2) If you need to keep a snap shot of data for a long time  without the
>need to keep the data in the tsm data look at the backupset
>function to snap a copy of the active fileset on a tape set that
>can be removed from the tsm server.


I just read the description of the backupset and I don't see how it can
be used as a snapshot. I didn't really understand from reading it what
it is that the backupset is or does. In particular I don't see how it would
do what Len wrote above: "keep a snap shot of data for a long time
without the need to keep the data in the tsm data". Does that mean that
the backupset is not tracked by the DB? It seems TSM remembers or tracks
the existence of the backupset, since there is a parameter for that, but
what does that keep track of?

Thanks.



Re: TDP not using diskpool

2001-07-27 Thread Zlatko Krastev/ACIT

Check the destination storage pool in your copy group.
Are you using a different node name for the Backup/Archive client and for TDP?
Which TDP are you having problems with?

I made backups successfully on disk with both TDP for Domino and TDP for
Informix.
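The admin queries below show where the active policy will send backup and archive data; the "standard" domain and class names are only the defaults, so substitute your own:

```
query copygroup standard active standard type=backup format=detailed
query copygroup standard active standard type=archive format=detailed
```

The "Copy Destination" field in the detailed output names the storage pool; if it points at a tape pool rather than the disk pool, that explains backups bypassing disk.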







Gisbert Schneider <[EMAIL PROTECTED]> on 27.07.2001 19:11:28
Please respond to "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>
To: [EMAIL PROTECTED]
cc:

Subject:TDP not using diskpool

Hello,

when we do a backup via the TDP, the backup does not go first to the
disk storage pool but directly to tape. We have tried it with and without
caching files on the disk pool. No success. Does someone have an idea what
the problem might be?

Another problem is that, when we set caching to NO, the value for PctUtil
never becomes zero. Can we force it, and how?

Thanks,
Gisbert



__


   Da E-Mails leicht unter fremdem Namen erstellt oder manipuliert werden
   koennen, muessen wir zu Ihrem und unserem Schutz die rechtliche
   Verbindlichkeit der vorstehenden Erklaerungen ausschliessen. Die fuer
   die Stadtsparkasse Koeln geltenden Regeln ueber die Verbindlichkeit von
   rechtsgeschaeftlichen Erklaerungen mit verpflichtendem Inhalt bleiben
   unberuehrt.



Re: Admin changes in last 2 years? - Image backup

2001-07-27 Thread Zlatko Krastev/ACIT

Image backup is completely different from selective backup. It is an offline
backup of a raw device, e.g. an RDBMS table space.




Any improvements with point-in-time backup/restores? I see an
"imagebackup", but from reading the description it doesn't make much sense
to me. It reads pretty much like the same thing as a selective backup.


Re: Teaching others TSM

2001-07-27 Thread Zlatko Krastev/ACIT

IMHO training can provide you with two things: a smooth start, and someone
more experienced than you (to ask questions of and to give you a broad
overview). On the other hand, nothing can replace practical experience
("Experience is proportional to the number of broken systems" :-)

So the answer is: do both.
And this is relevant not only to TSM.



Re: Admin changes in last 2 years?

2001-07-27 Thread Cook, Dwight E

Well, but don't you want to exclude those open/active/changing files from
nightly incremental processing ?
(so they won't show up as "failed" in the sched.log)
Like with oracle blah.dbf files... we want all those excluded.
DBAs process those with either TDP or by home grown scripts that put table
spaces in backup mode, then archive the associated .dbf file(s).
Also, you can adjust your "changingretries" option; generally, if it is just a
user doing something, you can catch things in a retry or two.
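As a client option file fragment, Dwight's two suggestions might look like this (the Oracle data file path is a made-up example; "/.../" is the TSM wildcard for any intervening directories):

```
* Keep Oracle data files out of the nightly incremental; the DBAs
* back them up via TDP or their own scripts instead.
exclude /u01/oradata/.../*.dbf

* Retry files that change during backup a few times before giving up.
changingretries 4
```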

Dwight

-Original Message-
From: PINNI, BALANAND (SBCSI) [mailto:[EMAIL PROTECTED]]
Sent: Friday, July 27, 2001 11:24 AM
To: [EMAIL PROTECTED]
Subject: Re: Admin changes in last 2 years?
Importance: Low


Hi Cook
I have a question. If the script kicks off and checks dsmsched.log for "fail"
entries, then files that are open or being updated while the backup runs fail
to back up, and the schedule shows as failed even though it is really a
successful schedule.

-Original Message-
From: Cook, Dwight E [mailto:[EMAIL PROTECTED]]
Sent: Friday, July 27, 2001 11:11 AM
To: [EMAIL PROTECTED]
Subject: Re: Admin changes in last 2 years?


On the failed client schedules...
You could set a "postschedcmd" to run a script that does something like a
grep -i fail sched.log | mail -s $(hostname)_bkup_failures
[EMAIL PROTECTED]
Here we have about 500 clients, mainly big unix servers... we have a fleet
of unix admins and we make it their job to check their backups.  We (the
TSM server admins) just don't have the time.  We provide a single web site
with a page of all "exceptions" to the previous day's backups, but beyond
that "it is not our problem".
Dwight

-Original Message-
From: Francisco Reyes [mailto:[EMAIL PROTECTED]]
Sent: Friday, July 27, 2001 8:56 AM
To: [EMAIL PROTECTED]
Subject: Admin changes in last 2 years?


I was away from ADSM/TSM for about two years.
Over the last two weeks I have been re-organizing a setup I inherited.

I am wondering what has changed in the day-to-day admin functionality
over the two years I was away.

I still don't see a way to see why a scheduled client job failed. Even
though I only have a handful of clients, I don't see why this
functionality is still missing. After all, there are companies out there
backing up hundreds of nodes. I don't think it is practical for those
companies to go to each failed client.

Any improvements with point-in-time backup/restores? I see an
"imagebackup", but from reading the description it doesn't make much sense
to me. It reads pretty much like the same thing as a selective backup.



Re: Admin changes in last 2 years?

2001-07-27 Thread Richard Sims

>How about if the clients are not Unix? In particular NT and Mac.
>Except for purchasing the MKS toolkit $$$, I have not found a grep for NT
>and I have not even tried to look for a grep on the Mac...

Don't overlook the capabilities provided by implementing Perl on
your various platforms.  That gives you commonality and a lot of
flexibility.   http://www.cpan.org/ports/index.html

  Richard Sims, BU



Re: Admin changes in last 2 years?

2001-07-27 Thread Francisco Reyes

On Fri, 27 Jul 2001, Cook, Dwight E wrote:

> On the failed client schedules...
> You could set a "postschedcmd" to run a script that does something like a
> grep -i fail sched.log | mail -s $(hostname)_bkup_failures
> [EMAIL PROTECTED]

How about if the clients are not Unix? In particular NT and Mac.
Except for purchasing the MKS toolkit $$$, I have not found a grep for NT,
and I have not even tried to look for one on the Mac, especially any
pre-OS X Mac. In my opinion this really should be part of the server logs.



Re: Admin changes in last 2 years?

2001-07-27 Thread PINNI, BALANAND (SBCSI)

Here is what I do: in Microsoft Outlook you can set rules. Use these rules
to route the relevant mail into the appropriate mailbox.


-Original Message-
From: Cook, Dwight E [mailto:[EMAIL PROTECTED]]
Sent: Friday, July 27, 2001 11:11 AM
To: [EMAIL PROTECTED]
Subject: Re: Admin changes in last 2 years?


On the failed client schedules...
You could set a "postschedcmd" to run a script that does something like a
grep -i fail sched.log | mail -s $(hostname)_bkup_failures
[EMAIL PROTECTED]
Here we have about 500 clients, mainly big unix servers... we have a fleet
of unix admins and we make it their job to check their backups.  We (the
TSM server admins) just don't have the time.  We provide a single web site
with a page of all "exceptions" to the previous day's backups, but beyond
that "it is not our problem".

Dwight

-Original Message-
From: Francisco Reyes [mailto:[EMAIL PROTECTED]]
Sent: Friday, July 27, 2001 8:56 AM
To: [EMAIL PROTECTED]
Subject: Admin changes in last 2 years?


I was away from ADSM/TSM for about two years.
Over the last two weeks I have been re-organizing a setup I inherited.

I am wondering what has changed in the day-to-day admin functionality
over the two years I was away.

I still don't see a way to see why a scheduled client job failed. Even
though I only have a handful of clients, I don't see why this
functionality is still missing. After all, there are companies out there
backing up hundreds of nodes. I don't think it is practical for those
companies to go to each failed client.

Any improvements with point-in-time backup/restores? I see an
"imagebackup", but from reading the description it doesn't make much sense
to me. It reads pretty much like the same thing as a selective backup.



Re: Admin changes in last 2 years?

2001-07-27 Thread PINNI, BALANAND (SBCSI)

Hi Cook
I have a question. If the script kicks off and checks dsmsched.log for "fail"
entries, then files that are open or being updated while the backup runs fail
to back up, and the schedule shows as failed even though it is really a
successful schedule.

-Original Message-
From: Cook, Dwight E [mailto:[EMAIL PROTECTED]]
Sent: Friday, July 27, 2001 11:11 AM
To: [EMAIL PROTECTED]
Subject: Re: Admin changes in last 2 years?


On the failed client schedules...
You could set a "postschedcmd" to run a script that does something like a
grep -i fail sched.log | mail -s $(hostname)_bkup_failures
[EMAIL PROTECTED]
Here we have about 500 clients, mainly  big unix servers... we have a fleet
of unix admins and we make it their job to check their  backups.  We (the
TSM server admins) just don't have the time.  We provide a single web site
with a page of all "exceptions" to the previous days backups but beyond that
"it is not our problem"

Dwight

-Original Message-
From: Francisco Reyes [mailto:[EMAIL PROTECTED]]
Sent: Friday, July 27, 2001 8:56 AM
To: [EMAIL PROTECTED]
Subject: Admin changes in last 2 years?


I was away from ADSM/TSM for about two years.
In the last two weeks I have been re-organizing a setup I inherited.

I am wondering what has changed in day-to-day admin functionality during
the two years I was away.

I still don't see a way to see why a scheduled client job failed. Even
though I only have a handful of clients, I don't see why this
functionality is still missing. After all, there are companies out there
backing up hundreds of nodes. I don't think it is practical for those
companies to go to each failed client.

Any improvements with point-in-time backup/restores? I see an
"imagebackup", but the description doesn't make much sense to
me. It reads pretty much like the same thing as a selective backup.



Re: TDP not using diskpool

2001-07-27 Thread Cook, Dwight E

Remember that TDP is not sending data to your server as a "backup"; it is
going as an "archive". Look at where archives are pointed in the management
class you've specified in your init.utl file...

Dwight
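As a hedged illustration only: the parameter names below are from TDP for R/3 profiles, and other TDPs use different .utl keywords, so treat them as placeholders. The point is that the named management class must have an ARCHIVE copy group whose destination is the disk pool, or the data will go straight to tape.

```
BRBACKUPMGTCLASS   DISKFIRST
BRARCHIVEMGTCLASS  DISKFIRST
```

Here DISKFIRST is a made-up management class name; check your product's .utl reference for the exact keywords at your level.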

-Original Message-
From: Gisbert Schneider [mailto:[EMAIL PROTECTED]]
Sent: Friday, July 27, 2001 11:11 AM
To: [EMAIL PROTECTED]
Subject: TDP not using diskpool


Hello,

when we do a backup via the TDP, the backup does not go first to the
disk storage pool, but directly to tape. We have tried it with and without
caching files on the disk pool. No success. Does someone have an idea what
the problem might be?

Another problem is that, when we set caching to NO, the value for PctUtil
never becomes zero. Can we force it, and how?

Thanks,
Gisbert



__


   Because e-mails can easily be created or manipulated under someone
   else's name, we must, for your protection and ours, disclaim the legal
   binding force of the statements above. The rules that apply to the
   Stadtsparkasse Koeln concerning the binding nature of legally relevant
   declarations with obligating content remain unaffected.



Re: Admin changes in last 2 years?

2001-07-27 Thread Cook, Dwight E

On the failed client schedules...
You could set a "postschedcmd" to run a script that does something like a
grep -i fail sched.log | mail -s $(hostname)_bkup_failures
[EMAIL PROTECTED]
Here we have about 500 clients, mainly  big unix servers... we have a fleet
of unix admins and we make it their job to check their  backups.  We (the
TSM server admins) just don't have the time.  We provide a single web site
with a page of all "exceptions" to the previous days backups but beyond that
"it is not our problem"

Dwight

-Original Message-
From: Francisco Reyes [mailto:[EMAIL PROTECTED]]
Sent: Friday, July 27, 2001 8:56 AM
To: [EMAIL PROTECTED]
Subject: Admin changes in last 2 years?


I was away from ADSM/TSM for about two years.
In the last two weeks I have been re-organizing a setup I inherited.

I am wondering what has changed in day-to-day admin functionality during
the two years I was away.

I still don't see a way to see why a scheduled client job failed. Even
though I only have a handful of clients, I don't see why this
functionality is still missing. After all, there are companies out there
backing up hundreds of nodes. I don't think it is practical for those
companies to go to each failed client.

Any improvements with point-in-time backup/restores? I see an
"imagebackup", but the description doesn't make much sense to
me. It reads pretty much like the same thing as a selective backup.



Re: Backup Scheduling

2001-07-27 Thread Prather, Wanda

The backup set is generated by the server, but the restore of a backup set
is done by the client.

If you open the GUI RESTORE window, you will see that the file tree includes
RESTORE of BACKUPSETS now as well as restore local files.

If you start the client command line and say HELP, there is help available
for RESTORE BACKUPSET.

-Original Message-
From: Francisco Reyes [mailto:[EMAIL PROTECTED]]
Sent: Friday, July 27, 2001 10:49 AM
To: [EMAIL PROTECTED]
Subject: Re: Backup Scheduling


On Fri, 27 Jul 2001, Prather, Wanda wrote:

> The backupset has ONE entry in the DB, in volumehistory.  The backups of
> each file are not recorded in the DB, and don't participate in versioning.
> So it's like having a volume dump on the shelf, except that the backupset
is
> made from data that is already in server storage, without participation of
> the client and without sending the data across the network again.

I was looking at "define backupset" and that confused me. The "Generate
Backupset" command description does a decent job of explaining what a
backupset is.

The only thing I am still not clear on is how one restores from a backupset.



Re: Teaching others TSM

2001-07-27 Thread Francisco Reyes

On Fri, 27 Jul 2001, Ford, Phillip wrote:

> In my personal experience, I like to work with something awhile and then go
> to the class.  The class makes more sense and you can get problems that you
> are struggling with worked out with the teacher.

Thanks for sharing.
I am going to start training two people next week and then 1 of them is
going to formal training in September. This way we will have a list of
things to ask to the teacher when he goes to training.



Re: AIX TSM scheduler problem

2001-07-27 Thread Richard Sims

>Richard,
>Not to split hairs here, and dangerously close to being off topic,
>but isn't sh linked to ksh?

Hi, Matt -

Yes, indeed.  In Unix parlance, "sh" is the Default Shell, or Standard Shell - a
rather non-specific reference to whatever actual shell the operating system
developers deign to employ as their standard.  In early AIX, it used to be
Bourne shell.  These days, in AIX and Solaris it is Korn shell.  As of IRIX 6.4,
sh is the Korn shell rather than the Bourne shell (and backward compatibility is
governed by the environment variable _XPG).  Things change under us - and can
change again.  That's why the inittab man page for each Unix talks of running
things under sh, not specifically mentioning ksh, so that they can alter the
underlying mechanism at will.

Aliasing a common module is a means of economizing in programming and software
proliferation by taking advantage of similar logic and processing available in
one place.  Such programs detect the name under which they were invoked and then
follow tributaries as needed in their processing, further influenced by
environment variables.

In current AIX, ksh, psh, sh, and tsh are in one module; Rsh and bsh are in
another.  Be careful to specify bsh in script headers (#!/usr/bin/bsh) when you
really want Bourne shell!

   Richard Sims, BU
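A quick way to verify that linkage on any given box is to compare inode numbers (same filesystem assumed; the sh/ksh paths in the usage comment are the AIX ones under discussion):

```shell
# Sketch: report whether two paths are hard links to the same file by
# comparing their inode numbers, e.g.:
#   same_binary /usr/bin/sh /usr/bin/ksh && echo "sh is ksh"
same_binary() {
    ino1=$(ls -i "$1" | awk '{print $1}')
    ino2=$(ls -i "$2" | awk '{print $1}')
    test "$ino1" = "$ino2"
}
```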



Re: Compression

2001-07-27 Thread David Longo

query actlog   and search for the nodename of the client.  At the
end of the backup there is a summary stanza of how many files,
how much data etc.  Also what percent compression was achieved
for that session.

David Longo

>>> [EMAIL PROTECTED] 07/27/01 10:21AM >>>
If I have compression on in my options file, how can I find out what the
compression ratio is?

Mehdi Amini
LAN/WAN Engineer
ValueOptions
3110 Fairview Park Drive
Falls Church, VA 22042
Phone: 703-208-8754
Fax: 703-205-6879




**
This email and any files transmitted with it are confidential and
intended solely for the use of the individual or entity to whom they
are addressed. If you have received this email in error please notify
the sender by email, delete and destroy this message and its
attachments.


**



"MMS " made the following
 annotations on 07/27/01 10:54:15
--
This message is for the named person's use only.  It may contain confidential, 
proprietary, or legally privileged information.  No confidentiality or privilege is 
waived or lost by any mistransmission.  If you receive this message in error, please 
immediately delete it and all copies of it from your system, destroy any hard copies 
of it, and notify the sender.  You must not, directly or indirectly, use, disclose, 
distribute, print, or copy any part of this message if you are not the intended 
recipient.  Health First reserves the right to monitor all e-mail communications 
through its networks.  Any views or opinions expressed in this message are solely 
those of the individual sender, except (1) where the message states such views or 
opinions are on behalf of a particular entity;  and (2) the sender is authorized by 
the entity to give such views or opinions.

==



Re: Backup Scheduling

2001-07-27 Thread Francisco Reyes

On Fri, 27 Jul 2001, Prather, Wanda wrote:

> The backupset has ONE entry in the DB, in volumehistory.  The backups of
> each file are not recorded in the DB, and don't participate in versioning.
> So it's like having a volume dump on the shelf, except that the backupset is
> made from data that is already in server storage, without participation of
> the client and without sending the data across the network again.

I was looking at "define backupset" and that confused me. The "Generate
Backupset" command description does a decent job of explaining what a
backupset is.

The only thing I am still not clear on is how one restores from a backupset.



Re: Teaching others TSM

2001-07-27 Thread Francisco Reyes

On Fri, 27 Jul 2001, Martin, Jon R. wrote:

> Francisco,
>
> I feel I have learned a lot with the "sink or swim" method and
> subscribing to this mail list has helped also.  I first learned about the
> clients and how they worked, then scheduling, and then moved onto things
> like the DB, log, diskpools and tapepools etc...  My opinion is that if you
> are not familiar with the material you won't get much from formal training.
> I've done ok learning it one layer at a time and reading the Administrator's
> Reference.

Thanks for sharing your experience. I think I will try a similar approach
to what you described: teach people the clients and scheduling
first, and then go into the server concepts.



Re: Compression

2001-07-27 Thread Richard Sims

>If I have compression on in my options file, how can I find out what the
>compression ratio is?

Mehdi - In the backup summary statistics there is the line:
Objects compressed by:

  Richard Sims, BU
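A sketch of scraping that summary line out of a client log, assuming the "Objects compressed by:" wording shown above (the log path is whatever you pass in):

```shell
# Pull the percentage off the last "Objects compressed by:" line in a
# client log (last one wins, covering logs with several sessions).
compression_pct() {
    grep 'Objects compressed by:' "$1" | tail -1 \
        | awk -F: '{ gsub(/[ %]/, "", $2); print $2 }'
}
```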



Re: Teaching others TSM

2001-07-27 Thread Ford, Phillip

I believe the best lessons are learned during trial by fire (at least you
remember them).  Classes can help jump-start a person, but there is no way
that a person is ready to administer a system fully after a class.  One of
the biggest hurdles to overcome, for someone new to *SM who is familiar with
old styles of backups, is the monthly or weekly "back it up and put it
on the shelf" attitude.  *SM can be beaten into this mode but is not designed
for it.

In my personal experience, I like to work with something awhile and then go
to the class.  The class makes more sense and you can get problems that you
are struggling with worked out with the teacher.

Well so much for my 2 cents worth.


--
Phillip Ford
Senior Software Specialist
Corporate Computer Center
Schering-Plough Corp.
(901) 320-4462
(901) 320-4856 FAX
[EMAIL PROTECTED]



-Original Message-
From: Francisco Reyes [mailto:[EMAIL PROTECTED]]
Sent: Friday, July 27, 2001 10:07 AM
To: [EMAIL PROTECTED]
Subject: Teaching others TSM


What is the best way to teach someone how to use TSM?
When I learnt it some time back it took me 3 months of setting up the
server and clients and just trying different things to get all the
concepts.

When I came back to the company after 2 years I found the setup was in
fairly bad shape, even though one of the two people who maintained it while
I was gone went to training.

I think hands-on is the best training, but I don't think the two people I
need to train can be made available full time just to learn TSM.

What is the general consensus about the classes? Do they truly help new
users?

***
 This message and any attachments is solely for the intended recipient. If
you are not the intended recipient, disclosure, copying, use, or
distribution of the information included in this message is prohibited --
please immediately and permanently delete this message.



Re: AIX TSM scheduler problem

2001-07-27 Thread Matthew A. Bacchi

> Be aware that inittab runs things under sh, not ksh.  See man page.

Richard,
Not to split hairs here, and dangerously close to being off topic, but 
isn't sh linked to ksh? From the man page:

man sh:
Syntax
Refer to the syntax of the ksh command. The /usr/bin/sh file is linked to the
Korn shell.
Description
The standard configuration of the operating system links the /usr/bin/sh path 
to the Korn shell.

ls -l :
-r-xr-xr-x   4 bin  bin   240390 Dec 02 1999  /usr/bin/sh
-r-xr-xr-x   4 bin  bin   240390 Dec 02 1999  /usr/bin/ksh

that is on an AIX 4.3.3 system at least.  Maybe if your system is configured
differently you would have a different sh binary.

-Matt


/**
 **Matt Bacchi   [EMAIL PROTECTED]
 **IBM Global ServicesSDC Northeast
 **F6TG; MD Filesystems/Internet (802) 769-4072
 **ADSM & AFS/DFS Backup (tie) 446-4072
 **/



Re: Teaching others TSM

2001-07-27 Thread Martin, Jon R.

Francisco,

I have been administering TSM for three months now.  The previous
administrator is my manager so I have had someone on hand to answer my
questions.  I feel I have learned a lot with the "sink or swim" method and
subscribing to this mail list has helped also.  I first learned about the
clients and how they worked, then scheduling, and then moved onto things
like the DB, log, diskpools and tapepools etc...  My opinion is that if you
are not familiar with the material you won't get much from formal training.
I've done ok learning it one layer at a time and reading the Administrator's
Reference.

Thanks,
Jon Martin



-Original Message-
From: Francisco Reyes [mailto:[EMAIL PROTECTED]]
Sent: Friday, July 27, 2001 10:07 AM
To: [EMAIL PROTECTED]
Subject: Teaching others TSM


What is the best way to teach someone how to use TSM?
When I learnt it some time back it took me 3 months of setting up the
server and clients and just trying different things to get all the
concepts.

When I came back to the company after 2 years I found the setup was in
fairly bad shape, even though one of the two people who maintained it while
I was gone went to training.

I think hands-on is the best training, but I don't think the two people I
need to train can be made available full time just to learn TSM.

What is the general consensus about the classes? Do they truly help new
users?



Compression

2001-07-27 Thread Amini, Mehdi

If I have compression on in my options file, how can I find out what the
compression ratio is?

Mehdi Amini
LAN/WAN Engineer
ValueOptions
3110 Fairview Park Drive
Falls Church, VA 22042
Phone: 703-208-8754
Fax: 703-205-6879







Re: Backup Scheduling

2001-07-27 Thread Prather, Wanda

The backupset has ONE entry in the DB, in volumehistory.  The backups of
each file are not recorded in the DB, and don't participate in versioning.
So it's like having a volume dump on the shelf, except that the backupset is
made from data that is already in server storage, without participation of
the client and without sending the data across the network again.

-Original Message-
From: Francisco Reyes [mailto:[EMAIL PROTECTED]]
Sent: Friday, July 27, 2001 10:01 AM
To: [EMAIL PROTECTED]
Subject: Re: Backup Scheduling


On Fri, 27 Jul 2001, Len Boyle wrote:

> In article <[EMAIL PROTECTED]>, Steve Harris
> <[EMAIL PROTECTED]> says:
> 2) If you need to keep a snap shot of data for a long time  without the
>need to keep the data in the tsm data look at the backupset
>function to snap a copy of the active fileset on a tape set that
>can be removed from the tsm server.


I just read the description of the backupset and I don't see how it can
be used as a snapshot. From reading it, I didn't really understand what
a backupset is or does. In particular I don't see how it would
do what Len wrote above: "keep a snap shot of data for a long time
without the need to keep the data in the tsm data". Does that mean that
the backupset is not tracked by the DB? It seems TSM remembers or tracks
the existence of the backupset, since there is a parameter for that, but
what does it keep track of?

Thanks.



Re: Admin changes in last 2 years?

2001-07-27 Thread Prather, Wanda

I don't think an imagebackup is the same thing as a selective.
I think it's more like a file system dump - it takes the whole filespace, by
track or sector, without making individual DB entries for each backed-up
object.  So it's faster to dump, and much faster to restore.


-Original Message-
From: Francisco Reyes [mailto:[EMAIL PROTECTED]]
Sent: Friday, July 27, 2001 9:56 AM
To: [EMAIL PROTECTED]
Subject: Admin changes in last 2 years?


I was away from ADSM/TSM for about two years.
In the last two weeks I have been re-organizing a setup I inherited.

I am wondering what has changed in day-to-day admin functionality during
the two years I was away.

I still don't see a way to see why a scheduled client job failed. Even
though I only have a handful of clients, I don't see why this
functionality is still missing. After all, there are companies out there
backing up hundreds of nodes. I don't think it is practical for those
companies to go to each failed client.

Any improvements with point-in-time backup/restores? I see an
"imagebackup", but the description doesn't make much sense to
me. It reads pretty much like the same thing as a selective backup.



Admin changes in last 2 years?

2001-07-27 Thread Francisco Reyes

I was away from ADSM/TSM for about two years.
In the last two weeks I have been re-organizing a setup I inherited.

I am wondering what has changed in day-to-day admin functionality during
the two years I was away.

I still don't see a way to see why a scheduled client job failed. Even
though I only have a handful of clients, I don't see why this
functionality is still missing. After all, there are companies out there
backing up hundreds of nodes. I don't think it is practical for those
companies to go to each failed client.

Any improvements with point-in-time backup/restores? I see an
"imagebackup", but the description doesn't make much sense to
me. It reads pretty much like the same thing as a selective backup.



Teaching others TSM

2001-07-27 Thread Francisco Reyes

What is the best way to teach someone how to use TSM?
When I learnt it some time back it took me 3 months of setting up the
server and clients and just trying different things to get all the
concepts.

When I came back to the company after 2 years I found the setup was in
fairly bad shape, even though one of the two people who maintained it while
I was gone went to training.

I think hands-on is the best training, but I don't think the two people I
need to train can be made available full time just to learn TSM.

What is the general consensus about the classes? Do they truly help new
users?



Re: Backup Scheduling

2001-07-27 Thread Francisco Reyes

On Fri, 27 Jul 2001, Len Boyle wrote:

> In article <[EMAIL PROTECTED]>, Steve Harris
> <[EMAIL PROTECTED]> says:
> 2) If you need to keep a snap shot of data for a long time  without the
>need to keep the data in the tsm data look at the backupset
>function to snap a copy of the active fileset on a tape set that
>can be removed from the tsm server.


I just read the description of the backupset and I don't see how it can
be used as a snapshot. From reading it, I didn't really understand what
a backupset is or does. In particular I don't see how it would
do what Len wrote above: "keep a snap shot of data for a long time
without the need to keep the data in the tsm data". Does that mean that
the backupset is not tracked by the DB? It seems TSM remembers or tracks
the existence of the backupset, since there is a parameter for that, but
what does it keep track of?

Thanks.



Re: Backup Scheduling

2001-07-27 Thread Len Boyle

In article <[EMAIL PROTECTED]>, Steve Harris
<[EMAIL PROTECTED]> says:
>
>Whoops!!
>
>Looks like I got it backwards.  The example in the help under define
>copygroup is a little confusing and could be interpreted either way (I did
>check before I posted)
>
>One reason to do a full backup is to consolidate all active files onto as
>small a set of media as possible in order to facilitate a quick restore.
>
>Steve.
Hello Steve

For this function you might want to look at two functions.
1) Since in its normal operation tsm does incremental backups only, it
   normally uses much less tape real estate than full/incremental schemes.
   To keep the data needed for restores on just a few tapes, look at
   the collocation settings for the tape storage pool and the reclamation
   function to merge sparse tapes into full tapes.
   We use this feature and it does make a difference.

2) If you need to keep a snap shot of data for a long time  without the
   need to keep the data in the tsm data look at the backupset
   function to snap a copy of the active fileset on a tape set that
   can be removed from the tsm server.
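The admin-command side of point 1 looks roughly like this (the pool name is a placeholder and the reclamation threshold is illustrative; check the Administrator's Reference for the exact parameters at your server level):

```
update stgpool TAPEPOOL collocate=yes reclaim=60
query stgpool TAPEPOOL f=d
```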

len

-
Leonard Boyle   [EMAIL PROTECTED]
SAS Institute Inc.  ussas4hs@ibmmail
Room RB448  [EMAIL PROTECTED]
1 SAS Campus Drive  (919) 531-6241
Cary NC 27513



Re: Magstar 3570

2001-07-27 Thread Miles Purdy

I wrote this little script to do exactly what you want. I have to admit that 
performance does not increase linearly with the number of streams. 

Anyway, this script will run N archives concurrently. It finds all the files in a 
subdirectory, and sorts them by size. It then makes N archive scripts and runs them 
all together.

Change $MC (management class) and $NUM_STREAMS (this can be the number of tape drives + 1).

miles
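The heart of the script below is a round-robin deal of the size-sorted file list across the streams; as a standalone sketch:

```shell
# Deal arguments out to streams 1..n in rotation. With a largest-first
# input list, each stream gets a comparable mix of big and small files.
split_round_robin() {
    n=$1; shift
    i=1
    for f in "$@"; do
        echo "$i $f"          # stream number, then the file
        i=$(( i % n + 1 ))    # wrap back to stream 1 after stream n
    done
}
```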

#!/usr/bin/ksh
#
# Name: Fast Archive 2
#
# Purpose:
#
# Author: Miles Purdy
#
# Date:
#
# Required Parameters:
# -
#
# Optional Parameters:
# -
#
# Change History:
#
# Date  NameComments
# _
#

# set umask to prevent others from reading tmp files
umask 077

DIRECTORY=$1

RETCODE=0
MC=1day
NUM_STREAMS=7   # note: the loops below use -lt, so NUM_STREAMS-1 streams actually run

echo $0 started at $(date).

WORK_DIR=${WORK_DIR:=/tmp}
SCR_DIR=$UNIX_SYSTEM_DIRECTORY/bin

if [[ `hostname` != 'unxr' ]]
then
   echo "This script should be run from unxr."
   exit
fi

COUNT=1
/usr/bin/find $DIRECTORY -fstype jfs -type f -xdev -ls | sort -bnr +6.0 -7.0 \
 | sed "s/^.* \//\//" | while read LINE
do

   if [[ ! -s /tmp/file_list.$$.$COUNT ]]
   then
  echo "#!/usr/bin/ksh"  > /tmp/file_list.$$.$COUNT
  echo ""   >> /tmp/file_list.$$.$COUNT
  chmod +x /tmp/file_list.$$.$COUNT

   fi

   echo "$DSM_DIR/dsmc archive -archmc=$MC $LINE" >>/tmp/file_list.$$.$COUNT

   let "COUNT = COUNT + 1"
   if [[ $COUNT -eq $NUM_STREAMS ]]
   then
  COUNT=1
   fi

done


COUNT=1
while [[ $COUNT -lt $NUM_STREAMS ]]
do
   /tmp/file_list.$$.$COUNT 1>/tmp/$$.$COUNT 2>&1 &
   let "COUNT = COUNT + 1"
done

wait

COUNT=1
while [[ $COUNT -lt $NUM_STREAMS ]]
do
   cat /tmp/$$.$COUNT
   let "COUNT = COUNT + 1"
done

echo "==="
COUNT=1
while [[ $COUNT -lt $NUM_STREAMS ]]
do
   echo "Stream $COUNT:"
   cat /tmp/$$.$COUNT | grep -i "Total number of bytes transferred"
   cat /tmp/$$.$COUNT | grep -i "Elapsed processing time"
   cat /tmp/$$.$COUNT | grep -i "Aggregate data transfer rate"
   echo "---"
   let "COUNT = COUNT + 1"
done

# exit
echo $0 ended at $(date).
exit $RETCODE




---
Miles Purdy 
System Manager
Farm Income Programs Directorate
Winnipeg, MB, CA
[EMAIL PROTECTED]
ph: (204) 984-1602 fax: (204) 983-7557
---

>>> [EMAIL PROTECTED] 26-Jul-01 2:12:29 PM >>>
Hello *SMers,

Here is a situation that we are trying to look at to hopefully cut
down our archive time.  I was wondering if any of you have an idea if it
would be possible.

Currently we are running TSM 4.1.2.0 on an AIX 4.3.3.0

We run a nightly archive of 70 GB that is taking just over 8 hours
uncompressed.   In our Magstar 3570 we have two different tape drives.   My
question follows:   Is it possible to be running two different archives at
the same time to cut down the processing time?   If it is possible how would
you go at it?   I know I see allot of *SMers talking about backup clients at
the same time, but the information we back up comes off of one RS/6000 AIX
server.Is it possible to be running two sessions off the same machine at
the same time?


Any help would be appreciated.

Thanks in Advance,

Bill Wheeler
AIX Administrator
La-Z-Boy Incorporated
[EMAIL PROTECTED] 



Novell 4.1.3 Client

2001-07-27 Thread Bruce Kamp

Installing the 4.1.3 client on my Novell servers.  What I need to know is if
I still need to load the scheduler through DSMCAD or can I go back to using
DSMC SCHED?  There is no mention in the readme about this...

Thanks,
---
Bruce Kamp
Network Analyst II
Memorial Healthcare System
P:(954) 987-2020 x4542
Fax: (954) 985-1404
[EMAIL PROTECTED]




Re: server to server

2001-07-27 Thread Steven P Roder

Paul,

 Thanks so much!  that was it!


> Hello Steven,
>
> what about the label prefix in the device classes. It may be that they have
> to be the same in both old source server and new source server device
> classes. See APAR IC26603.
>
> Mit freundlichen Grüßen - With best regards
> Serdeczne pozdrowienia - Slan agus beannacht
> Paul Baines
> TSM/ADSM Consultant

On Thu, 26 Jul 2001, Steven P Roder wrote:

> Hi All,
>
>  I am having a problem with a server-to-server setup.  I am able to
> export a client, and it create a virtual volumes on the source server,
> with the data going over to the target, but with I start the import on the
> target, I get error positioning the volume.  Both servers are TSM 4.1.2.0
> under AIX 4.3.3.  Has anyone seen this before?  Here are the logs:
>
> export:
>
> 07/26/01 14:55:38 ANR2017I Administrator RODER issued command: EXPORT NODE
> tkssteve.cc.BUFFALO.-
> 07/26/01 14:55:38 EDU filespace=* do=* filedata=all preview=no
> devclass=2luke scratch=yes
> 07/26/01 14:55:38 ANR0984I Process 30 for EXPORT NODE started in the
> BACKGROUND at 14:55:38.
> 07/26/01 14:55:38 ANR0402I Session 1019 started for administrator RODER
> (Server) (Memory IPC).
> 07/26/01 14:55:38 ANR8340I SERVER volume 2LUKE.EXP.996173738 mounted.
> 07/26/01 14:55:38 ANR1360I Output volume 2LUKE.EXP.996173738 opened
> (sequence number 1).
> 07/26/01 14:55:38 ANR0610I EXPORT NODE started by RODER as process 30.
> 07/26/01 14:55:39 ANR0635I EXPORT NODE: Processing node
> TKSSTEVE.CC.BUFFALO.EDU in domain
> 07/26/01 14:55:39 UBMASTER.
> 07/26/01 14:55:39 ANR0637I EXPORT NODE: Processing file space
> \oemcomputerc$ for node
> 07/26/01 14:55:39 TKSSTEVE.CC.BUFFALO.EDU.
>
> q pr on source shows:
>
>  30 EXPORT NODE  ANR0648I Have copied the following: 1
> Nodes  1
>Filespaces  264 Backup Files  98212
> Bytes  (0
>errors have been detected).
>
>Current output volume:
> 2LUKE.EXP.996173738.
>
> q session on target:
>
>  2,725   Tcp/Ip   MediaW58 S  3.1 K   506   NodeAIX-RS/-
> ADSM
>
> export completes:
>
> 07/26/01 15:17:19 ANR1361I Output volume 2LUKE.EXP.996173738 closed.
> 07/26/01 15:17:19 ANR0617I EXPORT NODE: Processing completed with status
> SUCCESS.
> 07/26/01 15:17:19 ANR0626I EXPORT NODE: Copied 1 node definitions.
> 07/26/01 15:17:19 ANR0627I EXPORT NODE: Copied 1 file space 6 archive
> files, 41323 backup files,
> 07/26/01 15:17:19 and 0 space managed files.
> 07/26/01 15:17:19 ANR0629I EXPORT NODE: Copied 3347646298 bytes of data.
> 07/26/01 15:17:19 ANR0611I EXPORT NODE started by RODER as process 30 has
> ended.
> 07/26/01 15:17:19 ANR4006I EXPORT NODE: Volume 1 written by process is
> 2LUKE.EXP.996173738.
> 07/26/01 15:17:19 ANR0986I Process 30 for EXPORT NODE running in the
> BACKGROUND processed 41331
> 07/26/01 15:17:19 items for a total of 3,347,646,298 bytes with a
> completion state of SUCCESS at
> 07/26/01 15:17:19 15:17:19.
>
>
> ..a tape is mounted, and the data is written.  Then, I go to the target
> server, and issue the import command:
>
> 07/26/01 15:25:13 ANR2017I Administrator RODER issued command: IMPORT NODE
> tkssteve.cc.BUFFALO.-
> 07/26/01 15:25:13 EDU filespace=* do=ubmaster filedata=all devclass=2luke
> preview=no dates=abso-
> 07/26/01 15:25:13 lute vol=2LUKE.EXP.996173738
> 07/26/01 15:25:13 ANR0984I Process 440 for IMPORT NODE started in the
> BACKGROUND at 15:25:13.
> 07/26/01 15:25:13 ANR0402I Session 2740 started for administrator RODER
> (Server) (Memory IPC).
> 07/26/01 15:25:13 ANR0406I Session 2742 started for node ADSM
> (AIX-RS/6000) (Tcp/Ip 128.205.7.8-
> 07/26/01 15:25:13 0(56589)).
> 07/26/01 15:25:13 ANR8340I SERVER volume 2LUKE.EXP.996173738 mounted.
> 07/26/01 15:25:13 ANRD pvrserv.c(650): Error positioning SERVER volume
> 2LUKE.EXP.996173738 to
> 07/26/01 15:25:13 1:0.
> 07/26/01 15:25:13 ANRD xibf.c(902): Error 5 encountered in opening
> import stream.
> 07/26/01 15:25:13 ANR0661E IMPORT NODE: Internal error encountered in
> accessing data storage.
> 07/26/01 15:25:13 ANR0985I Process 440 for IMPORT NODE running in the
> BACKGROUND completed with
> 07/26/01 15:25:13 completion state FAILURE at 15:25:13.
> 07/26/01 15:25:13 ANR0568W Session 2740 for admin RODER (Server)
> terminated - connection with
> 07/26/01 15:25:13 client severed.
>
> If I do a q mount, the virtual volumes is mounted:
>
> tsm: ADSM_AIX>q mount
> ANR8332I SERVER volume 2LUKE.EXP.996173738 is mounted R/O, status: IDLE.
>
> Has anyone seen this error before, and know it's cause?  I have tried
> multiple exports, both with the data going to disk, and tape on the target
> server, to no avail.  Is this a bug in 4.1.2.0?  I did this successfully
> from a Solaris 3.7.3 to AIX 4.1.2.0.
>
> Thanks for any help.
>
> Steve Roder, University at Buffalo
> HOD Service Coordinator
> VM Systems Programmer
> UNIX Systems Administrator

Re: ANR7804I mystery

2001-07-27 Thread Richard Sims

>After I got all this fixed up, I rebooted and went to start TSM manually
>because the startup script was failing.  I had no filesystem errors on that
>boot as far as I can remember, and I was watching carefully.  On running
>./dsmserv, I got this error:
>
>ANR7804I Unable to lock dsmserv.lock in this directory. - Resource
>temporarily unavailable
...
>I looked in the messages manual, and found some information specific to AIX
>and HP-UX under ANR7804, but that was do with having a server already
>running, which was not the case.

Lesley - You don't specify, but I would assume that your system boot
initiation files invoke the TSM server, which would account for the prior
instance.  Though that initial instance was either delayed or non-functional
(you could not enter an administrative session), the process itself would be
there.  Dropping the operating system to single-user mode should have
disposed of the stalled initial process.

Every time I have encountered that message (thankfully, few), there has been
a prior server process in existence, which had to be dealt with.

  Richard Sims, BU



Re: Retrieving Associated Volumes from Off-Site

2001-07-27 Thread Zlatko Krastev/ACIT

I am just a beginner but what I read in the docs was that you have to do it
in REVERSE order!!!
Have a look at RedBook SG24-4877 "Tivoli Storage Management Concepts" (it
can be downloaded from http://www.redbooks.ibm.com).

At page 88 (p.110 in the PDF) you can read:
"   5.4.2 Reclamation of offsite volumes
Care must be taken in reclaiming offsite copy storage volumes. Tivoli
Storage Manager cannot physically move the data from one of these volumes
to another, because they are in a vault, not available in the library.
Tivoli Storage Manager manages reclamation for an offsite copy pool by
obtaining the active files from a primary storage pool or from an onsite
volume of a copy pool. These files are then written to a new volume in the
copy pool, and the database is updated. A message is then issued that an
offsite volume was reclaimed. The new volume will get moved to the offsite
location, and the offsite volumes, whose active data it has combined, will
be moved back to the scratch pool onsite, when the reuse delay parameter
has been satisfied. "

Thus you first have to reclaim the off-site volumes using data from the primary
pools. Your copy data will not be lost, and all volumes (see below) will be
reclaimed. You get a list of emptied volumes which, in phase two, can be
retrieved back on-site. Do this only after you are sure the new copy volume(s)
have arrived off-site; on the trip back you can take the listed free/scratch
volumes with you.

Remark: a reclaimable volume may contain just part of a file, with the rest on
another volume (your "associated" one). The file is reclaimed from both onto a
third volume (the new copy volume), and the second volume's usage can then also
drop below 5% (it becomes 95% reclaimable), giving you yet another volume for
reclamation.
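Phase two of that cycle is easy to sketch; a few lines of Python to filter the volumes ready to come back on-site (field names and volume names are invented here, real data would come from a "q vol" or a SELECT against the volumes table):

```python
# Made-up volume records; pct_reclaimable >= 95 plus ACCESS=OFFSITE marks
# a copy-pool volume that reclamation has (nearly) emptied at the vault.
volumes = [
    {"name": "OFF001", "pct_reclaimable": 97.2, "access": "OFFSITE"},
    {"name": "OFF002", "pct_reclaimable": 40.0, "access": "OFFSITE"},
    {"name": "OFF003", "pct_reclaimable": 95.0, "access": "OFFSITE"},
    {"name": "ON0001", "pct_reclaimable": 99.0, "access": "READWRITE"},
]

to_retrieve = sorted(v["name"] for v in volumes
                     if v["access"] == "OFFSITE"
                     and v["pct_reclaimable"] >= 95)
print(to_retrieve)  # ['OFF001', 'OFF003']
```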

Greetings
Zlatko Krastev
IT Consultant





Ruth Robertson <[EMAIL PROTECTED]> on 26.07.2001 22:08:47
Please respond to "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>
To: [EMAIL PROTECTED]
cc:

Subject:Retrieving Associated Volumes from Off-Site

My copy storage pool volumes go off-site.  Once a week, I start reclamation
for the off-site volumes: I list out volumes where percent reclaimable space
>= 95, I retrieve those volumes from off-site, check them in, and set
reclamation to 95.  Works great...except for a few volumes that complain
about an associated volume before they'll be reclaimed.  The associated
volumes don't show up in the reclamation list, and are therefore not
retrieved from off-site.  How do I get the list of volumes associated with
those in the reclamation list?
Since I only receive the ADSM-L Digest, please email me directly.
Thanks!
Ruth Robertson, Unix Administrator
[EMAIL PROTECTED]



Re: Backup Scheduling

2001-07-27 Thread Andrew Raibeck

Yeah, I goofed too! There is no MODE setting of CHANGEDONLY; the available
MODE settings are MODIFIED and ABSOLUTE.

Regards,

Andy

Andy Raibeck
IBM Tivoli Systems
Tivoli Storage Manager Client Development
e-mail: [EMAIL PROTECTED]
"The only dumb question is the one that goes unasked."
"The command line is your friend"





Steve Harris <[EMAIL PROTECTED]>
Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>
07/26/2001 21:35
Please respond to "ADSM: Dist Stor Manager"


To: [EMAIL PROTECTED]
cc:
Subject:Re: Backup Scheduling




Whoops!!

Looks like I got it backwards.  The example in the help under define
copygroup is a little confusing and could be interpreted either way (I did
check before I posted)

One reason to do a full backup is to consolidate all active files onto as
small a set of media as possible in order to facilitate a quick restore.

Steve.

>>> Andrew Raibeck <[EMAIL PROTECTED]> 27/07/2001 10:51:24 >>>
Steve,

I'm not sure I follow the FREQUENCY recommendation.

FREQUENCY indicates the minimum number of days that must pass before the
file is eligible for backup, regardless of whether it has changed. So if
FREQUENCY is set to 30, then the file will not be backed up until at least
30 days have transpired since the last time it was backed up, even if it
changes every day. If the file does not change at all, then it will not be
backed up via incremental backup, regardless of FREQUENCY.

The easiest way to get a full backup would be to set the management
class(es) MODE parameter to ABSOLUTE (normally it is set to CHANGEDONLY,
which causes normal incremental backup behavior). The MODE=ABSOLUTE says
to back up the file regardless of whether it has changed. (Note that this
does not change how EXCLUDE statements work; if the file is EXCLUDEd, it
will not be backed up at all).

Assuming that the only reason for running weekly or monthly "full" backups
is for the sake of getting a full backup... well, I'm not sure there's
much point to doing that. But if weekly or monthly backups are required to
be kept for a longer period of time, then this would (most likely) be a
task better suited to ARCHIVE or GENERATE BACKUPSET.
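A sketch of the server-side setup for MODE=ABSOLUTE (domain, policy set, class, and pool names are all invented; check DEFINE COPYGROUP in the Administrator's Reference for your server level before using):

```
define copygroup MYDOMAIN MYPOLICY FULLCLASS STANDARD type=backup -
   mode=absolute destination=TAPEPOOL
activate policyset MYDOMAIN MYPOLICY
```

Clients bound to FULLCLASS would then send every non-excluded file on each incremental, changed or not.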

Regards,

Andy

Andy Raibeck
IBM Tivoli Systems
Tivoli Storage Manager Client Development
e-mail: [EMAIL PROTECTED]
"The only dumb question is the one that goes unasked."
"The command line is your friend"

Please respond to "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>
Sent by:"ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>
To: [EMAIL PROTECTED]
cc:
Subject:Re: Backup Scheduling



Randall,

You can get a full backup by running a "selective" rather than
"incremental", however there is another way if you are willing to
sacrifice the exact day on which a full backup is done.

Specify a FREQUENCY parameter of say 30 days in the copygroup for these
files. Then run your daily incrementals. Any file that changes will be
backed up, and any file that hasn't been backed up in 30 days will also be
backed up again.  Provided you stagger your implementation, this should
prove as effective as a monthly full.

There is a caveat in that if you manage retention by number of versions
rather than by retention period, each selective or backup due to frequency
will push the oldest retained version off the list and it will no longer
be available. It will probably make no difference, but you should be
aware.

Regards

Steve Harris
AIX and TSM Admin
Queensland Health, Brisbane Australia



>>> Randal L Riedinger <[EMAIL PROTECTED]> 27/07/2001
3:41:19 >>>
I'm new to TSM and I am setting up a system for the first time.  We
have around 4,000 desk top systems and 200 servers.  Most of the
desk top systems will have weekly backups of a selected directory tree.
The
rest will have monthly full backups and daily incrementals. We have
10/100 Ethernet to most of the systems and FDDI to some of the servers.
We currently have a 100 Mbps FDDI backbone linking the routers but we are
upgrading to 1 Gbps Ethernet.  The TSM server has 1 Gbps Ethernet off of
the new backbone.  We can do the small full backups on weeknights and
the large ones over the weekend.  The weekly backups will need to
be staggered throughout the month.  If a full backup fails it needs to
be rescheduled for some number of retries with notification of the
failures.
How do I do this?  My next question is how do I set up the schedules?  I
can
visualize defining 7 weekly schedules for each day of the week and 31
monthly
schedules for each day of the month and associating the nodes to them.  Is
there
a better way to do this?  What do you have to do to get a full backup
verses an
incremental?
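One hedged sketch of the classic-schedule approach (all names invented; see DEFINE SCHEDULE in the Administrator's Reference for the exact options at your level): one weekly schedule per weekday, with nodes spread across them via associations:

```
define schedule MYDOMAIN WEEKLY_MON action=incremental -
   starttime=20:00 duration=2 durunits=hours -
   period=1 perunits=weeks dayofweek=monday
define association MYDOMAIN WEEKLY_MON node01,node02,node03
```

As far as I know there is no built-in re-run of a failed schedule; the usual practice is to catch failures from QUERY EVENT output and re-run by hand or by script.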



NT drive mappings

2001-07-27 Thread Thomas, Matthew

We've got two TSM 4.1 servers running on NT4 testing boxes which are
SAN-connected to a 3494 tape library.
Intermittently the drive definitions for the servers get screwed up when the
boxes are rebooted and Windows merrily re-assigns the devices (for example
mt1.0.0.6 becomes mt1.0.0.4). Trawling through the registry gives the new
device info so we can update the drives in TSM as necessary so that it
matches up; however, this is a bit of a long-winded process which won't be
acceptable to our support staff when the servers go into 'live' status.

My question is:
Is there a way of forcing NT to stick with the same devices when the server
is rebooted, and if not, is there a more elegant way of grabbing the device
details (since the frontline support guys probably won't be allowed to go
hacking around in the registry) and perhaps dropping them into a script to
update the drive definitions in TSM?
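On the second part: once the new special-file name has been dug out, the TSM update itself is scriptable through the administrative CLI, along these lines (library name, drive names, and password are examples only):

```
dsmadmc -id=admin -password=xxxxx "update drive LIB3494 DRIVE1 device=mt1.0.0.4"
dsmadmc -id=admin -password=xxxxx "update drive LIB3494 DRIVE2 device=mt2.0.0.4"
```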

Many thanks,
Matt Thomas
Storage Support


---
This e-mail is intended only for the above addressee.  It may contain
privileged information. If you are not the addressee you must not copy,
distribute, disclose or use any of the information in it.  If you have
received it in error please delete it and immediately notify the sender.

evolvebank.com is a division of Lloyds TSB Bank plc.
Lloyds TSB Bank plc, 71 Lombard Street, London EC3P 3BS.  Registered in
England, number 2065.  Telephone No: 020 7626 1500
Lloyds TSB Scotland plc, Henry Duncan House, 120 George Street,
Edinburgh EH2 4LH.  Registered in Scotland, number 95237.  Telephone
No: 0131 225 4555

Lloyds TSB Bank plc and Lloyds TSB Scotland plc are regulated by the
Personal Investment Authority and represent only the Scottish Widows
and Lloyds TSB Marketing Group for life assurance, pensions and
investment business.

Members of the UK Banking Ombudsman Scheme and signatories to the UK
Banking Code.
---



Re: Tape volume list ?

2001-07-27 Thread Stan Vernaillen

Robin,

Thanks for your precise answer.

Now a bit off-topic maybe... regarding your Perl script.
I've put it in the root of the webserver, but when I point my browser at it, it
just displays the contents of your script as a text file; it does not execute
it.
OS is AIX, perl is 5.6.0.
Is this me, or can I blame something else?

Stan




Robin Sharpe <[EMAIL PROTECTED]> on 26/07/2001 16:40:53

Please respond to "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>

To:   [EMAIL PROTECTED]
cc:(bcc: Stan Vernaillen/BE/CCE)
Subject:  Re: Tape volume list ?



>The thing is that i'm not interested in the Storage pool volumes or
volumes
>in the library,...
>I want to know ALL volumes.
>Also the DB backups, the scratch tapes in the desk of the Media team, ...
>Basicly every tape that has ever been labeled on that server.

>How else do you know what label to give to a new tape ?
>I know I've labeled tape ec0001 through eC0100, but if a colleague wants
to
>label ten more, how does he know they have to be ec0101 to ec0110 if q
>vol,.. shows the highest volume in use, or the library to be ec0071 ?

Stan,

Unfortunately, you've uncovered one of the, uh, "clumsy" parts of TSM.  The
way TSM manages scratch tapes, that is, tapes that are now empty and ready
to be re-used, is to not manage them at all!  What TSM does when a tape
becomes empty and is brought back onsite is delete it from the volumes
table.  It will not be known to TSM again until it is checked into a
library as a scratch, and then it will be in the libvolumes table.
However, the volhistory table (do a "q volhist", the output may be large)
does keep a record of when the tape was used and deleted.  If the tape has
been re-used five times, there should be ten records in volhistory -- a
"STGNEW" and a "STGDELETE" for each use.  This table is also where TSM DB
backups and backupsets are managed (they do not belong to any storage pool,
and are not in the volumes table).

You also said you need to track the tapes in the drawer of your media
group... TSM cannot do that, you'll have to do it manually.  Those tapes
are the ones that were deleted when they came back from the vault, and will
not be seen again until they get checked in.  Ideally, in TSM, you should
check ALL tapes back into the library as soon as possible (maybe keep a
couple on hand in case you need to do a quick backup and have no
scratches).  This presumes of course that your library is large enough to
hold all of your tapes (ours isn't, so we're in process of upgrading to a
larger library).

As for knowing what label to allocate next, you'll have to manually track
that also.  As Matt said, I think most of us use pre-printed barcode
labels, and that is basically the "volser number tracking system".
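That bookkeeping is simple enough to mimic; a Python sketch of deriving the "deleted but once-known" tapes and the next free label from volhistory-style records (the record layout and volume names are invented, real input would come from "q volhist"):

```python
# (volume, event) tuples in chronological order, mimicking the
# STGNEW/STGDELETE pairs described above.  Made-up data.
history = [
    ("EC0001", "STGNEW"), ("EC0001", "STGDELETE"),  # used once, came back
    ("EC0002", "STGNEW"),                           # still in a stgpool
]

last_event = {}
for vol, event in history:
    last_event[vol] = event

# Tapes TSM once knew but has since deleted (e.g. sitting in a drawer):
deleted = sorted(v for v, e in last_event.items() if e == "STGDELETE")

def next_label(prefix, hist):
    """Highest numeric suffix ever seen for the prefix, plus one."""
    nums = [int(v[len(prefix):]) for v, _ in hist if v.startswith(prefix)]
    return "%s%04d" % (prefix, (max(nums) + 1) if nums else 1)

print(deleted, next_label("EC", history))  # ['EC0001'] EC0003
```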

Sorry I can't give any better advice... we've been down the same road,
trying to locate missing tapes.  They are a fact of life, I guess.  That's
what originally prompted me to write the qtapet Perl script.

Robin Sharpe
Berlex Laboratories



AW: Why am I rejected ??

2001-07-27 Thread Roland Bender

Maybe your client is locked in the ADSM Admin Client. Or your client is
unknown to ADSM.

Roland

> -Ursprüngliche Nachricht-
> Von:  Johan Pol [SMTP:[EMAIL PROTECTED]]
> Gesendet am:  Freitag, 27. Juli 2001 11:26
> An:   [EMAIL PROTECTED]
> Betreff:  Why am I rejected ??
> 
> Why am I rejected ??
> 
> with kind regards, / met vriendelijke groet,
> Johan Pol
> Boerhaavelaan 11 - 2713 HA - Zoetermeer
> Tel. : (+31)79 3223051
> Fax. : (+31)79 3213989
> Mobile : (+31)6 51384705
> Internet : [EMAIL PROTECTED]



Re: Include/Exclude needed for Mac?

2001-07-27 Thread Ian Smith

Wanda,

Here is our default list that we install on all Mac clients.
It's almost certainly not exhaustive and I'd be interested
to have any comments and/or further additions to the exclude list.
A point of note - the weird Ä character is the ASCII representation
of CHR(196) and on the Mac looks like a leant-over f (I'm told it's
the florin character).
I too am no Mac guru, but am told that a wildcarded exclusion of all
Cache folders is not advisable as some Cache folders on the Mac hold
important app/user data.

Exclude "...:Desktop DB"
Exclude "...:Desktop DF"
Exclude "...:Desktop"
Exclude "...:Trash:...:*"
Exclude "...:Wastebasket:...:*"
Exclude "...:VM Storage"
Exclude "...:Norton FileSaver Data"
Exclude "...:Norton VolumeSaver Data"
Exclude "...:Norton VolumeSaver Index"
Exclude.dir"...:System Folder:Preferences:cache-cache"
Exclude.dir"...:System Folder:Preferences:Netscape Users:...:Cache 
Ä"
Exclude.dir"...:System Folder:Preferences:Netscape Ä:Cache Ä"
Exclude.dir"...:System Folder:Preferences:Explorer:Temporary Files"
Exclude "...:Temporary Items:...:*"
Exclude "...:...:TheFindByContentIndex"
Exclude "...:aaa?*"
Exclude "...:...:TSM Sched*"
Exclude "...:...:TSM Error*"

HTH
Ian Smith

Ian Smith - HFS Backup / Archive Services   
Oxford University Computing Services








>MIME-Version: 1.0
>Date: Thu, 26 Jul 2001 17:31:34 -0400
>From: "Prather, Wanda" <[EMAIL PROTECTED]>
>Subject: Include/Exclude needed for Mac?
>To: [EMAIL PROTECTED]
>
>I am NOT Mac literate -
>Would someone kindly share with me their INCLUDE/EXCLUDE list for Mac
>clients?
>
>There is a small set in the "Using the Mac Clients" book, but it looks to me
>like there are other directories that can be exluded, like Netscape cache?
>or the Spool Folder? Temporary Items?
>
>Thanks!



Re: ANR0102E dsalloc.c(952): Error 1 inserting row ...

2001-07-27 Thread Suad Musovich

Looks like you have a database inconsistency.

I had a similar problem last year and ended up cleaning it up with an AUDITDB.

Check with Tivoli support to confirm this is the case (I didn't have sessions
die on me, so yours might be a new problem).
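For the archives: the audit runs offline against a halted server, along the lines of the fragment below. This is from memory, so check the syntax against the Administrator's Reference for your level, and take a full database backup before fixing anything:

```
(halt the server first, then from the server directory)
dsmserv auditdb fix=yes
```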

Suad
--

On Fri, Jul 27, 2001 at 10:55:06AM +0200, Reinhard Mersch wrote:
> Hello,
>
> last night my server (4.1.2.0 on AIX 4.3.3) showed a lot of errors:
> ANR0102E dsalloc.c(952): Error 1 inserting row in table
>  "DS.Segments".
>
> They were accompanied by some dying client sessions:
> ANR0530W Transaction failed for session 83419 for node
>  X (AIX) - internal server error detected.
>
> Anybody seen this?
>
> --
> Reinhard MerschWestfaelische Wilhelms-Universitaet
> Zentrum fuer Informationsverarbeitung - ehemals Universitaetsrechenzentrum
> Roentgenstrasse 9-13, D-48149 Muenster, Germany  Tel: +49(251)83-31583
> E-Mail: [EMAIL PROTECTED]   Fax: +49(251)83-31653



Why am I rejected ??

2001-07-27 Thread Johan Pol

Why am I rejected ??

with kind regards, / met vriendelijke groet,
Johan Pol
Boerhaavelaan 11 - 2713 HA - Zoetermeer
Tel. : (+31)79 3223051
Fax. : (+31)79 3213989
Mobile : (+31)6 51384705
Internet : [EMAIL PROTECTED]



ANR0102E dsalloc.c(952): Error 1 inserting row ...

2001-07-27 Thread Reinhard Mersch

Hello,

last night my server (4.1.2.0 on AIX 4.3.3) showed a lot of errors:
ANR0102E dsalloc.c(952): Error 1 inserting row in table
 "DS.Segments".

They were accompanied by some dying client sessions:
ANR0530W Transaction failed for session 83419 for node
 X (AIX) - internal server error detected.

Anybody seen this?

--
Reinhard MerschWestfaelische Wilhelms-Universitaet
Zentrum fuer Informationsverarbeitung - ehemals Universitaetsrechenzentrum
Roentgenstrasse 9-13, D-48149 Muenster, Germany  Tel: +49(251)83-31583
E-Mail: [EMAIL PROTECTED]   Fax: +49(251)83-31653



ANR7804I mystery

2001-07-27 Thread Walker, Lesley R

TSM 3.7.3 on Solaris, with Veritas Volume Manager.

First I had some nasty filesystem problems, which were caused by somebody
(not me!) installing some software which panicked and rebooted several
times.  I have a mirrored rootdisk, and the volumes got out of sync and
/etc/vfstab had the wrong information in it (and that's another question,
but I won't ask it here).

After I got all this fixed up, I rebooted and went to start TSM manually
because the startup script was failing.  I had no filesystem errors on that
boot as far as I can remember, and I was watching carefully.  On running
./dsmserv, I got this error:

ANR7804I Unable to lock dsmserv.lock in this directory. - Resource
temporarily unavailable

I went to my other TSM servers to look up the error, and got
"No help text could be found for this message".

I looked in the messages manual, and found some information specific to AIX
and HP-UX under ANR7804, but that was to do with having a server already
running, which was not the case.

I looked for dsmserv.lock on my test lab server, couldn't find it anywhere,
either with the server up or down.

I then thought, Doh! I probably have to restore the database, I'll do that
on Monday.

Then for no particularly good reason, I took the box down to single user
mode and then back up again.  I didn't do anything else, and I certainly
didn't restore the database.

This time the startup script worked, and TSM is running happily.  Querying
the actlog shows a normal startup, no recovery activity or anything.

Anyone have any idea what REALLY happened when I got that ANR7804I?


--
Lesley Walker
Unix Engineering, EDS New Zealand
[EMAIL PROTECTED]
"I feel that there is a world market for as many as five computers"
Thomas Watson, IBM corp. - 1943