Re: Client for Linux

2005-07-14 Thread Andrew Ferris
Hi Yiannakis,

That sounds like a bunch of TSM server RPMs. The easiest way to grab a
Linux client, bearing in mind that only SLES and RHEL are officially
supported, is to hit the storage FTP site.

ftp://service.boulder.ibm.com/storage/tivoli-storage-management/

Then the path would be either /maintenance/client/[VersionRelease] or
/patches/client/[VersionRelease]
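
For example, a hypothetical path for a 5.3 Linux x86 client (the exact
directory names vary by version and platform, so browse the FTP listing
rather than trusting this path verbatim):

```
ftp://service.boulder.ibm.com/storage/tivoli-storage-management/maintenance/client/v5r3/Linux/LinuxX86/
```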

Andrew Ferris
Network Support Analyst
iCAPTURE Research Centre
University of British Columbia
>>> [EMAIL PROTECTED] 07/14/05 10:15 PM >>>
Hi,
I'm looking to install a TSM client for a Linux server. From the set of CDs
I have, I've found:
a. TSM for Unix Clients (non-AIX)
b. TSM for Linux

On the first CD there's a readme file that says it contains TSM for
OS/400, so that's ruled out.
The second CD's readme file has the following paragraph:

3) RPM Packages included:

IA32:
TIVsm-server-5.2.2-0.i386.rpm ...TSM Server
TIVsm-stagent-5.2.2-0.i386.rpm ..TSM Storage Agent - for LAN-free backups
TIVsm-tsmscsi-5.2.2-0.i386.rpm ..TSM SCSI Device Drivers for non-IBM devices

s390 (31bit zSeries):
TIVsm-server-5.2.2-0.s390.rpm ...TSM Server
TIVsm-stagent-5.2.2-0.s390.rpm ..TSM Storage Agent - for LAN-free backups
TIVsm-tsmscsi-5.2.2-0.s390.rpm ..TSM SCSI Device Drivers for non-IBM devices

s390x (64bit zSeries):
TIVsm-server-5.2.2-0.s390x.rpm ..TSM Server
TIVsm-stagent-5.2.2-0.s390x.rpm .TSM Storage Agent - for LAN-free backups
TIVsm-tsmscsi-5.2.2-0.s390x.rpm .TSM SCSI Device Drivers for non-IBM devices

ppc64 (64bit pSeries):
TIVsm-server-5.2.2-0.ppc64.rpm ..TSM Server
TIVsm-stagent-5.2.2-0.ppc64.rpm .TSM Storage Agent - for LAN-free backups
TIVsm-tsmscsi-5.2.2-0.ppc64.rpm .TSM SCSI Device Drivers for non-IBM devices

Architecture independent pkgs:
TIVsm-webadmin-5.2.2-0.noarch.rpm ...TSM Server Web Administrative Interface
TIVsm-webhelpen_US-5.2.2-0.noarch.rpm ...TSM Server Web Administrative Interface

As far as I understand, there's no Linux client included there either.

Could someone point me in the right direction? Thanks

Yiannakis Vakis
Systems Support Group, I.T.Division
Tel. 22-848523, 99-414788, Fax. 22-337770


Client for Linux

2005-07-14 Thread Yiannakis Vakis
Hi,
I'm looking to install a TSM client for a Linux server. From the set of CDs
I have, I've found:
a. TSM for Unix Clients (non-AIX)
b. TSM for Linux

On the first CD there's a readme file that says it contains TSM for OS/400,
so that's ruled out.
The second CD's readme file has the following paragraph:

3) RPM Packages included:

IA32:
TIVsm-server-5.2.2-0.i386.rpm ...TSM Server
TIVsm-stagent-5.2.2-0.i386.rpm ..TSM Storage Agent - for LAN-free backups
TIVsm-tsmscsi-5.2.2-0.i386.rpm ..TSM SCSI Device Drivers for non-IBM devices

s390 (31bit zSeries):
TIVsm-server-5.2.2-0.s390.rpm ...TSM Server
TIVsm-stagent-5.2.2-0.s390.rpm ..TSM Storage Agent - for LAN-free backups
TIVsm-tsmscsi-5.2.2-0.s390.rpm ..TSM SCSI Device Drivers for non-IBM devices

s390x (64bit zSeries):
TIVsm-server-5.2.2-0.s390x.rpm ..TSM Server
TIVsm-stagent-5.2.2-0.s390x.rpm .TSM Storage Agent - for LAN-free backups
TIVsm-tsmscsi-5.2.2-0.s390x.rpm .TSM SCSI Device Drivers for non-IBM devices

ppc64 (64bit pSeries):
TIVsm-server-5.2.2-0.ppc64.rpm ..TSM Server
TIVsm-stagent-5.2.2-0.ppc64.rpm .TSM Storage Agent - for LAN-free backups
TIVsm-tsmscsi-5.2.2-0.ppc64.rpm .TSM SCSI Device Drivers for non-IBM devices

Architecture independent pkgs:
TIVsm-webadmin-5.2.2-0.noarch.rpm ...TSM Server Web Administrative Interface
TIVsm-webhelpen_US-5.2.2-0.noarch.rpm ...TSM Server Web Administrative Interface

As far as I understand, there's no Linux client included there either.

Could someone point me in the right direction? Thanks

Yiannakis Vakis
Systems Support Group, I.T.Division
Tel. 22-848523, 99-414788, Fax. 22-337770


Your note to me.

2005-07-14 Thread Morten Kongsted
I will be out of the office starting 15-07-2005 and will not return until
01-08-2005.

In this period I might not get a chance to read my mail.

If you cannot wait for a reply until my return, please send your request to
[EMAIL PROTECTED] for a more prompt response.

Otherwise I will respond to your message when I return or if I get a chance
to look at it during my absence.


Openafs to tsm

2005-07-14 Thread Bobby Cheema
Hi All

I am trying to set up AFS to back up volumes to a TSM server.

Setup:

Sun Fire V100 running Solaris 10
OpenAFS 1.3.85 compiled with --enable-tivoli
Sun TSM client 5.3; TSM server runs 5.2.2

Problem:

I can successfully back up my volumes to the TSM server using butc. However,
when I try to restore the vols, butc dies with a seg fault.
I enabled trace on the XBSA libs and am attaching the logs here. It appears
that the connection dies with error 2200, which according to the XBSA headers
means
#define DSM_RC_MORE_DATA   2200 /* There are more data to restore   */

Somehow it appears that butc is leaving the field without even playing the
match.




bash-3.00# cat /TRACE
TSM Trace   IBM Tivoli Storage Manager 5.3.0.0
Build Date:  Tue Dec  7 10:19:17 2004
BEGINNING NEW TRACE

07/14/2005 14:07:37.634 : trace.cpp   (2007): Tracing to file: //TRACE
07/14/2005 14:07:37.635 : trace.cpp   (2008): Tracefile maximum length 
set to 0 MB.
07/14/2005 14:07:37.635 : trace.cpp   (2020): 

07/14/2005 14:07:37 - Trace begun.
07/14/05   14:07:37.639 : dsmsetup.cpp( 732): dsmInit ENTRY: mtFlag is 0
07/14/05   14:07:37.639 : dsmsetup.cpp( 733): dsmiDir is
>/opt/tivoli/tsm/client/api/bin<
07/14/05   14:07:37.639 : dsmsetup.cpp( 734): dsmiConfig is 
>/usr/bin/dsm.opt<
07/14/05   14:07:37.639 : dsmsetup.cpp( 735): dsmiLog is><
07/14/05   14:07:37.639 : dsmsetup.cpp( 736): logName is
>dsierror.log<
07/14/05   14:07:37.639 : dsmsetup.cpp( 800): ApiSetUp : completed 
successfully
07/14/05   14:07:37.641 : dsminit.cpp (1907): dsmInit ENTRY:
07/14/05   14:07:37.641 : dsminit.cpp (1909): caller's ver/rel/lev = 
5/3/0/0  library's ver/rel/lev = 5/3/0/0.
07/14/05   14:07:37.642 : dsminit.cpp (1915): applType : >TSMXOPEN 
SOL26<, configfile : ><, options ><
07/14/05   14:07:39.690 : dsminit.cpp (2141): dsmInit Session started 
Handle = 1. Use TrustedAgent = false.
07/14/05   14:07:39.690 : dsminit.cpp (2144): dsmInit: node parm = 
>afs-1.tmklabs<.
07/14/05   14:07:39.690 : dsminit.cpp (2147):  owner parm = 
>NULL<.
07/14/05   14:07:39.690 : dsminit.cpp (2150): 
ArchiveRetentionProtection: client= NO server= NO.
07/14/05   14:07:39.690 : dsminit.cpp (2154): useTsmBuffers = >0< 
numTsmBuffers >0<
07/14/05   14:07:39.690 : dsminit.cpp (2156): enableClientEncrKey = NO 
encryptKeyEnabled = NO
07/14/05   14:07:39.690 : dsminit.cpp (2159): useUnicode= NO, 
crossPlatform= NO
07/14/05   14:07:39.700 : dsminit.cpp (1227): dsmInit EXIT: rc = >0<.
07/14/05   14:07:39.700 : dsmsess.cpp ( 414): dsmQuerySessInfo ENTRY: 
dsmHandle=1, apiInfoP:>ffbf97b4<
07/14/05   14:07:39.700 : dsmsess.cpp ( 484): dsmQuerySessInfo: 
completed
07/14/05   14:07:39.700 : dsmsess.cpp ( 485): dsmQuerySessInfo: 
Server's ver/rel/lev = 5/2/2/0
07/14/05   14:07:39.700 : dsmsess.cpp ( 488): dsmQuerySessInfo: 
ArchiveRetentionProtection : No
07/14/05   14:07:39.700 : dsmsess.cpp ( 495): dsmQuerySessInfo EXIT: rc 
= >0<.
07/14/05   14:07:39.700 : dsmsess.cpp ( 414): dsmQuerySessInfo ENTRY: 
dsmHandle=1, apiInfoP:>ffbfbca4<
07/14/05   14:07:39.700 : dsmsess.cpp ( 484): dsmQuerySessInfo: 
completed
07/14/05   14:07:39.700 : dsmsess.cpp ( 485): dsmQuerySessInfo: 
Server's ver/rel/lev = 5/2/2/0
07/14/05   14:07:39.700 : dsmsess.cpp ( 488): dsmQuerySessInfo: 
ArchiveRetentionProtection : No
07/14/05   14:07:39.700 : dsmsess.cpp ( 495): dsmQuerySessInfo EXIT: rc 
= >0<.
07/14/05   14:07:48.054 : dsmquery.cpp( 591): dsmBeginQuery ENTRY: 
dsmHandle=1 queryType: 1
07/14/05   14:07:48.055 : dsmquery.cpp( 630): dsmBeginQuery: function 
started fs= >/backup_afs_volume_dumps< hl= >/1121293604< ll= >/test.test<
07/14/05   14:07:48.055 : dsmquery.cpp(1441): BeginQueryBackup: node 
name used = >AFS-1.TMKLABS< owner = ><
07/14/05   14:07:48.056 : dsmquery.cpp( 766): dsmBeginQuery for Backup
07/14/05   14:07:48.056 : dsmquery.cpp( 788): dsmBeginQuery EXIT: rc = 
>0<.
07/14/05   14:07:48.056 : dsmnextq.cpp(1033): dsmGetNextQObj ENTRY: 
dsmHandle=1 dataBlkPtr: fe2f5f5c
07/14/05   14:07:48.056 : dsmnextq.cpp(1061): dsmGetNextQObj for 
qtTsmBackup
07/14/05   14:07:48.168 : cuqrepos.cpp(2935): cuGetBackQryResp: ver372 
server using BackQryRespEnhanced3
07/14/05   14:07:48.168 : cuqrepos.cpp(3216): ApiNetToAttrib: Major 
Version=7, Minor Version=9, Client Type=2
07/14/05   14:07:48.168 : cuqrepos.cpp(3255): ApiNetToAttrib: obj 
compressed: >NO< encrypt type :>NO< encryptAlg >UNKNOWN<
07/14/05   14:07:48.168 : dsmnextq.cpp(1693): apicuGetBackQryResp: 
owner >< Name fs=>/backup_afs_volume_dumps< hl=>/1121293604< ll=>/test.test< 
state >1< id hi:0 lo:1562439716
07/14/05   14:07:48.168 : dsmnextq.cpp(1266): d

Dominique Costantini/ITServices/Unicible is out of the office.

2005-07-14 Thread Dominique Costantini
I will be out of the office from 14.07.2005 to 18.07.2005.

I will reply to your message on my return. Check with GAD if needed. Thank you.

Re: TDP for Exchange 5.2.1.0 Policy clarification

2005-07-14 Thread Del Hoobler
Steve,

All backups of the same name (type) will have the same management class.
You cannot back up "certain" full backups and direct them to
a different management class. All previous full backups will get
rebound to the management class for that invocation. It works the same
way as the base file-level client.

COPY backups are named differently, and thus can be bound by name to
a different management class. You need to use COPY backups to bind
them to a different management class; otherwise the previous
full backups, which you think are being kept forever, won't be.
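
As a sketch, binding the COPY backups by name could be done with an include
statement in the client options. The pattern and management class name below
are illustrative only, and the commands are from memory, so check the Data
Protection for Exchange manual for the exact backup object naming and CLI
syntax at your level:

```
* Hypothetical dsm.opt fragment: bind COPY backups to a long-retention class
INCLUDE "\...\*copy" KEEPFOREVER

* Then run the weekly/monthly job as a COPY backup instead of a FULL, e.g.:
*   tdpexcc backup "Storage Group 1" copy
```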

Thanks,

Del




"ADSM: Dist Stor Manager"  wrote on 07/14/2005
04:10:15 PM:

> I have a separate command script for the weekly/monthly fulls that directs
> the data to a special mgmtclass.  I really want to avoid defining a 3rd
> nodename for these machines.
>
> What is your reason for using "COPY" backups instead of fulls?  Since both
> the expiring and non-expiring fulls are available to use for restore, why
> not commit the logs?
>
> Thanks,
> -steve
>
> -----Original Message-----
> From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of Del
> Hoobler
> Sent: Thursday, July 14, 2005 3:12 PM
> To: ADSM-L@VM.MARIST.EDU
> Subject: Re: [ADSM-L] TDP for Exchange 5.2.1.0 Policy clarification
>
> Steve,
>
> The Daily and Hourly settings look fine to me.
>
> How are you performing your monthly/weekly "fulls"?
> You should either make them "COPY" type backups and bind the "COPY" backups
> to a different management class with the policy settings
> NOLimit/NOLimit/NOLimit/NOLimit, or use a different NODENAME for those
> backups, and make the full backups use the policy settings
> NOLimit/NOLimit/NOLimit/NOLimit.
>
> Thanks,
>
> Del
>
>
> "ADSM: Dist Stor Manager"  wrote on 07/14/2005
> 12:01:07 PM:
>
> > TSM server 5.2.4.3
> >
> > We are changing some policies for our Exchange environment and after
> > reading the TDP user guide I am questioning if my previous assumptions
> > are correct.
> >
> > We were planning on using the following policy:
> >
> > Daily fulls expiring after 5 days
> > Hourly incrementals expiring after 3 days
> > Weekly or Monthly (dependent on each server's deleted item retention)
> > fulls that never expire
> >
> > Is this a valid policy, assuming that any full backup older than 3 days
> > is basically a "snapshot", and not usable for bringing data back to the
> > minute?
> >
> > Is this a valid backup copy group setting
> > (VerExist/VerDel/RetXtra/RetOnly):
> >
> >    Fulls - nl/nl/5/5
> >    Incr - nl/nl/3/3
> >
> > thanks.
> >
> > Steve Schaub
> Please see the following link for the BlueCross BlueShield of Tennessee
> E-mail disclaimer:  http://www.bcbst.com/email_disclaimer.shtm


Re: TDP for Exchange 5.2.1.0 Policy clarification

2005-07-14 Thread Steve Schaub
I have a separate command script for the weekly/monthly fulls that directs
the data to a special mgmtclass.  I really want to avoid defining a 3rd
nodename for these machines.

What is your reason for using "COPY" backups instead of fulls?  Since both
the expiring and non-expiring fulls are available to use for restore, why
not commit the logs?

Thanks,
-steve

-----Original Message-----
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of Del
Hoobler
Sent: Thursday, July 14, 2005 3:12 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] TDP for Exchange 5.2.1.0 Policy clarification

Steve,

The Daily and Hourly settings look fine to me.

How are you performing your monthly/weekly "fulls"?
You should either make them "COPY" type backups and bind the "COPY" backups
to a different management class with the policy settings
NOLimit/NOLimit/NOLimit/NOLimit, or use a different NODENAME for those
backups, and make the full backups use the policy settings
NOLimit/NOLimit/NOLimit/NOLimit.

Thanks,

Del



"ADSM: Dist Stor Manager"  wrote on 07/14/2005
12:01:07 PM:

> TSM server 5.2.4.3
>
> We are changing some policies for our Exchange environment and after
> reading the TDP user guide I am questioning if my previous assumptions
> are correct.
>
> We were planning on using the following policy:
>
> Daily fulls expiring after 5 days
> Hourly incrementals expiring after 3 days
> Weekly or Monthly (dependent on each server's deleted item retention)
> fulls that never expire
>
> Is this a valid policy, assuming that any full backup older than 3 days
> is basically a "snapshot", and not usable for bringing data back to the
> minute?
>
> Is this a valid backup copy group setting
> (VerExist/VerDel/RetXtra/RetOnly):
>
>    Fulls - nl/nl/5/5
>    Incr - nl/nl/3/3
>
> thanks.
>
> Steve Schaub
Please see the following link for the BlueCross BlueShield of Tennessee E-mail
disclaimer:  http://www.bcbst.com/email_disclaimer.shtm


Re: ANR0670E

2005-07-14 Thread fred johanson

Richard,

Right about the message number.  My eyes slipped during copying.  But here
are some logs:

First, export node definitions.

07/14/2005 14:04:21  ANR2017I Administrator FRED issued command: EXPORT NODE
  ssdlbc-pc16 do=desktop-support tos=itsm  (SESSION: 9472)

07/14/2005 14:04:22  ANR0617I EXPORT NODE: Processing completed with status
  SUCCESS. (SESSION: 9472, PROCESS: 42)

07/14/2005 14:04:22  ANR0986I Process 42 for EXPORT NODE running in the
  BACKGROUND processed 2 items for a total of 377 bytes
  with a completion state of SUCCESS at 14:04:22.
(SESSION:
  9472, PROCESS: 42)

Which produced this import on the receiver.


07/14/2005 14:04:21  ANR0984I Process 6 for IMPORT (from Server TSM)
started in
  the BACKGROUND at 14:04:21. (SESSION: 223, PROCESS: 6)
07/14/2005 14:04:21  ANR4711I IMPORT SERVER (DATES=ABSOLUTE REPLACEDEFS=NO
  MERGE=NO PREVIEW=NO) by administrator FRED from server
  TSM (Process 42) starting as process 6. (SESSION: 223,
  PROCESS: 6)
07/14/2005 14:04:21  ANR0610I IMPORT (from Server TSM) started by FRED as
  process 6. (SESSION: 223, PROCESS: 6)
07/14/2005 14:04:21  ANR0615I IMPORT (from Server TSM): Reading EXPORT NODE
  data from server TSM exported 07/14/05 14:04:21.
  (SESSION: 223, PROCESS: 6)
07/14/2005 14:04:21  ANR0635I IMPORT (from Server TSM): Processing node
  SSDLBC-PC16 in domain DESKTOP-SUPPORT. (SESSION: 223,
  PROCESS: 6)
07/14/2005 14:04:21  ANR0617I IMPORT (from Server TSM): Processing completed
  with status SUCCESS. (SESSION: 223, PROCESS: 6)

07/14/2005 14:04:21  ANR0629I IMPORT (from Server TSM): Copied 34 bytes of
  data. (SESSION: 223, PROCESS: 6)
07/14/2005 14:04:21  ANR0611I IMPORT (from Server TSM) started by FRED as
  process 6 has ended. (SESSION: 223, PROCESS: 6)
07/14/2005 14:04:21  ANR0988I Process 6 for IMPORT (from Server TSM)
running in
  the BACKGROUND processed 34 bytes with a completion
state
  of SUCCESS at 14:04:21. (SESSION: 223, PROCESS: 6)


Then the export with filespaces.

07/14/2005 14:05:54  ANR2017I Administrator FRED issued command: EXPORT NODE
  ssdlbc-pc16 do=desktop-support tos=itsm merge=yes
  filed=all  (SESSION: 9472)
07/14/2005 14:05:54  ANR0984I Process 44 for EXPORT NODE started in the
  BACKGROUND at 14:05:54. (SESSION: 9472, PROCESS: 44)
07/14/2005 14:05:54  ANR0609I EXPORT NODE started as process 44. (SESSION:
  9472, PROCESS: 44)
07/14/2005 14:05:54  ANR0402I Session 9483 started for administrator FRED
  (Server) (Memory IPC). (SESSION: 9472, PROCESS: 44)
07/14/2005 14:05:54  ANR0408I Session 9484 started for server FRED
  (AIX-RS/6000) (Tcp/Ip) for server registration.
  (SESSION: 9472, PROCESS: 44)
07/14/2005 14:05:54  ANR0610I EXPORT NODE started by FRED as process 44.
  (SESSION: 9472, PROCESS: 44)
07/14/2005 14:06:28  ANR8216W Error sending data on socket 112.  Reason 32.
  (SESSION: 9472, PROCESS: 44)
07/14/2005 14:06:28  ANR0569I Object not processed for SSDLBC-PC16:
  type=Backup, file space=\\ssdlbc-pc16\d$, object=\SYSTEM
  VOLUME
INFORMATION\_RESTORE{A10B9F26-F98C-4DB8-8457-B394-
  AD4954F2}\ RP43. (SESSION: 9472, PROCESS: 44)
07/14/2005 14:06:28  ANR8216W Error sending data on socket 112.  Reason 32.
  (SESSION: 9472, PROCESS: 44)
07/14/2005 14:06:28  ANR8216W Error sending data on socket 112.  Reason 32.
  (SESSION: 9472, PROCESS: 44)
07/14/2005 14:06:28  ANR0568W Session 9484 for admin FRED (AIX-RS/6000)
                      terminated - connection with client severed. (SESSION:
                      9472, PROCESS: 44)
07/14/2005 14:06:28  ANR8216W Error sending data on socket 112.  Reason 32.
  (SESSION: 9472, PROCESS: 44)
07/14/2005 14:06:28  ANR0794E EXPORT NODE: Processing terminated abnormally -
  error accessing data storage. (SESSION: 9472, PROCESS:
  44)
07/14/2005 14:06:28  ANR0891I EXPORT NODE: Copied 1 optionset definitions.
  (SESSION: 9472, PROCESS: 44)
07/14/2005 14:06:28  ANR0626I EXPORT NODE: Copied 1 node definitions. (SESSION:
  9472, PROCESS: 44)
07/14/2005 14:06:28  ANR0627I EXPORT NODE: Copied 4 file spaces 0 archive
  files, 283 backup files, and 0 space managed files.
  (SESSION: 9472, PROCESS: 44)
07/14/2005 14

Re: TDP for Exchange 5.2.1.0 Policy clarification

2005-07-14 Thread Del Hoobler
Steve,

The Daily and Hourly settings look fine to me.

How are you performing your monthly/weekly "fulls"?
You should either make them "COPY" type backups and
bind the "COPY" backups to a different management class
with the policy settings with NOLimit/NOLimit/NOLimit/NOLimit
or use a different NODENAME for those backups, and make the full
backups use the policy settings with NOLimit/NOLimit/NOLimit/NOLimit

Thanks,

Del



"ADSM: Dist Stor Manager"  wrote on 07/14/2005
12:01:07 PM:

> TSM server 5.2.4.3
>
> We are changing some policies for our Exchange environment and after
reading
> the TDP user guide I am questioning if my previous assumptions are
correct.
>
> We were planning on using the following policy:
>
> Daily fulls expiring after 5 days
> Hourly incrementals expiring after 3 days
> Weekly or Monthly (dependant on each server's deleted item retention)
fulls
> that never expire
>
> Is this a valid policy, assuming that any full backup older than 3 days
is
> basically a "snapshot", and not usable for bringing data back to the
minute?
>
> Is this a valid backup copy group setting
(VerExist/VerDel/RetXtra/RetOnly):
>
>Fulls - nl/nl/5/5
>Incr - nl/nl/3/3
>
> thanks.
>
> Steve Schaub


Re: ANR0670E

2005-07-14 Thread Richard Sims

Fred - That quoted message number is not consistent with known messages:
I think you mean ANR0687E; double-check that.
Could you paste in the actual message, and indicate which server it is
showing up on - sender or receiver?
There are no other messages in EITHER server's Activity Log?
Seeing your actual command may help.
Does a preview type invocation fail also?
I would check the database and recovery log capacity on both systems.
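
A sketch of that checklist as admin commands (syntax is from memory of the
5.x administrative reference, and ITSM is the target server name taken from
the original post, so verify both against your environment):

```
/* On the source server: does a preview-type export also fail? */
export node ssdlbc-pc16 filedata=all toserver=itsm previewimport=yes

/* On both servers: check database and recovery log headroom */
query db format=detailed
query log format=detailed
```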

You may have to refer this to TSM Support, ultimately.

   Richard Sims

On Jul 14, 2005, at 1:47 PM, fred johanson wrote:


Doing a server to server export, 5.2.2.5 to 5.3.1.2, the process dies
almost immediately, with ANR0670E, Transaction failure - could not
commit
DB transaction.  I've tried 2 different nodes, with a different
filespace
mix, but with the same results.  Export to another machine, also
5.3.1.2,
does work.  Server definitions and admin definitions are
identical.  Help
for ANR0670E says to look for preceding messages, but there are none.

Anyone seen this?



Re: Long term data retention for retired clients

2005-07-14 Thread Prather, Wanda
Biggest problem I have with using EXPORT or BACKUPSET, is that people
most likely are going to ask for a partial restore.  And they probably
aren't going to remember exactly what the directory structure &
filenames were.  Two years from now, NO ONE will have any idea what was
really on that EXPORT tape.  And you can't hunt for it effectively
unless all the data is still in the DB.

So what I've done as a compromise is use SQL SELECT to pull a list of
all the file names/backup dates for a retired client from the TSM DB
into a flat file, then do the EXPORT and delete the filespaces.  The
flat file remains around and can be searched using ordinary tools to
figure out what is on the EXPORT tape.
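
A sketch of that kind of extract via the administrative client (the admin ID,
password, and node name are placeholders; the BACKUPS table can be large, so
expect the select to take a while):

```
dsmadmc -id=admin -password=secret -commadelimited \
  "select filespace_name, hl_name, ll_name, backup_date \
   from backups where node_name='RETIREDNODE'" > retirednode_backups.csv
```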

Wanda Prather
"I/O, I/O, It's all about I/O"  -(me)



-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of
Allen S. Rout
Sent: Thursday, July 14, 2005 1:49 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: Long term data retention for retired clients


==> On Thu, 14 Jul 2005 10:27:49 +0100, John Naylor
<[EMAIL PROTECTED]> said:

> I have considered various approaches
> 1) Export
> 2) Backup set
> 3) Create a new domain for retired clients which have the long term
> retention requirement

> I see export and backup sets as reducing database overhead, but being less
> easy to track and rather unfriendly if you just need to get a subset of
> the data back

Export yes, backupset no; [ see below ]


> The new domain would retain great visibility for the data and allow easy
> access to subsets of the data, but you would still have the database
> overhead.

Yes, but in the long term this overhead devolves to just space.  For example,
if everything in the node is ACTIVE, I don't believe that it represents much
of an e.g. expiration hit.  So there's an incrementally (hah) larger full DB
backup, and more space on disk for the DB, but not a lot of day-to-day DB
overhead.

> Does the choice just depend on how often in reality you will need to get
> the data back?

I'd say that, and how frequently you want to do the retention thing, for how
long.

Off the top of my head, if keeping the bits and pieces around would add up to,
say, a third of my [DB space / data / whatever], I'd consider doing something
nearline with it.


[below]

Keep in mind:  you can restore from a backupset stored on the server; this
need only be a little less convenient than restorations from online data.
Gedankenexperiment:

Say you want to deal with TheNode:

rename node TheNode OLD-2005-07-12-13-TheNode

   [ in case you ever want to use TheNode name again ]

GEN BACKUPSET OLD-2005-07-12-13-TheNode Terminal devclass=foobar RET=NOLimit

which uses tapes FOO1 and FOO2.

Then you DEL FILESPACE OLD-2005-07-12-13-TheNode *

and

CHECKOUT LIBVOLUME FOOLIB FOO1
CHECKOUT LIBVOLUME FOOLIB FOO2

At this point, you've got a -permanent- record of the state of TheNode, at the
cost of a few records in the node and backupset tables. Of course, you have an
increased exposure to media failure: no copypools for backupsets.  Anyway, to
restore from it all you have to do is check the tapes back in and issue a

dsmc restore backupset Terminal -loc=server [sourcespec] [destspec]

This is going to be a much less efficient restore than the online one, but
only in wall clock time and tape use, not in human skull sweat.


Plus, if someone gets crotchety about the archive, you can hand them the
checked out tapes and tell them to get their own LTO3. (heh)


- Allen S. Rout


Re: Long term data retention for retired clients

2005-07-14 Thread Allen S. Rout
==> On Thu, 14 Jul 2005 08:45:11 -0400, Richard Rhodes <[EMAIL PROTECTED]> said:


> If there was one thing I really wish in all this was a comments field.  The
> only place we found to put comments about a node is in the contacts field.
> I wish there was another field where we could enter comments.

AMEN, brother.  Preach it!

> I am interested in how others handle this also.

Heh, I've been thinking about a white paper on just the topic:  "What they
left out, and what I did about it". I may just write it.

The short version is: I built an XML 'application' (dialect) to hold a bunch
of data about my servers, schedules, domains, storage pools and nodes.  I
generate all of my automation scripts by distilling that one big (dang, it's
50K now) file, and get out of it all the normal maintenance scripts and
schedules for my 10 servers, chargeback accounting, and trending data.

Oh, and my automatically-generated DR-restore-my-TSM-server shell scripts. :)

Most of my needs could have been filled with a ~1K "comments" field on:

nodes
domains
filespaces
stgpools



- Allen S. Rout


Re: Long term data retention for retired clients

2005-07-14 Thread Allen S. Rout
==> On Thu, 14 Jul 2005 10:27:49 +0100, John Naylor <[EMAIL PROTECTED]> said:

> I have considered various approaches
> 1) Export
> 2) Backup set
> 3) Create a new domain for retired clients which have the long term
> retention requirement

> I see export and backup sets as reducing database overhead, but being less
> easy to track and rather unfriendly if you just need to get a subset of
> the data back

Export yes, backupset no; [ see below ]


> The new domain would retain great visibility for the data and allow easy
> access to subsets of the data, but you would still have the database
> overhead.

Yes, but in the long term this overhead devolves to just space.  For example,
if everything in the node is ACTIVE, I don't believe that it represents much
of an e.g. expiration hit.  So there's an incrementally (hah) larger full DB
backup, and more space on disk for the DB, but not a lot of day to day DB
overhead.

> Does the choice just depend on how often in reality you will need to get
> the data back?

I'd say that, and how frequently you want to do the retention thing, for how
long.

Off the top of my head, if keeping the bits and pieces around would add up to,
say, a third of my [DB space / data / whatever] I'd consider doing something
nearline with it.


[below]

Keep in mind:  you can restore from a backupset stored on the server; this
need only be a little less convenient than restorations from online data.
Gedankenexperiment:

Say you want to deal with TheNode:

rename node TheNode OLD-2005-07-12-13-TheNode

   [ in case you ever want to use TheNode name again ]

GEN BACKUPSET OLD-2005-07-12-13-TheNode Terminal devclass=foobar RET=NOLimit

which uses tapes FOO1 and FOO2.

Then you DEL FILESPACE OLD-2005-07-12-13-TheNode *

and

CHECKOUT LIBVOLUME FOOLIB FOO1
CHECKOUT LIBVOLUME FOOLIB FOO2

At this point, you've got a -permanent- record of the state of TheNode, at the
cost of a few records in the node and backupset tables. Of course, you have an
increased exposure to media failure: no copypools for backupsets.  Anyway, to
restore from it all you have to do is check the tapes back in, and
issue a

dsmc restore backupset Terminal -loc=server [sourcespec] [destspec]

This is going to be a much less efficient restore than the online one, but
only in wall clock time and tape use, not in human skull sweat.


Plus, if someone gets crotchety about the archive, you can hand them the
checked out tapes and tell them to get their own LTO3. (heh)


- Allen S. Rout


ANR0670E

2005-07-14 Thread fred johanson

Doing a server to server export, 5.2.2.5 to 5.3.1.2, the process dies
almost immediately, with ANR0670E, Transaction failure - could not commit
DB transaction.  I've tried 2 different nodes, with a different filespace
mix, but with the same results.  Export to another machine, also 5.3.1.2,
does work.  Server definitions and admin definitions are identical.  Help
for ANR0670E says to look for preceding messages, but there are none.

Anyone seen this?



Fred Johanson
ITSM Administrator
University of Chicago
773-702-8464


TDP for Exchange 5.2.1.0 Policy clarification

2005-07-14 Thread Steve Schaub
TSM server 5.2.4.3

We are changing some policies for our Exchange environment and after reading
the TDP user guide I am questioning if my previous assumptions are correct.

We were planning on using the following policy:

Daily fulls expiring after 5 days
Hourly incrementals expiring after 3 days
Weekly or Monthly (dependent on each server's deleted item retention) fulls
that never expire

Is this a valid policy, assuming that any full backup older than 3 days is
basically a "snapshot", and not usable for bringing data back to the minute?

Is this a valid backup copy group setting (VerExist/VerDel/RetXtra/RetOnly):

   Fulls - nl/nl/5/5
   Incr - nl/nl/3/3

thanks.

Steve Schaub
Systems Engineer, WNI
BlueCross BlueShield of Tennessee
423-752-6574 (desk)
423-785-7347 (cell)

Please see the following link for the BlueCross BlueShield of Tennessee E-mail
disclaimer:  http://www.bcbst.com/email_disclaimer.shtm


Re: Long term data retention for retired clients

2005-07-14 Thread Nicholas Cassimatis
Remember - moving the nodes to a new domain does nothing to rebind the data
to new management classes - that only happens during an actual backup.  So
the data will stick around for as long as intended, except for the last
active version of the files.  Those you have to delete manually.  One
practice I've done is to rename the node, putting the date when it can be
finally deleted to the beginning of the name, hence "NODENAME" becomes
"051231_NODENAME".  Moving this node to a new domain, with a descriptive
name such as RETIRED, you can now easily see what nodes are able to be
deleted, and when.  The simple version:  "query node domain=retired" or the
more complex: "select node_name from nodes where domain='RETIRED' order by
node_name"

Nick Cassimatis
email: [EMAIL PROTECTED]

Re: 3584 I/O port query

2005-07-14 Thread Mark Bertrand
No Matt, I agree with David's approach: never leave anything to humans that
scripts can do better.

Here is what I posted back in December on the subject, with help from other 
members. I am working with Windows, but I'm sure you can modify it for AIX.

Feel free to write me directly if you have any questions.
Good Luck.

--

OK here is what I found in my adventures to automate the status of my 10
slot I/O of my 3584.

I am no expert, just wanted to share. Also, my environment is W2K, TSM
5.1.6.3.
I have read all the postings on this subject and here is what I found. First,
Richard Cowen had posted the best document associated with this tool, here:
http://msgs.adsm.org/cgi-bin/get/adsm0202/767.html period. Richard credits
Joel Fuhrman of washington.edu for the document.

In my experience I could not get return_lib_inventory to work. I tried
everything with no luck.
SYNTAX: return_lib_inventory dno=<n> sno=<n> eeno=<n> tno=<n>
e.g. return_lib_inventory dno=2 sno=3 eeno=1 tno=1
Where dno = number of drives, sno = number of storage slots, eeno = number of
entry/exit ports, and tno = number of transport elements.

So I just used return_lib_inventory_all and used grep to pull out only the
I/O slot info, no big deal.

OK, so here is the meat of the script, I am sure any of you reading this can
write some cool stuff around this to meet your needs.

First, to launch lbtest in batch mode, use -dev for device name input, which I
found using the TSM MMC plugin on my Windows server under TSM Device Driver,
Reports, Device Information.
Also use -f for the batch part of the script; this is what tells lbtest
what to do once it is launched. Don't worry about specifying an output file:
it will automatically use lbtest.out in the launched-from directory. This is
also great for troubleshooting syntax problems.

cd c:\Program Files\tivoli\tsm\Server
lbtest -dev lb0.1.0.1 -f lbtest.in

Here is the .in file lbtest.in:
command open $D
command return_elem_count
command return_lib_inventory_all
command close

Even though you specify the device, you still need to open it using the
"command open" directive. I used $D, which is an acceptable variable. Don't
forget to close when complete: if the script fails, you will need to enter
lbtest in manual mode and close the device. Also, if you make any changes to
your .in file, you must completely exit the lbtest app before it will read the
changes.

That's it. Put that in a batch file, add a couple of redirects (>) to an out
file, a few greps, awks, and if statements, plus a command-line mail utility,
and you can do some pretty cool stuff.

I will now have it check the I/O for tapes before checkout, and also run a
little batch file on a schedule to send me an email when it is full. This is
great for those of us admins who are our whole TSM shop.

Let me know directly if you need more detail, but I think this is just about
it. I know lbtest can do much more, but this met my needs; the link to the
document from Richard and Joel was a big help.

Thanks all for getting me pointed in the right direction.
Mark Bertrand
--




-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] Behalf Of
David Longo
Sent: Thursday, July 14, 2005 9:47 AM
To: ADSM-L@VM.MARIST.EDU
Subject: 3584 I/O port query


I have TSM server 5.2.3.0 and a 3584 library with a 10-slot I/O port.  Have
Atape driver 8.4.8.0, going to 9.3.5.0 next week and TSM server 5.2.5.0 (for
new LTO3 drives).  Server platform is AIX 5.2.

My problem is that my operators do not always process tapes back in correctly
through DRM.  They will miss the "move drmedia" command on a returning tape,
then try to check it in.  Of course they don't see the message on the TSM
Console.

A tape will be left in the I/O port, and the next day when they check out 10
tapes, one is left in the library as there are then only 9 free slots in the
I/O.

There are a couple of other ways tapes can get left or forgotten.

I figure someone has a script to query the I/O port (probably with tapeutil)
and send a page/email to OPS if the count is greater than 0.
I can do the page part; I just need to know how to query in a script
and determine whether there are tapes in the I/O port.

Thanks,

David B. Longo
System Administrator
Health First, Inc.
3300 Fiske Blvd.
Rockledge, FL 32955-4305
PH  321.434.5536
Pager  321.634.8230
Fax:321.434.5509
[EMAIL PROTECTED]


##
This message is for the named person's use only.  It may 
contain confidential, proprietary, or legally privileged 
information.  No confidentiality or privilege is waived or 
lost by any mistransmission.  If you receive this message 
in error, please immediately delete it and all copies of it 
from your system, destroy any hard copies of it, and notify 
the sender.  You must not, directly or indirectly, use, 
disclose, distribute, print, or copy any p

Re: Long term data retention for retired clients

2005-07-14 Thread Thomas Denier
> If there was one thing I really wished for in all this, it was a comments
> field. The only place we found to put comments about a node is the contact
> field. I wish there were another field where we could enter comments.

The 'define machine' and 'insert machine' commands can be used to store
large amounts of text data about nodes. I am not sure about the
licensing requirements for these commands; they may be part of the
Disaster Recovery Manager feature.
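For example (the node name and text are hypothetical, and the exact syntax should be checked against the Administrator's Reference):

```
define machine someserver description="Retired file server; owner details in ticket 1234"
insert machine someserver 1 characteristics="Retired 2005-06-30; keep archives until 2012"
```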


Re: 3584 I/O port query

2005-07-14 Thread Warren, Matthew (Retail)
I tend to approach these situations by trying to ensure the correct
procedures are followed. Otherwise you end up putting various checks and
steps in to cope with a failing part (humans) of the system. Could the
fallibility of your operators be addressed? Simple tools like signed-off
checklists have worked wonders for me in the past.


Matt.

[EMAIL PROTECTED]
[EMAIL PROTECTED]
http://tsmwiki.com/tsmwiki/MatthewWarren



-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of
Loo, M. van 't - SPLXM
Sent: Thursday, July 14, 2005 3:54 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: 3584 I/O port query

The way I did it was:

At the end of a day, I did a checkin of the volumes in the I/O and an update
of the access to readwrite, so the tape can be used another night.
So instead of leaving the partly filled tapes in the I/O, I fill them with
more data and check them out the next morning with a new "move drmedia".

Regards,
Maurice

-Original Message-
From:   David Longo
[mailto:[EMAIL PROTECTED]
Sent:   donderdag 14 juli 2005 16:47
To: ADSM-L@VM.MARIST.EDU
Subject:[ADSM-L] 3584 I/O port query

I have TSM server 5.2.3.0 and a 3584 library with a 10-slot I/O port.  Have
Atape driver 8.4.8.0, going to 9.3.5.0 next week and TSM server 5.2.5.0 (for
new LTO3 drives).  Server platform is AIX 5.2.

My problem is that my operators do not always process tapes back in correctly
through DRM.  They will miss the "move drmedia" command on a returning tape,
then try to check it in.  Of course they don't see the message on the TSM
Console.

A tape will be left in the I/O port, and the next day when they check out 10
tapes, one is left in the library as there are then only 9 free slots in the
I/O.

There are a couple of other ways tapes can get left or forgotten.

I figure someone has a script to query the I/O port (probably with tapeutil)
and send a page/email to OPS if the count is greater than 0.
I can do the page part; I just need to know how to query in a script
and determine whether there are tapes in the I/O port.

Thanks,

David B. Longo
System Administrator
Health First, Inc.
3300 Fiske Blvd.
Rockledge, FL 32955-4305
PH  321.434.5536
Pager  321.634.8230
Fax:321.434.5509
[EMAIL PROTECTED]





**
For information, services and offers, please visit our web site:
http://www.klm.com. This e-mail and any attachment may contain
confidential and privileged material intended for the addressee only. If
you are not the addressee, you are notified that no part of the e-mail
or any attachment may be disclosed, copied or distributed, and that any
other action related to this e-mail or attachment is strictly
prohibited, and may be unlawful. If you have received this e-mail by
error, please notify the sender immediately by return e-mail, and delete
this message. Koninklijke Luchtvaart Maatschappij NV (KLM), its
subsidiaries and/or its employees shall not be liable for the incorrect
or incomplete transmission of this e-mail or any attachments, nor
responsible for any delay in receipt.
**



___ Disclaimer Notice __
This message and any attachments are confidential and should on

Re: 3584 I/O port query

2005-07-14 Thread Loo, M. van 't - SPLXM
The way I did it was:

At the end of a day, I did a checkin of the volumes in the I/O and an update
of the access to readwrite, so the tape can be used another night.
So instead of leaving the partly filled tapes in the I/O, I fill them with
more data and check them out the next morning with a new "move drmedia".

Regards,
Maurice

-Original Message-
From:   David Longo [mailto:[EMAIL PROTECTED]
Sent:   donderdag 14 juli 2005 16:47
To: ADSM-L@VM.MARIST.EDU
Subject:[ADSM-L] 3584 I/O port query

I have TSM server 5.2.3.0 and a 3584 library with a 10-slot I/O port.  Have
Atape driver 8.4.8.0, going to 9.3.5.0 next week and TSM server 5.2.5.0 (for
new LTO3 drives).  Server platform is AIX 5.2.

My problem is that my operators do not always process tapes back in correctly
through DRM.  They will miss the "move drmedia" command on a returning tape,
then try to check it in.  Of course they don't see the message on the TSM
Console.

A tape will be left in the I/O port, and the next day when they check out 10
tapes, one is left in the library as there are then only 9 free slots in the
I/O.

There are a couple of other ways tapes can get left or forgotten.

I figure someone has a script to query the I/O port (probably with tapeutil)
and send a page/email to OPS if the count is greater than 0.
I can do the page part; I just need to know how to query in a script
and determine whether there are tapes in the I/O port.

Thanks,

David B. Longo
System Administrator
Health First, Inc.
3300 Fiske Blvd.
Rockledge, FL 32955-4305
PH  321.434.5536
Pager  321.634.8230
Fax:321.434.5509
[EMAIL PROTECTED]







3584 I/O port query

2005-07-14 Thread David Longo
I have TSM server 5.2.3.0 and a 3584 library with a 10-slot I/O port.  Have
Atape driver 8.4.8.0, going to 9.3.5.0 next week and TSM server 5.2.5.0 (for
new LTO3 drives).  Server platform is AIX 5.2.

My problem is that my operators do not always process tapes back in correctly
through DRM.  They will miss the "move drmedia" command on a returning tape,
then try to check it in.  Of course they don't see the message on the TSM
Console.

A tape will be left in the I/O port, and the next day when they check out 10
tapes, one is left in the library as there are then only 9 free slots in the
I/O.

There are a couple of other ways tapes can get left or forgotten.

I figure someone has a script to query the I/O port (probably with tapeutil)
and send a page/email to OPS if the count is greater than 0.
I can do the page part; I just need to know how to query in a script
and determine whether there are tapes in the I/O port.
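For what it's worth, one possible shape for such a check (the device name, the tapeutil inventory wording, and the mail command are all assumptions here; run "tapeutil -f /dev/smc0 inventory" by hand first and adjust the pattern to what your Atape level actually prints):

```shell
#!/bin/sh
# Hypothetical I/O-port check for a 3584 via Atape's tapeutil on AIX.

SMC=/dev/smc0   # library medium changer device (assumption)

# Count occupied entry/exit slots: an "Import/Export Station" stanza whose
# Volume Tag line carries a real label is treated as occupied.
count=$(tapeutil -f "$SMC" inventory 2>/dev/null |
    awk '/Import\/Export Station Address/ {inslot=1}
         inslot && /Volume Tag/ {if ($NF ~ /[A-Za-z0-9]/ && $NF !~ /^\.+$/) n++; inslot=0}
         END {print n+0}')

if [ "$count" -gt 0 ]; then
    echo "3584: $count tape(s) still in the I/O port" |
        mail -s "3584 I/O port not empty" ops@example.com
fi
```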

Thanks,

David B. Longo
System Administrator
Health First, Inc.
3300 Fiske Blvd.
Rockledge, FL 32955-4305
PH  321.434.5536
Pager  321.634.8230
Fax:321.434.5509
[EMAIL PROTECTED]




Re: TSM Client 5.3 Error ANS1028S when trying to backup directories with "EDC8134I Stale file handle"

2005-07-14 Thread Andrew Raibeck
What operating system are you using? I would recommend that you look up
the meaning of the ANS1028S error message and take the action suggested
therein, as this sounds like a possible bug to me: on the surface, I would
think that TSM should be able to handle this situation more gracefully.

Regards,

Andy

Andy Raibeck
IBM Software Group
Tivoli Storage Manager Client Development
Internal Notes e-mail: Andrew Raibeck/Tucson/[EMAIL PROTECTED]
Internet e-mail: [EMAIL PROTECTED]

The only dumb question is the one that goes unasked.
The command line is your friend.
"Good enough" is the enemy of excellence.

"ADSM: Dist Stor Manager"  wrote on 2005-07-14
01:36:33:

> Hi all
>
> for the following question a user gave me an answer:
>
> Where a file server is that cranky and can't be fixed, I would take
> such file systems out of TSM scheduled backups and instead
> conditionally run them from a client OS script which performed tests
> on the viability of the served file system before unleashing a dsmc
> Incremental on it. I would not expect TSM client software to contend
> with unreliable file systems.
>
> Richard Sims
>
>
***
>
> I mean, yes, I could just exclude those file spaces if I had known
> in advance.
>
> My question is: there are occasionally such file systems (I don't
> know what developers are doing); however, it is hard to predict.
>
> But still my question:
>
> How can I avoid that dsmc simply stops further backups? Instead,
> how can I instruct dsmc not to back up such filespaces, or how is it
> possible to create some sort of "exclude" list? Nevertheless my
> question still remains: why does dsmc produce an internal program
> error? I was able to scan through all directories with a C or Java
> program. When there was a stale file handle my programs continued to
> scan the next file handle.
>
> Also, is there any PTF? What is the statement of IBM?
>
> Thanks for your help.
> Regards,
> Werner Nussbaumer
>
>
***
>
***
>
***
>
> Original Message:
>
>
> > We have directories for which "ls" displays:
> >
> > [EMAIL PROTECTED]: /data/test:==>ls -al
> > ls: ./TST: EDC8134I Stale file handle.
> > ls: ./TST.B0.AHO: EDC8134I Stale file handle.
> > ...
> > ...
> >
> > When dsmc tries to backup this data, then the following error
> message is displayed:
> >
> >
>

> > [EMAIL PROTECTED]: /home/user:==>dsmc
> > IBM Tivoli Storage Manager
> > Command Line Backup/Archive Client Interface
> >   Client Version 5, Release 3, Level 0.0
> >   Client date/time: 12.07.2005 13:15:43
> > (c) Copyright by IBM Corporation and other(s) 1990, 2004. All
> Rights Reserved.
> >
> > Node Name: IMAGE1
> > Session established with server ADSM: MVS
> >   Server Version 5, Release 1, Level 9.0
> >   Server date/time: 12.07.2005 13:15:43  Last access: 12.07.2005 13:15:22
> >
> > tsm> incr /data/test/*
> >
> > Incremental backup of volume '/data/test/*'
> > ANS1999E Incremental processing of '/data/test/*' stopped.
> >
> >
> > Total number of objects inspected:1
> > Total number of objects backed up:0
> > Total number of objects updated:  0
> > Total number of objects rebound:  0
> > Total number of objects deleted:  0
> > Total number of objects expired:  0
> > Total number of objects failed:   0
> > Total number of bytes transferred:0  B
> > Data transfer time:0.00 sec
> > Network data transfer rate:0.00 KB/sec
> > Aggregate data transfer rate:  0.00 KB/sec
> > Objects compressed by:0%
> > Elapsed processing time:   00:00:03
> > ANS1028S An internal program error occurred.
> >
>

> > dsmerror.log:
> >
> > 12.07.2005 13:15:51 TransErrno: Unexpected error from GetFSInfo: statfs, errno = 1134
> > 12.07.2005 13:15:53 TransErrno: Unexpected error from lstat, errno = 1134
> > 12.07.2005 13:15:53 PrivIncrFileSpace: Received rc=131 from fioGetDirEntries:  /data  /test
> > 12.07.2005 13:15:53 ANS1999E Incremental processing of '/data/test/*' stopped.
> >
> > 12.07.2005 13:15:53 ANS1028S An internal program error occurred.
> >
>

> >
> > How it is possible to avoid that dsmc stops processing the backup
> when it tries to backup a stale file?>
> >
> > Thanks for any help,
> > regards
> > Werner Nussbaumer


Re: Long term data retention for retired clients

2005-07-14 Thread Richard Sims

On Jul 14, 2005, at 8:45 AM, Richard Rhodes wrote:


If there was one thing I really wish in all this was a comments field.  The
only place we found to put comments about a node is in the contacts field.
I wish there was another field where we could enter comments.
...


Richard -

With a "decommissioned" node, you could add 200 chars of annotation
in the node "URL" field, which accepts arbitrary text.
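For example (node name and annotation text hypothetical):

```
update node someserver url="Retired 2005-07-01; keep archives until 2012; was owned by J. Smith"
```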

  Richard Sims


Re: Long term data retention for retired clients

2005-07-14 Thread Richard Rhodes
We have hundreds of retired servers, both from a server consolidation
project and general server rollover.  We decided we required access to the
backups and archives on the retired servers, so exports and backup sets
wouldn't work.  Also, by keeping the data in the normal pools we keep
redundancy (primary and copy pool) and DR issues covered.

We thought about moving the nodes into their own domain, but decided not to
because most domains use default management classes and we weren't sure how
to keep the policies straight in one domain holding retired servers from
many domains.  We thought about a separate retired domain for each
production domain, and rejected it.  Finally we decided to simply rename
the nodes.  All retired nodes are given a prefix of "zzrt_".  For example,
node "someserver"  becomes retired node "zzrt_someserver".  Normal policies
are allowed to expire inactive versions.  We put a comment in the contact
field for when the active versions can be deleted and the node removed.

This is anything but ideal . . . but I think we found that this was true
for any method.  Many if not most of the SQL queries we run against the TSM
DB have logic to exclude any nodes that are like 'zzrt_%'.
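From the admin command line, the convention looks roughly like this (names and dates hypothetical; note that in SQL LIKE the underscore is itself a single-character wildcard, which is why the pattern below stops at the prefix letters):

```
rename node someserver zzrt_someserver
update node zzrt_someserver contact="Retired 2005-07-14; delete active data after 2012-07-14"

/* typical reporting query with retired nodes filtered out */
select node_name from nodes where node_name not like 'ZZRT%'
```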

I think a cleaner method would be a separate tsm server for just retired
nodes . . . . . but that has obvious drawbacks also.

If there was one thing I really wished for in all this, it was a comments
field.  The only place we found to put comments about a node is the contact
field.  I wish there were another field where we could enter comments.

I am interested in how others handle this also.

Rick





From: John Naylor <[EMAIL PROTECTED]HERN.CO.UK>
Sent by: "ADSM: Dist Stor Manager"
To: ADSM-L@VM.MARIST.EDU
Date: 07/14/2005 05:27 AM
Subject: Long term data retention for retired clients






Hi out there,
Just wondering what the consensus is on the best way to retain TSM client
data that has to be kept for many years (legal requirement) after the
client box is retired.
I have considered various approaches:
1) Export
2) Backup set
3) Create a new domain for retired clients which have the long-term
retention requirement

I see export and backup sets as reducing database overhead, but being less
easy to track and rather unfriendly if you just need to get a subset of
the data back.
The new domain would retain great visibility for the data and allow easy
access to subsets of the data, but you would still have the database
overhead.
Does the choice just depend on how often, in reality, you will need to get
the data back?
Thoughts appreciated,
John







**

The information in this E-Mail is confidential and may be legally
privileged. It may not represent the views of Scottish and Southern Energy
Group.

It is intended solely for the addressees. Access to this E-Mail by anyone
else is unauthorised. If you are not the intended recipient, any
disclosure, copying, distribution or any action taken or omitted to be
taken in reliance on it, is prohibited and may be unlawful. Any
unauthorised recipient should advise the sender immediately of the error in
transmission. Unless specifically stated otherwise, this email (or any
attachments to it) is not an offer capable of acceptance or acceptance of
an offer and it does not form part of a binding contractual agreement.

Scottish Hydro-Electric, Southern Electric, SWALEC, S+S and SSE Power
Distribution are trading names of the Scottish and Southern Energy Group.

**




-
The information contained in this message is intended only for the
personal and confidential use of the recipient(s) named above. If the
reader of this message is not the intended recipient or an agent
responsible for delivering it to the intended recipient, you are hereby
notified that you have received this document in error and that any
review, dissemination, distribution, or copying of this message is
strictly prohibited. If you have received this communication in error,
please notify us immediately, and delete the original message.


Re: Long term data retention for retired clients

2005-07-14 Thread Richard Sims

On Jul 14, 2005, at 5:27 AM, John Naylor wrote:


I have considered various approaches
1) Export
2) Backup set
3) Create a new domain for retired clients which have the long term
retention requirement



Hi, John -

Another possibility for your list is TSM for Data Retention.
This would be more for a site where long-term, assured retention is
a big, ongoing thing, as it's a non-trivial implementation.

   Richard Sims


Re: Backup

2005-07-14 Thread Richard Sims

On Jul 14, 2005, at 2:01 AM, Akash Jain wrote:



I am in a very strange situation where my Oracle backup sometimes completes
in 1 hour and sometimes takes only 16 minutes for the same data.



You may have encountered an illustration of the value of SAN versus
shared LAN, where all manner and volume of traffic may appear on
highly accessible LANs, slowing throughput for your transfer in
amongst all the other data on the wire.

   Richard Sims


Re: Long term data retention for retired clients

2005-07-14 Thread John Naylor
Further to my query: I have not mentioned archive, as this is not a
function that we regularly use, but would it be a good candidate for
retired clients? And can you archive a whole drive, i.e. D:?
thanks,
John
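You can archive a whole drive from the client command line; a hypothetical example (the management class name is invented, and retention comes from its archive copy group):

```
dsmc archive D:\* -subdir=yes -archmc=LONGTERM -description="someserver final archive"
```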



From: John Naylor/HAV/SSE
Date: 14/07/2005 10:27
To: "ADSM: Dist Stor Manager"
Subject: Long term data retention for retired clients





Hi out there,
Just wondering what the consensus is on the best way to retain TSM client
data that has to be kept for many years (legal requirement) after the
client box is retired.
I have considered various approaches:
1) Export
2) Backup set
3) Create a new domain for retired clients which have the long-term
retention requirement

I see export and backup sets as reducing database overhead, but being less
easy to track and rather unfriendly if you just need to get a subset of
the data back.
The new domain would retain great visibility for the data and allow easy
access to subsets of the data, but you would still have the database
overhead.
Does the choice just depend on how often, in reality, you will need to get
the data back?
Thoughts appreciated,
John










Re: Long term data retention for retired clients

2005-07-14 Thread Iain Barnetson
I'd be interested in the same info as regards NetApp client data, i.e.
NDMP dump backups.
Iain

 

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of
John Naylor
Sent: 14 July 2005 10:28
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] Long term data retention for retired clients

Hi out there,
Just wondering what the consensus is on the best way to retain TSM
client data that has to be kept for many years (legal requirement) after
the client box is retired.
I have considered various approaches:
1) Export
2) Backup set
3) Create a new domain for retired clients which have the long-term
retention requirement

I see export and backup sets as reducing database overhead, but being less
easy to track and rather unfriendly if you just need to get a subset of
the data back.
The new domain would retain great visibility for the data and allow easy
access to subsets of the data, but you would still have the database
overhead.
Does the choice just depend on how often, in reality, you will need to get
the data back?
Thoughts appreciated,
John







--
This e-mail, including any attached files, may contain confidential and 
privileged information for the sole use of the intended recipient.  Any review, 
use, distribution, or disclosure by others is strictly prohibited.  If you are 
not the intended recipient (or authorized to receive information for the 
intended recipient), please contact the sender by reply e-mail and delete all 
copies of this message.


Long term data retention for retired clients

2005-07-14 Thread John Naylor
Hi out there,
Just wondering what the consensus is on the best way to retain TSM client
data that has to be kept for many years (legal requirement) after the
client box is retired.
I have considered various approaches:
1) Export
2) Backup set
3) Create a new domain for retired clients which have the long-term
retention requirement

I see export and backup sets as reducing database overhead, but being less
easy to track and rather unfriendly if you just need to get a subset of
the data back.
The new domain would retain great visibility for the data and allow easy
access to subsets of the data, but you would still have the database
overhead.
Does the choice just depend on how often, in reality, you will need to get
the data back?
Thoughts appreciated,
John









TSM Client 5.3 Error ANS1028S when trying to backup directories with "EDC8134I Stale file handle"

2005-07-14 Thread Werner Nussbaumer
Hi all

for the following question a user gave me an answer:

Where a file server is that cranky and can't be fixed, I would take
such file systems out of TSM scheduled backups and instead
conditionally run them from a client OS script which performed tests
on the viability of the served file system before unleashing a dsmc
Incremental on it. I would not expect TSM client software to contend
with unreliable file systems.

Richard Sims

***

I mean, yes, I could just exclude those file spaces if I had known in
advance.

My question is: there are occasionally such file systems (I don't know what
developers are doing); however, it is hard to predict.

But still my question:

How can I avoid that dsmc simply stops further backups? Instead, how can I
instruct dsmc not to back up such filespaces, or how is it possible to create
some sort of "exclude" list? Nevertheless my question still remains: why does
dsmc produce an internal program error? I was able to scan through all
directories with a C or Java program. When there was a stale file handle my
programs continued to scan the next file handle.

Also, is there any PTF? What is the statement of IBM?

Thanks for your help.
Regards,
Werner Nussbaumer

***
***
***

Original Message:


> We have directories for which "ls" displays:
> 
> [EMAIL PROTECTED]: /data/test:==>ls -al
> ls: ./TST: EDC8134I Stale file handle.
> ls: ./TST.B0.AHO: EDC8134I Stale file handle.
> ...
> ...
> 
> When dsmc tries to backup this data, then the following error message is 
> displayed:
> 
> 
> [EMAIL PROTECTED]: /home/user:==>dsmc
> IBM Tivoli Storage Manager
> Command Line Backup/Archive Client Interface
>   Client Version 5, Release 3, Level 0.0
>   Client date/time: 12.07.2005 13:15:43
> (c) Copyright by IBM Corporation and other(s) 1990, 2004. All Rights Reserved.
> 
> Node Name: IMAGE1
> Session established with server ADSM: MVS
>   Server Version 5, Release 1, Level 9.0
>   Server date/time: 12.07.2005 13:15:43  Last access: 12.07.2005 13:15:22
> 
> tsm> incr /data/test/*
> 
> Incremental backup of volume '/data/test/*'
> ANS1999E Incremental processing of '/data/test/*' stopped.
> 
> 
> Total number of objects inspected:1
> Total number of objects backed up:0
> Total number of objects updated:  0
> Total number of objects rebound:  0
> Total number of objects deleted:  0
> Total number of objects expired:  0
> Total number of objects failed:   0
> Total number of bytes transferred:0  B
> Data transfer time:0.00 sec
> Network data transfer rate:0.00 KB/sec
> Aggregate data transfer rate:  0.00 KB/sec
> Objects compressed by:0%
> Elapsed processing time:   00:00:03
> ANS1028S An internal program error occurred.
> 
> dsmerror.log:
> 
> 12.07.2005 13:15:51 TransErrno: Unexpected error from GetFSInfo: statfs, errno = 1134
> 12.07.2005 13:15:53 TransErrno: Unexpected error from lstat, errno = 1134
> 12.07.2005 13:15:53 PrivIncrFileSpace: Received rc=131 from fioGetDirEntries:  /data  /test
> 12.07.2005 13:15:53 ANS1999E Incremental processing of '/data/test/*' stopped.
>
> 12.07.2005 13:15:53 ANS1028S An internal program error occurred.
> 
> 
> How is it possible to prevent dsmc from stopping the backup when it
> tries to back up a stale file?
> 
> Thanks for any help,
> regards
> Werner Nussbaumer


Re: Backup

2005-07-14 Thread Henrik Wahlstedt
Hi,

If the speed/duplex settings are OK for the NICs and switches, and you can FTP 
some large files between the Oracle and TSM servers at a reasonable speed, then 
you have to monitor CPU, disk I/O, etc. on both servers and the switches to 
find a pattern and track down the problem.
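The FTP check above boils down to timing a large transfer and converting it to KB/s, the same unit dsmc reports. A minimal helper, assuming you supply your own transfer callable (e.g. an ftplib RETR of a file of known size):

```python
import time

def transfer_rate_kb_s(nbytes, transfer):
    """Run transfer() and return the effective throughput in KB/s,
    given the number of bytes it is known to move."""
    start = time.time()
    transfer()
    elapsed = max(time.time() - start, 1e-9)  # guard against zero division
    return nbytes / 1024.0 / elapsed
```

Comparing this figure during quiet and busy periods against the rates dsmc reports should show whether the raw network path, rather than TSM itself, is the bottleneck.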

//Henrik



-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of Akash Jain
Sent: den 14 juli 2005 09:13
To: ADSM-L@VM.MARIST.EDU
Subject: Re: Backup

Yes, they are also configured at the same speed.


-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] Behalf Of Scott, Mark 
William
Sent: Thursday, 14 July 2005 11:59 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Backup

Have you checked the switches though?

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of Akash Jain
Sent: Thursday, 14 July 2005 2:26 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: Backup

Hi Mark,

All our network interfaces are configured at 100 Mbps only.

Media Speed Selected: 100 Mbps Full Duplex
Media Speed Running: 100 Mbps Full Duplex

Regards
Akash

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] Behalf Of Scott, Mark 
William
Sent: Thursday, 14 July 2005 11:49 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Backup

Hi Akash
We have found that if you are using a 100 Mb network, for example, you 
have to ensure that all interfaces are set to the same speed, e.g.:

Host 100 Mb/Full, switch port 100 Mb/Full, and backup server 100 Mb/Full

Hope this helps
Cheers Mark


-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of Akash Jain
Sent: Thursday, 14 July 2005 2:01 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Backup

Hi all,

I am in a very strange situation where my Oracle backup sometimes completes in
1 hour and sometimes takes only 16 minutes for the same data.

On scrutinising the logs, I have observed that it is a matter of network
performance: sometimes I get a throughput of 1206 KB/s (the data took 1 hour
to back up), and at 4506 KB/s the backup completes in 16 minutes.
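The two rates quoted above imply the same data volume, which supports the network-only theory. A quick check (figures taken from this message):

```python
def data_moved_mb(rate_kb_s, seconds):
    """Approximate MB transferred at a given average rate (KB/s)."""
    return rate_kb_s * seconds / 1024.0

slow_run = data_moved_mb(1206, 60 * 60)  # the 1-hour backup at 1206 KB/s
fast_run = data_moved_mb(4506, 16 * 60)  # the 16-minute backup at 4506 KB/s
# both work out to roughly 4.2 GB, i.e. the data volume is constant and
# only the effective network throughput differs between the two runs
```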

I am assuming this is a network issue, to be checked by the networking guys.

Has anyone faced this problem, or does anyone have any ideas about it?

AIX - 4.3.3.11
Data - Oracle Files
Backup Software : TSM


Regards
Akash Jain




Re: Backup

2005-07-14 Thread Akash Jain
Yes, they are also configured at the same speed.


-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] Behalf Of
Scott, Mark William
Sent: Thursday, 14 July 2005 11:59 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Backup

Have you checked the switches though?

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of
Akash Jain
Sent: Thursday, 14 July 2005 2:26 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: Backup

Hi Mark,

All our network interfaces are configured at 100 Mbps only.

Media Speed Selected: 100 Mbps Full Duplex
Media Speed Running: 100 Mbps Full Duplex

Regards
Akash

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] Behalf Of
Scott, Mark William
Sent: Thursday, 14 July 2005 11:49 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Backup

Hi Akash
We have found that if you are using a 100 Mb network, for example,
you have to ensure that all interfaces are set to the same speed, e.g.:

Host 100 Mb/Full, switch port 100 Mb/Full, and backup server 100 Mb/Full

Hope this helps
Cheers Mark


-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of
Akash Jain
Sent: Thursday, 14 July 2005 2:01 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Backup

Hi all,

I am in a very strange situation where my Oracle backup sometimes
completes in 1 hour and sometimes takes only 16 minutes for the same data.

On scrutinising the logs, I have observed that it is a matter of network
performance: sometimes I get a throughput of 1206 KB/s (the data took
1 hour to back up), and at 4506 KB/s the backup completes in 16 minutes.

I am assuming this is a network issue, to be checked by the networking
guys.

Has anyone faced this problem, or does anyone have any ideas about it?

AIX - 4.3.3.11
Data - Oracle Files
Backup Software : TSM


Regards
Akash Jain