Re: archive failure (continued)

2003-04-02 Thread Stapleton, Mark
From: Steve Harris [mailto:[EMAIL PROTECTED] 
 I have two objections to backupsets.  The first is that API 
 data is not covered.  The second is that you are not supposed 
 to make new backups for a node whilst the backupset is being 
 created - this could cause horrendous scheduling difficulties 
 in my environment.
 
 Is this second restriction a real issue or is it not a 
 concern in practice?

I couldn't tell you if the second restriction is a real issue; I'm not
familiar with your environment or your scheduling difficulties.

--
Mark Stapleton ([EMAIL PROTECTED]) 


Re: archive failure (continued)

2003-04-01 Thread J M
On Mon, 31 Mar 2003 07:16 pm, it was written:
You could try running monthly incrementals under a
different nodename (ie: create another dsm.opt file
eg: dsmmthly.opt) to your TSM (same or even dedicated)
server.
BTW, running monthly incrementals will not facilitate your long-term
storage nearly as nicely as an archive will.
This is very true for our environment, since we will need to keep some data
literally forever, and other data for 1 to 10 years. So vital records
retention/archive is definitely a requirement for us- and having accurate
snapshots will be key.
The original poster commented that he could not run archives and backups
off the same client; I'd be interested in seeing what is going on with
his TSM environment.
The problem is that some archive jobs are taking a very long time to finish,
and end up overlapping with daily backup processing jobs. Part of our issue
is likely DB I/O disk contention (a 70 GB TSM DB on AIX, across three 36 GB
SCSI disks!).
From: Steven Pemberton [mailto:[EMAIL PROTECTED]
Have you considered creating a monthly BackupSet tape for
each of your file servers?

BackupSets have several advantages over a full archive
for monthly retention:

1/ The file server doesn't need to send any additional data
for the monthly retention. There is no need for a
special monthly backup. The backupset is
created from existing incremental backup data already in the
TSM server.

2/ The BackupSet contents are indexed on the backupset tapes,
and not in the TSM database. Therefore your database doesn't
need to grow as you retain the monthly backupsets.
As big a fan of backupsets as I am, I feel the need to point out the
disadvantage of backupsets: you can't browse through them if you don't know
the name of a desired file or its directory location. You can run Q
BACKUPSETCONTENTS, but then you'll have to grep through a *very* long
output.
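
(For illustration only: one way to make that long output searchable is to
run the query through the administrative command-line client and filter it.
The admin credentials, node name, backup set name, and search string below
are all hypothetical.)

  dsmadmc -id=admin -password=secret "query backupsetcontents FILESERVER1 MONTHLY.12345" | grep -i "budget.xls"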
In our environment, a backupset would be ideal to keep our TSM DB from
growing constantly due to archives, except for the fact we are limited in
the number of tape drives available to process the backupset data migration
tape-to-tape. Any ideas to circumvent this physical limitation would be much
appreciated-
Many Thanks-- John



Re: archive failure (continued)

2003-04-01 Thread Steven Pemberton
 As big a fan of backupsets as I am, I feel the need to point out the
 disadvantage of backupsets: you can't browse through them if you don't know
 the name of a desired file or its directory location. You can run Q
 BACKUPSETCONTENTS, but then you'll have to grep through a *very* long
 output.

 In our environment, a backupset would be ideal to keep our TSM DB from
 growing constantly due to archives, except for the fact we are limited in
 the number of tape drives available to process the backupset data migration
 tape-to-tape. Any ideas to circumvent this physical limitation would be much
 appreciated-

That's easy - buy more tape drives. :)

(A)
Actually, I'm almost serious about that; if you need more resources, then you 
need more resources. It's a simple problem of money. :)

But, back to reality...

(B)
The BackupSet needs to copy only the ACTIVE version of the files in the 
filespaces you are planning to retain for an extended period. By using disk 
storage pools (perhaps specific ones dedicated to the data to be archived), 
with CACHE and/or MIGDELAY, you could try to retain more of your ACTIVE 
backup versions on disk?
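
(A minimal sketch of the server-side settings that idea involves, assuming a
random-access disk pool named DISKPOOL; the pool name and values are only
placeholders:)

  update stgpool DISKPOOL cache=yes
  update stgpool DISKPOOL migdelay=30 migcontinue=yes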

Or...

(C)
Another idea might be to create a loopback TSM server definition. This would 
allow you to generate the backupsets from local storage pools, to the 
loopback TSM server definition, where they would initially be saved to a 
disk storage pool. They could then later be migrated onto a separate tape 
storage pool. This way, with enough disk, you would only need a single tape 
drive during the backupset generation.
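
(Very roughly, and with every name, password, and address hypothetical, the
definitions might look like the following; as noted below, whether Tivoli
would support the configuration is an open question.)

  /* Point a server definition on the TSM server back at itself */
  define server LOOPBACK serverpassword=secret hladdress=127.0.0.1 lladdress=1500
  register node LOOPBACK secret type=server
  /* A SERVER-type device class stores its data as virtual volumes there */
  define devclass LOOPCLASS devtype=server servername=LOOPBACK maxcapacity=5g
  /* Generate the backupset onto the virtual-volume device class */
  generate backupset FILESERVER1 MONTHLY devclass=LOOPCLASS retention=365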

Using a loopback TSM server to store backupsets has some pros and cons.

The benefits are potentially using fewer tape drives, and also consolidating
multiple backupsets onto a single tape. Normally each backupset requires a 
unique tape, but when using a loopback TSM server each backupset uses a 
unique virtual volume, multiples of which can be stored on each physical 
tape.

The downside of storing backupsets on virtual volumes is that you lose the
ability to restore the backupset in isolation from the TSM server. The index 
of the physical tape used to store the backupset virtual volumes is kept in 
the TSM database, so you will need access to the TSM server to retrieve the 
backupset virtual volumes.

Another important downside might be whether Tivoli would support this 
innovative configuration. :)

Or, finally...

(D)
Try to plan your normal policy to cater for 95% of your data restoration 
requirements. Restoring files from monthly archives should be the 
exception. Try to limit the data to be archived (ie. included in a 
BackupSet) to only the critical data. Perhaps separate important business 
data onto its own filespace and only include that filespace in the
backupset?
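
(As a rough sketch with hypothetical names, a backupset cut from a single
critical filespace might look like:)

  generate backupset FILESERVER1 CRITICAL \\fileserver1\e$ devclass=LTOCLASS retention=3650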

To assist in the recovery of files from a backupset you might consider saving 
a text file listing the contents of each backupset at creation time. You 
could use either q backupsetcontents or an SQL query to produce this list.
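
(For instance, either of the following could be scripted at generation time;
the credentials, node name, backup set name, and output file are hypothetical:)

  dsmadmc -id=admin -password=secret -outfile=fileserver1_200304.txt "query backupsetcontents FILESERVER1 MONTHLY.12345"
  dsmadmc -id=admin -password=secret "select filespace_name, hl_name, ll_name from backups where node_name='FILESERVER1' and state='ACTIVE_VERSION'"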

Another issue with backupsets is that the GUI cannot restore an individual 
file, only the entire backupset. You need to use the DSMC command line, and 
specify the filename, to restore an individual file from a backupset.
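
(On a 4.2/5.1-era client the command is along these lines; the backup set
name and file path are hypothetical:)

  dsmc restore backupset MONTHLY.12345 "\\fileserver1\e$\notes\budget.xls" -location=tape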

Though, my favorite option is still (A): buy more tape drives. However, (D)
sounds good too. :)

Steven P.

-- 
Steven Pemberton  Mobile: +61 4 1833 5136
Innovative Business Knowledge   Office: +61 3 9820 5811
Senior Enterprise Management ConsultantFax: +61 3 9820 9907


Re: archive failure (continued)

2003-04-01 Thread Steve Harris
Mark,

I have two objections to backupsets.  The first is that API data is not covered.  The 
second is that you are not supposed to make new backups for a node whilst the 
backupset is being created - this could cause horrendous scheduling difficulties in my 
environment.

Is this second restriction a real issue or is it not a concern in practice?

Regards

Steve Harris
AIX and TSM Admin
Queensland Health, Brisbane Australia

 [EMAIL PROTECTED] 01/04/2003 14:21:37 
On Mon, 31 Mar 2003 07:16 pm, it was written:
You could try running monthly incrementals under a 
different nodename (ie: create another dsm.opt file 
eg: dsmmthly.opt) to your TSM (same or even dedicated) 
server.

BTW, running monthly incrementals will not facilitate your long-term
storage nearly as nicely as an archive will. 

The original poster commented that he could not run archives and backups
off the same client; I'd be interested in seeing what is going on with
his TSM environment.

From: Steven Pemberton [mailto:[EMAIL PROTECTED] 
Have you considered creating a monthly BackupSet tape for 
each of your file servers?

BackupSets have several advantages over a full archive 
for monthly retention:
 
1/ The file server doesn't need to send any additional data 
for the monthly retention. There is no need for a 
special monthly backup. The backupset is 
created from existing incremental backup data already in the 
TSM server.
 
2/ The BackupSet contents are indexed on the backupset tapes, 
and not in the TSM database. Therefore your database doesn't 
need to grow as you retain the monthly backupsets.

As big a fan of backupsets as I am, I feel the need to point out the
disadvantage of backupsets: you can't browse through them if you don't know
the name of a desired file or its directory location. You can run Q
BACKUPSETCONTENTS, but then you'll have to grep through a *very* long
output.

--
Mark Stapleton ([EMAIL PROTECTED]) 





Re: archive failure (continued)

2003-03-31 Thread Shamim, Rizwan (IDS EMEA)
You could try running monthly incrementals under a different nodename (ie:
create another dsm.opt file eg: dsmmthly.opt) to your TSM (same or even
dedicated) server.

We have the financial regulatory requirement also and regular archives are
not the way to go.

   
Rizwan Shamim
Central Backup Service (CBS)
Merrill Lynch Europe PLC
Tel: 020 7995 1176
Mobile: 0787 625 8246
mailto:[EMAIL PROTECTED]



 -Original Message-
 From: J M [SMTP:[EMAIL PROTECTED]
 Sent: Monday, March 31, 2003 1:09 AM
 To:   [EMAIL PROTECTED]
 Subject:  Re: archive failure (continued)

 Hey TSMers-

 We have archive jobs that repeatedly fail or run for long periods of time
 on
 our large file servers (200 GB, NT, TSM 4.2 Client, TSM 4.2 Server). The
 jobs will not run in parallel with daily backups- so we are looking for a
 better solution-

 We are considering moving from 100 Mb to Gigabit, and possibly creating a
 separate schedule/nodename for archive processing. Also- since archive
 retentions are going to dramatically increase due to regulatory rules, we
 are considering implementing a dedicated Archive processing TSM server-
 mainly to avoid bloating the TSM databases which currently handle
 backup/recovery.

 All thoughts or tips are welcome on these ideas--

 Many Thanks,

 John

 From: Andrew Raibeck [EMAIL PROTECTED]
 Reply-To: ADSM: Dist Stor Manager [EMAIL PROTECTED]
 To: [EMAIL PROTECTED]
 Subject: Re: archive failure
 Date: Tue, 11 Feb 2003 09:57:41 -0700
 
 If you are referring to the ARCHIVE_DIR and ARCHIVE_INCLUDES variables,
 you won't find anything in the TSM doc about these. These are arbitrary
 environment variables created by your user.
 
 After the environment variables were created, was the scheduler restarted
 so that it would pick them up? Also, have you confirmed that these are
 indeed system (and not user) variables? If they are user variables, then
 unless the scheduler is using the same account, the scheduler's
 environment won't have these defined.
 
 If none of the above addresses the issue, then you can modify the script
 to include some diagnostics. My suggestion:
 
 @echo off
 set diagfile=c:\diagfile.txt
 
 REM *** Confirm environment variables
 echo *** ENVIRONMENT VARIABLES *** > %diagfile%
 set >> %diagfile%
 
 REM *** Confirm current directory
 echo *** CURRENT DIRECTORY *** >> %diagfile%
 cd >> %diagfile%
 
 cd /d %ARCHIVE_DIR%
 
 REM *** Confirm current directory
 echo *** TSM DIRECTORY *** >> %diagfile%
 cd >> %diagfile%
 
 REM *** Run the archive
 echo *** RUNNING ARCHIVE COMMAND NOW *** >> %diagfile%
 dsmc archive %ARCHIVE_INCLUDES% -subdir=yes -archmc=month-mgt >> %diagfile% 2>&1
 set tsmrc=%errorlevel%
 echo *** ARCHIVE COMPLETE RC = %tsmrc% *** >> %diagfile%
 
 Let the scheduler run the command, then check c:\diagfile.txt to see what
 it shows.
 
 Regards,
 
 Andy
 
 Andy Raibeck
 IBM Software Group
 Tivoli Storage Manager Client Development
 Internal Notes e-mail: Andrew Raibeck/Tucson/[EMAIL PROTECTED]
 Internet e-mail: [EMAIL PROTECTED] (change eye to i to reply)
 
 The only dumb question is the one that goes unasked.
 The command line is your friend.
 Good enough is the enemy of excellence.
 
 
 
 Someone here has gone off and created their own cmd file that TSM
 launches
 to do an archive. I can't seem to find anything in the help file on these
 variables. Maybe someone more familiar with variables can please tell me
 if
 this will work. From what I can tell looking at the return codes it looks
 like it is saying it found nothing to back up and reports a failure.
 
 I'm told if they run this cmd file manually from the server it works. Why
 then would TSM have a problem with it when all it's doing is calling the
 same file?
 
 
 MonthArchive.cmd
 chdir /D %ARCHIVE_DIR%
 dsmc.exe archive -subdir=yes -archmc=month-mgt %ARCHIVE_INCLUDES%
 
 System Variables
 ARCHIVE_DIR=E:\Tivoli\TSM\baclient
 ARCHIVE_INCLUDES=C:\SAIC\*.* E:\NOTES\*.*
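 
 (If the scheduler service's account can't see those system variables, one
 hedged workaround is to set them inside the .cmd itself, so the script no
 longer depends on the machine environment; the paths are copied from above.)
 
 REM MonthArchive.cmd - self-contained variant
 set ARCHIVE_DIR=E:\Tivoli\TSM\baclient
 set ARCHIVE_INCLUDES=C:\SAIC\*.* E:\NOTES\*.*
 chdir /D %ARCHIVE_DIR%
 dsmc.exe archive -subdir=yes -archmc=month-mgt %ARCHIVE_INCLUDES%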
 
 
 dsmerror.log excerpt
 02/01/2003 17:59:14 ANS1079E No file specification entered
 02/01/2003 17:59:15 ANS1909E The scheduled command failed.
 02/01/2003 17:59:15 ANS1512E Scheduled event 'NOTES-MONTH' failed.
 Return
 code = 12
 
 
 dsmsched.log excerpt
 01/31/2003 19:26:20 --- SCHEDULEREC QUERY BEGIN
 01/31/2003 19:26:20 --- SCHEDULEREC QUERY END
 01/31/2003 19:26:20 Next operation scheduled:
 01/31/2003 19:26:20
 
 01/31/2003 19:26:20 Schedule Name: NOTES-MONTH
 01/31/2003 19:26:20 Action:Command
 01/31/2003 19:26:20 Objects:   c:\commands\MonthArchive.cmd
 01/31/2003 19:26:20 Options:
 01/31/2003 19:26:20 Server Window Start:   18:00:00 on 02/01/2003
 01/31/2003 19:26:20

Re: archive failure (continued)

2003-03-31 Thread Stapleton, Mark
On Mon, 31 Mar 2003 07:16 pm, it was written:
You could try running monthly incrementals under a 
different nodename (ie: create another dsm.opt file 
eg: dsmmthly.opt) to your TSM (same or even dedicated) 
server.

BTW, running monthly incrementals will not facilitate your long-term
storage nearly as nicely as an archive will. 

The original poster commented that he could not run archives and backups
off the same client; I'd be interested in seeing what is going on with
his TSM environment.

From: Steven Pemberton [mailto:[EMAIL PROTECTED] 
Have you considered creating a monthly BackupSet tape for 
each of your file servers?

BackupSets have several advantages over a full archive 
for monthly retention:
 
1/ The file server doesn't need to send any additional data 
for the monthly retention. There is no need for a 
special monthly backup. The backupset is 
created from existing incremental backup data already in the 
TSM server.
 
2/ The BackupSet contents are indexed on the backupset tapes, 
and not in the TSM database. Therefore your database doesn't 
need to grow as you retain the monthly backupsets.

As big a fan of backupsets as I am, I feel the need to point out the
disadvantage of backupsets: you can't browse through them if you don't know
the name of a desired file or its directory location. You can run Q
BACKUPSETCONTENTS, but then you'll have to grep through a *very* long
output.

--
Mark Stapleton ([EMAIL PROTECTED]) 


Re: archive failure (continued)

2003-03-30 Thread J M
Hey TSMers-

We have archive jobs that repeatedly fail or run for long periods of time on
our large file servers (200 GB, NT, TSM 4.2 Client, TSM 4.2 Server). The
jobs will not run in parallel with daily backups- so we are looking for a
better solution-
We are considering moving from 100 Mb to Gigabit, and possibly creating a
separate schedule/nodename for archive processing. Also- since archive
retentions are going to dramatically increase due to regulatory rules, we
are considering implementing a dedicated Archive processing TSM server-
mainly to avoid bloating the TSM databases which currently handle
backup/recovery.
All thoughts or tips are welcome on these ideas--

Many Thanks,

John

From: Andrew Raibeck [EMAIL PROTECTED]
Reply-To: ADSM: Dist Stor Manager [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Subject: Re: archive failure
Date: Tue, 11 Feb 2003 09:57:41 -0700
If you are referring to the ARCHIVE_DIR and ARCHIVE_INCLUDES variables,
you won't find anything in the TSM doc about these. These are arbitrary
environment variables created by your user.
After the environment variables were created, was the scheduler restarted
so that it would pick them up? Also, have you confirmed that these are
indeed system (and not user) variables? If they are user variables, then
unless the scheduler is using the same account, the scheduler's
environment won't have these defined.
If none of the above addresses the issue, then you can modify the script
to include some diagnostics. My suggestion:
   @echo off
   set diagfile=c:\diagfile.txt
   REM *** Confirm environment variables
   echo *** ENVIRONMENT VARIABLES *** > %diagfile%
   set >> %diagfile%
   REM *** Confirm current directory
   echo *** CURRENT DIRECTORY *** >> %diagfile%
   cd >> %diagfile%
   cd /d %ARCHIVE_DIR%

   REM *** Confirm current directory
   echo *** TSM DIRECTORY *** >> %diagfile%
   cd >> %diagfile%
   REM *** Run the archive
   echo *** RUNNING ARCHIVE COMMAND NOW *** >> %diagfile%
   dsmc archive %ARCHIVE_INCLUDES% -subdir=yes -archmc=month-mgt >> %diagfile% 2>&1
   set tsmrc=%errorlevel%
   echo *** ARCHIVE COMPLETE RC = %tsmrc% *** >> %diagfile%
Let the scheduler run the command, then check c:\diagfile.txt to see what
it shows.
Regards,

Andy

Andy Raibeck
IBM Software Group
Tivoli Storage Manager Client Development
Internal Notes e-mail: Andrew Raibeck/Tucson/[EMAIL PROTECTED]
Internet e-mail: [EMAIL PROTECTED] (change eye to i to reply)
The only dumb question is the one that goes unasked.
The command line is your friend.
Good enough is the enemy of excellence.


Someone here has gone off and created their own cmd file that TSM launches
to do an archive. I can't seem to find anything in the help file on these
variables. Maybe someone more familiar with variables can please tell me
if
this will work. From what I can tell looking at the return codes it looks
like it is saying it found nothing to back up and reports a failure.
I'm told if they run this cmd file manually from the server it works. Why
then would TSM have a problem with it when all it's doing is calling the
same file?
MonthArchive.cmd
chdir /D %ARCHIVE_DIR%
dsmc.exe archive -subdir=yes -archmc=month-mgt %ARCHIVE_INCLUDES%
System Variables
ARCHIVE_DIR=E:\Tivoli\TSM\baclient
ARCHIVE_INCLUDES=C:\SAIC\*.* E:\NOTES\*.*
dsmerror.log excerpt
02/01/2003 17:59:14 ANS1079E No file specification entered
02/01/2003 17:59:15 ANS1909E The scheduled command failed.
02/01/2003 17:59:15 ANS1512E Scheduled event 'NOTES-MONTH' failed.  Return
code = 12
dsmsched.log excerpt
01/31/2003 19:26:20 --- SCHEDULEREC QUERY BEGIN
01/31/2003 19:26:20 --- SCHEDULEREC QUERY END
01/31/2003 19:26:20 Next operation scheduled:
01/31/2003 19:26:20

01/31/2003 19:26:20 Schedule Name: NOTES-MONTH
01/31/2003 19:26:20 Action:Command
01/31/2003 19:26:20 Objects:   c:\commands\MonthArchive.cmd
01/31/2003 19:26:20 Options:
01/31/2003 19:26:20 Server Window Start:   18:00:00 on 02/01/2003
01/31/2003 19:26:20

01/31/2003 19:26:20 Waiting to be contacted by the server.
02/01/2003 17:59:12 TCP/IP accepted connection from server.
02/01/2003 17:59:12 Querying server for next scheduled event.
02/01/2003 17:59:12 Node Name: CP-ITS-NOTESMTA
02/01/2003 17:59:12 Session established with server ADSM: AIX-RS/6000
02/01/2003 17:59:12   Server Version 5, Release 1, Level 5.2
02/01/2003 17:59:12   Server date/time: 02/01/2003 18:00:24  Last access:
01/31/2003 19:27:32
02/01/2003 17:59:12 --- SCHEDULEREC QUERY BEGIN
02/01/2003 17:59:12 --- SCHEDULEREC QUERY END
02/01/2003 17:59:12 Next operation scheduled:
02/01/2003 17:59:12

02/01/2003 17:59:12 Schedule Name: NOTES-MONTH
02/01/2003 17:59:12 Action:Command
02/01/2003 17:59:12 Objects:   c:\commands\MonthArchive.cmd
02/01/2003 17:59:12 Options:
02/01/2003