TSM Server 5.1.6.5 on AIX 4.3
TSM Client 5.1.6.0 on HPUX 11i

Hi,

we are doing image backup of raw devices every day directly to tapes.
The copygroup settings to which these backups are bound are:

           Policy Domain Name: HPUX-MTEL
              Policy Set Name: HPUX-MTEL
              Mgmt Class Name: LEAPT
              Copy Group Name: STANDARD
              Copy Group Type: Backup
         Versions Data Exists: 5
        Versions Data Deleted: 1
        Retain Extra Versions: 15
          Retain Only Version: 30
                    Copy Mode: Absolute
           Copy Serialization: Shared Static
               Copy Frequency: 0
             Copy Destination: LEAPT-LTO
Last Update by (administrator): MILIEVA
        Last Update Date/Time: 06/04/2003 10:56:51
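For reference, detailed copy group output like the above can be produced with a query such as this (domain, policy set, and class names taken from the listing; command sketch only):

           q copygroup HPUX-MTEL HPUX-MTEL LEAPT STANDARD f=d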

Every backup goes in parallel to two tapes, which we then make readonly for
operational reasons, so normally when expiration runs the two oldest tapes
return to scratch.
But one backup did not complete successfully, so only half of the raw devices
were backed up.
Since I knew there had been no changes on the client, I deleted the volumes
containing the broken backup and started a new one.
Since that day we have two tapes with some expired data on them:
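The broken backup's volumes were removed with something along these lines (volume name hypothetical; DISCARDDATA=YES drops the data they held):

           delete volume T000xx discarddata=yes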

Volume Name               Storage Pool Device      Estimated    Pct  Volume
                          Name         Class Name  Capacity(MB) Util Status
T00005                    LEAPT-LTO    L700-LTO1   290,071.6   73.8  Filling
T00006                    LEAPT-LTO    L700-LTO1   288,035.2  100.0  Filling
T00010                    LEAPT-LTO    L700-LTO1   288,051.2   29.2  Filling
T00019                    LEAPT-LTO    L700-LTO1   290,087.6  100.0  Filling
T00027                    LEAPT-LTO    L700-LTO1   290,087.6  100.0  Filling
T00030                    LEAPT-LTO    L700-LTO1   287,219.1  100.0  Filling
T00035                    LEAPT-LTO    L700-LTO1   290,903.7  100.0  Filling
T00079                    LEAPT-LTO    L700-LTO1   288,035.2  100.0  Filling
T00092                    LEAPT-LTO    L700-LTO1   288,151.4  100.0  Filling
T00093                    LEAPT-LTO    L700-LTO1   289,971.4  100.0  Filling

A select on the contents table for that node shows 4 versions of the raw
devices belonging to the broken backup, and 5 of the others.
But there have been 4 successful backups of that node since then. I expected
the next backup to bring all raw devices up to 5 versions, but the server
keeps a different number of versions for different data bound to the same
management class!
And I'm sure that all data from that node goes to exactly that management class.
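For what it's worth, the per-object version count can be sketched with a select like this (node name hypothetical; note that with backups going to two tapes in parallel, each version may appear on two volumes, so the counts need halving):

           select file_name, count(*) from contents -
             where node_name='MTEL_NODE' group by file_name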

What do you think - is this normal behaviour?


Maria Ilieva
