Hi Uwe,

Yes, I defined it using WORM, because these new tapes are WORM tapes:

tsm: ARDTSM1>q devcl ULTRIUM5W f=d

             Device Class Name: ULTRIUM5W
        Device Access Strategy: Sequential
            Storage Pool Count: 1
                   Device Type: LTO
                        Format: DRIVE
         Est/Max Capacity (MB): 1,536,000.0
                   Mount Limit: DRIVES
              Mount Wait (min): 5
         Mount Retention (min): 1
                  Label Prefix: ADSM
                       Library: LIBIBM3500
                     Directory:
                   Server Name:
                  Retry Period:
                Retry Interval:
                        Shared:
            High-level Address:
              Minimum Capacity:
                          WORM: Yes
              Drive Encryption: Off
               Scaled Capacity:
       Primary Allocation (MB):
     Secondary Allocation (MB):
                   Compression:
                     Retention:
                    Protection:
               Expiration Date:
                          Unit:
      Logical Block Protection: No
Last Update by (administrator): ADMIN
         Last Update Date/Time: 03/23/20   14:33:03


tsm: ARDTSM1>
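With WORM=Yes on the device class, the server will only mount scratch cartridges that it has recognized as WORM media; ordinary (rewritable) LTO-5 scratch tapes are skipped even though they show as Scratch. One way to verify what the server recorded for a cartridge at check-in is a detailed library volume query (a sketch, not output from this server; the fields shown, including Media Type, depend on the library type and server level):

tsm: ARDTSM1>q libvol LIBIBM3500 RDP000LV f=d

If the reported media type does not correspond to a WORM cartridge, the tapes are either not WORM media or were checked in before the drive could identify them as WORM.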

On 3/23/2020 3:59 PM, Uwe Schreiber wrote:
Hello Lucian,

can you please provide the output of "q devcl ULTRIUM5W f=d"?
Did you define it using the WORM tape format while no WORM tapes are available as scratch?

Regards, Uwe

-----Original Message-----
From: ADSM: Dist Stor Manager <ADSM-L@VM.MARIST.EDU> On Behalf Of Lucian Vlaicu
Sent: Monday, 23 March 2020 14:28
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] Scratch tapes issue

Hi all,

I defined a new storage pool on an existing Tivoli server, like this:

tsm: ARDTSM1>q stgpool STDHRDISKW f=d

                      Storage Pool Name: STDHRDISKW
                      Storage Pool Type: Primary
                      Device Class Name: DISK
                     Estimated Capacity: 438 G
                     Space Trigger Util: 0.0
                               Pct Util: 0.0
                               Pct Migr: 0.0
                            Pct Logical: 100.0
                           High Mig Pct: 80
                            Low Mig Pct: 20
                        Migration Delay: 0
                     Migration Continue: Yes
                    Migration Processes: 1
                  Reclamation Processes:
                      Next Storage Pool: STDHRLTO5W
                   Reclaim Storage Pool:
                 Maximum Size Threshold: 32 G
                                 Access: Read/Write
                            Description:
                      Overflow Location:
                  Cache Migrated Files?: Yes
                             Collocate?:
                  Reclamation Threshold:
              Offsite Reclamation Limit:
        Maximum Scratch Volumes Allowed:
         Number of Scratch Volumes Used:
          Delay Period for Volume Reuse:
                 Migration in Progress?: No
                   Amount Migrated (MB): 0.00
       Elapsed Migration Time (seconds): 0
               Reclamation in Progress?:
         Last Update by (administrator): ADMIN
                  Last Update Date/Time: 03/16/20   23:46:18
               Storage Pool Data Format: Native
                   Copy Storage Pool(s):
                    Active Data Pool(s):
                Continue Copy on Error?: Yes
                               CRC Data: No
                       Reclamation Type:
            Overwrite Data when Deleted:
                      Deduplicate Data?: No
   Processes For Identifying Duplicates:
              Duplicate Data Not Stored:
                         Auto-copy Mode: Client
  Contains Data Deduplicated by Client?: No


tsm: ARDTSM1>q stgpool STDHRLTO5W f=d

                      Storage Pool Name: STDHRLTO5W
                      Storage Pool Type: Primary
                      Device Class Name: ULTRIUM5W
                     Estimated Capacity: 0.0 M
                     Space Trigger Util:
                               Pct Util: 0.0
                               Pct Migr: 0.0
                            Pct Logical: 0.0
                           High Mig Pct: 90
                            Low Mig Pct: 70
                        Migration Delay: 0
                     Migration Continue: Yes
                    Migration Processes: 1
                  Reclamation Processes: 1
                      Next Storage Pool:
                   Reclaim Storage Pool:
                 Maximum Size Threshold: No Limit
                                 Access: Read/Write
                            Description:
                      Overflow Location:
                  Cache Migrated Files?:
                             Collocate?: Group
                  Reclamation Threshold: 50
              Offsite Reclamation Limit:
        Maximum Scratch Volumes Allowed: 1,000
         Number of Scratch Volumes Used: 0
          Delay Period for Volume Reuse: 0 Day(s)
                 Migration in Progress?: No
                   Amount Migrated (MB): 0.00
       Elapsed Migration Time (seconds): 0
               Reclamation in Progress?: No
         Last Update by (administrator): ADMIN
                  Last Update Date/Time: 03/16/20   23:43:45
               Storage Pool Data Format: Native
                   Copy Storage Pool(s):
                    Active Data Pool(s):
                Continue Copy on Error?: Yes
                               CRC Data: No
                       Reclamation Type: Threshold
            Overwrite Data when Deleted:
                      Deduplicate Data?: No
   Processes For Identifying Duplicates:
              Duplicate Data Not Stored:
                         Auto-copy Mode: Client
  Contains Data Deduplicated by Client?: No


tsm: ARDTSM1>


and I also have scratch tapes in the library:

tsm: ARDTSM1>q libvol

Library Name     Volume Name     Status      Owner     Last Use     Home        Device
                                                                    Element     Type
------------     -----------     -------     -----     --------     -------     ------
LIBIBM3500       RDP000LV        Scratch                            2,734       LTO
LIBIBM3500       RDP003LV        Scratch                            2,743       LTO
LIBIBM3500       RDP004LV        Scratch                            2,755       LTO
LIBIBM3500       RDP005LV        Scratch                            2,760       LTO
LIBIBM3500       RDP006LV        Scratch                            2,762       LTO
LIBIBM3500       RDP007LV        Scratch                            2,777       LTO
LIBIBM3500       RDP008LV        Scratch                            2,794       LTO
LIBIBM3500       RDP009LV        Scratch                            2,823       LTO
LIBIBM3500       RDP010LV        Scratch                            2,824       LTO
LIBIBM3500       RDP011LV        Scratch                            2,904       LTO


But when I try to archive something, I get this error:

"ANR1639I Attributes changed for node HRARCH1P: TCP Name from vme-file01 to 
vme-file02, TCP Address from 192.168.81.63 to 192.168.81.57, GUID from 
86.73.fd.ce.e1.ae.11.e3.9a.b9.ac.16.2d.b4.c2.58 to 
87.48.e7.28.2c.40.11.e4.b3.c3.6c.3b.e5.b9.54.48.
ANR1405W Scratch volume mount request denied - no scratch volume available.
ANR1405W Scratch volume mount request denied - no scratch volume available.
ANR1405W Scratch volume mount request denied - no scratch volume available.
ANR0522W Transaction failed for session 4053 for node HRARCH1P (Linux
x86-64) - no space available in storage pool STDHRDISKW and all successor pools.
ANR0480W Session 4053 for node HRARCH1P (Linux x86-64) terminated - connection 
with client severed.
"


I have no idea how to debug or fix this.
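The ANR1405W messages mean the server found no scratch volume eligible for the device class behind STDHRLTO5W; a tape listed as Scratch in q libvol can still be ineligible if the device class requires WORM media. Two standard checks that could narrow this down (a sketch of generic administrative commands, not output from this server):

tsm: ARDTSM1>q libvol LIBIBM3500 f=d
tsm: ARDTSM1>q actlog search=ANR1405W

The first shows how each cartridge was classified at check-in; the second finds any companion messages around the failed mounts in the activity log.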

Thank you very much


