I have a recycler issue: it does not finish draining tapes in a timely
fashion, if at all. I have had a similar problem in the past, but in
those cases "showqueue -v" would show the files with issues and
explain what each issue was. I don't see the issue here. Sometimes it
works as I expect: two recycler runs and the tape has been relabeled. Other
times it goes on for months without completely draining...
Here is an example where, once the tape was flagged to recycle, it still had
not finished draining after 8 runs of the recycler.
[root@samqfs1 /qfs1 #] cat /var/tmp/recycler.log | grep ti:100858 | grep archive
Waiting for VSN ti:100858 to drain, it still has 456 active archive copies.
Waiting for VSN ti:100858 to drain, it still has 17 active archive copies.
Waiting for VSN ti:100858 to drain, it still has 17 active archive copies.
Waiting for VSN ti:100858 to drain, it still has 17 active archive copies.
Waiting for VSN ti:100858 to drain, it still has 17 active archive copies.
Waiting for VSN ti:100858 to drain, it still has 17 active archive copies.
Waiting for VSN ti:100858 to drain, it still has 17 active archive copies.
Waiting for VSN ti:100858 to drain, it still has 17 active archive copies.
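(In case it is useful to anyone chasing the same thing: the drain progress per VSN can be pulled out of recycler.log with a short script. This is just a minimal sketch I use against the message format shown above; the log path /var/tmp/recycler.log is from my setup.)

```python
import re

# Matches the recycler's "Waiting for VSN ..." messages, as seen in the
# recycler.log excerpt above.
PATTERN = re.compile(
    r"Waiting for VSN (\S+) to drain, it still has (\d+) active archive copies\.")

def drain_history(lines):
    """Return {vsn: [remaining-copy counts, one per recycler run]}."""
    history = {}
    for line in lines:
        m = PATTERN.search(line)
        if m:
            history.setdefault(m.group(1), []).append(int(m.group(2)))
    return history

# Example against the first few lines of the excerpt:
sample = [
    "Waiting for VSN ti:100858 to drain, it still has 456 active archive copies.",
    "Waiting for VSN ti:100858 to drain, it still has 17 active archive copies.",
    "Waiting for VSN ti:100858 to drain, it still has 17 active archive copies.",
]
print(drain_history(sample))  # {'ti:100858': [456, 17, 17]}
```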
sfind confirms the 17 active archive copies.
[root@samqfs1 /qfs1 #] sfind . -vsn 100858
./cubs/cubsdb1/HPPRD_18Nov2008/oracle/hrapp.dbf.Z
./cubs/cubsdb1/HPPRD_18Nov2008/oracle/hrapp1.dbf.Z
./cubs/cubsdb1/HPPRD_18Nov2008/oracle/hrlarge.dbf.Z
./cubs/cubsdb1/HPPRD_18Nov2008/oracle/psimage.dbf.Z
./cubs/cubsdb1/HPPRD_18Nov2008/oracle/psindex.dbf.Z
./cubs/cubsdb1/HPPRD_18Nov2008/oracle/pstemp.dbf.Z
./cubs/cubsdb1/HPPRD_18Nov2008/oracle/psundo.Z
./cubs/cubsdb1/HPPRD_18Nov2008/oracle/pylarge.dbf.Z
./cubs/cubsdb1/HPPRD_20Nov2008/oracle/hphist.dbf.Z
./cubs/cubsdb1/HPPRD_20Nov2008/oracle/psindex.dbf.Z
./cubs/cubsdb1/HPPRD_20Nov2008/oracle/pstemp.dbf.Z
./cubs/cubsdb1/HPPRD_20Nov2008/oracle/psundo.Z
./cubs/cubsdb1/HPPRD_20Nov2008/oracle/pylarge.dbf.Z
./cubs/cubsdb4/PSDWH_19Nov2008/oracle/dw_data.dbf.Z
./cubs/cubsdb4/PSDWH_19Nov2008/oracle/temp.dbf.Z
./cubs/cubsdb4/PSDWH_19Nov2008/oracle/undo.dbf.Z
./poolenode/poolenode_clone/QPC009/1106744052.0
[root@samqfs1 /qfs1 #] sls -D ./poolenode/poolenode_clone/QPC009/1106744052.0
./poolenode/poolenode_clone/QPC009/1106744052.0:
mode: -rw------- links: 1 owner: root group: other
length: 2060386304 admin id: 0 inode: 5109670.4
damaged; offline;
copy 1: ----D Nov 22 02:47 25f71.1 ti 100867
copy 2: -r--D Nov 22 02:47 3fc58.1 ti 100858
access: Nov 22 02:47 modification: Nov 22 01:53
changed: Nov 22 01:53 attributes: Nov 22 01:52
creation: Nov 22 01:52 residence: Nov 28 20:00
[root@samqfs1 /qfs1 #] sls -D ./cubs/cubsdb1/HPPRD_18Nov2008/oracle/hrapp1.dbf.Z
./cubs/cubsdb1/HPPRD_18Nov2008/oracle/hrapp1.dbf.Z:
mode: -rw-r--r-- links: 1 owner: oracle group: dba
length: 128595115 admin id: 0 inode: 1221596.5
damaged; offline;
copy 1: ----D Nov 20 02:46 ffc7.1 ti 100846
copy 2: -r--D Nov 20 02:46 58e9.1 ti 100858
access: Nov 19 02:01 modification: Nov 18 00:15
changed: Nov 20 02:25 attributes: Nov 18 01:42
creation: Nov 18 01:42 residence: Dec 3 11:32
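(All of the stragglers show the same pattern: copies flagged damaged, i.e. the trailing D in the copy flags above, which I assume is what keeps them counted as active. To check every file from the sfind list without eyeballing each one, a small parser over `sls -D` output works; a minimal sketch, assuming the copy-line layout shown above with `ti <vsn>` at the end.)

```python
import re

# A copy line from `sls -D` looks like:
#   copy 2:      -r--D Nov 20 02:46     58e9.1 ti 100858
# Fields assumed: copy number, flags, month day hh:mm, position, media, VSN.
COPY_RE = re.compile(
    r"copy (\d+):\s+(\S+)\s+\w+ \d+ \d+:\d+\s+\S+\s+\w+\s+(\d+)")

def damaged_copies(sls_output, vsn):
    """Return [(copy_number, flags)] for damaged copies on the given VSN."""
    hits = []
    for line in sls_output.splitlines():
        m = COPY_RE.search(line)
        if m and m.group(3) == vsn and m.group(2).endswith("D"):
            hits.append((int(m.group(1)), m.group(2)))
    return hits

# Example against the hrapp1.dbf.Z output above:
sample = """\
copy 1:      ----D Nov 20 02:46     ffc7.1 ti 100846
copy 2:      -r--D Nov 20 02:46     58e9.1 ti 100858
"""
print(damaged_copies(sample, "100858"))  # [(2, '-r--D')]
```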
[root@samqfs1 /qfs1 #] showqueue -v
Filesystem qfs1:
Scan list
0 2008-12-08 09:26:47 qfs1 12__
poolenode/poolenode_clone/QPC011
1 2008-12-08 09:52:21 qfs1 12__
itcback/itcback_clone/QFC007
2 2009-12-08 09:26:07 background ---- .
Archive requests
qfs1.qfs1.1.0 create 2008-12-08 09:26:14
files:6 space: 33.753M flags:
Start archive at 2008-12-08 10:26:14 | 500000 files | 32.0G bytes
type:f ino:5088249 s:0/f:0 space: 32.500k time:1228744507
priority:0
poolenode/poolenode_clone/QPC011/4122251112.0
type:f ino:5088248 s:0/f:0 space: 6.094M time:1228744510
priority:0
poolenode/poolenode_clone/QPC011/4105473898.0
type:f ino:5088247 s:0/f:0 space: 32.500k time:1228744508
priority:0
poolenode/poolenode_clone/QPC011/4021587822.0
type:f ino:5088245 s:0/f:0 space: 32.500k time:1228744509
priority:0
poolenode/poolenode_clone/QPC011/4055142254.0
type:f ino:5088244 s:0/f:0 space: 27.532M time:1228744517
priority:0
poolenode/poolenode_clone/QPC011/3988033391.0
type:f ino:5088243 s:0/f:0 space: 32.500k time:1228744513
priority:0
poolenode/poolenode_clone/QPC011/3954478960.0
qfs1.qfs1.2.1 create 2008-12-08 09:26:14
files:6 space: 33.753M flags:
Start archive at 2008-12-08 10:26:14 | 500000 files | 32.0G bytes
type:f ino:5088249 s:0/f:0 space: 32.500k time:1228744507
priority:0
poolenode/poolenode_clone/QPC011/4122251112.0
type:f ino:5088248 s:0/f:0 space: 6.094M time:1228744510
priority:0
poolenode/poolenode_clone/QPC011/4105473898.0
type:f ino:5088247 s:0/f:0 space: 32.500k time:1228744508
priority:0
poolenode/poolenode_clone/QPC011/4021587822.0
type:f ino:5088245 s:0/f:0 space: 32.500k time:1228744509
priority:0
poolenode/poolenode_clone/QPC011/4055142254.0
type:f ino:5088244 s:0/f:0 space: 27.532M time:1228744517
priority:0
poolenode/poolenode_clone/QPC011/3988033391.0
type:f ino:5088243 s:0/f:0 space: 32.500k time:1228744513
priority:0
poolenode/poolenode_clone/QPC011/3954478960.0
--
mike cannon
mikec@clemson.edu
864.650.2577 (cell)
864.656.3809 (office)
computing & information technology
340 computer court
anderson, sc 29625