Hi Mike,

I was just in Seneca visiting friends at Thanksgiving.  Go Tigers!

I've seen this before - the files that won't rearchive are offline and marked as damaged. Here's a quick step-by-step I wrote up for my guys to get through it:

===

(Paraphrased from the SAM-FS Disaster Recovery doc:  
http://docs.sun.com/app/docs/doc/817-4090-10)

*** Identify the tape and position of the damaged copy:

sls -D /sam/jobs/project1/importantfile.mov

/sam/jobs/project1/importantfile.mov:
mode: -rwxrwxr-x  links:   1  owner: bmah      group: Prod
length:    593958  admin id:      0  inode:   4579917.9
damaged;  offline;
copy 2: ----D Dec  2 03:26     d7a01.2ef2592 li IF0000
copy 3: ----D Dec  2 07:20    291246.2ef2592 li IF0001
access:      Dec 21 15:47  modification: Dec 21 18:50
changed:     Dec 21 18:50  attributes:   Nov 29 02:10
creation:    Nov 29 02:10  residence:    Dec 16 03:15

In this example, copy 2 and copy 3 are both damaged. We will attempt to restore copy 2, which is on tape IF0000 at position d7a01.
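If you have a lot of damaged files, you can pull the position and VSN out of the sls -D output mechanically. Here's a minimal awk sketch using the copy line from the example above - the field layout (position.offset, media type, VSN as the last three fields) is inferred from that output, so check it against your own sls output before scripting around it:

```shell
# Extract tape position, media type, and VSN from an "sls -D" copy line.
# Field layout inferred from the example output above -- verify locally.
copy_line='copy 2: ----D Dec  2 03:26     d7a01.2ef2592 li IF0000'
echo "$copy_line" | awk '{
    split($(NF-2), pos, ".")   # "d7a01.2ef2592" -> position "d7a01"
    printf "position=0x%s media=%s vsn=%s\n", pos[1], $(NF-1), $NF
}'
# prints: position=0xd7a01 media=li vsn=IF0000
```

In practice you'd feed it live output, e.g. sls -D file | awk '/^copy 2:/ { ... }'.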

     * Create a restore directory -- preferably in a non-archived location, such as /sam/temp:
           o mkdir /sam/temp/restore
           o cd /sam/temp/restore

     * Create a "request" file that points to the appropriate position on tape:
           o request -p 0xd7a01 -m li -v IF0000 /sam/temp/restore/requestfile1

     * Run the star command to read the entire contents of the tarball back, using the "request file" as the pointer:
           o star -xv -b 512 -f /sam/temp/restore/requestfile1
           o Note: in some cases, if you encounter an error, you may need to use star -xiv instead. See the star man page for details.

This should read back all the files that were archived in that tarball. When it completes, check that the file in question was restored and is intact, then replace the damaged file with the restored copy. Finally, remove the rest of the restored data from /sam/temp/restore.
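For repeat use, the steps above can be collected into one script. This is just a sketch, not a tested tool: the position, media type, and VSN below are the values from this example, so substitute the ones from your own sls -D output before running it.

```shell
#!/bin/sh
# Sketch of the restore sequence above. Values are from this example
# only -- replace them with the position/media/VSN from your sls -D run.
set -eu

POS=0xd7a01        # tape position of the damaged copy (from sls -D)
MEDIA=li           # media type column from the copy line
VSN=IF0000         # tape VSN
DIR=/sam/temp/restore

mkdir -p "$DIR"
cd "$DIR"

# Build the request file pointing at the archive tarball on tape.
request -p "$POS" -m "$MEDIA" -v "$VSN" "$DIR/requestfile1"

# Read the whole tarball back; switch to star -xiv if star errors out.
star -xv -b 512 -f "$DIR/requestfile1"
```

After it finishes, verify the file you care about, copy it into place, and clean out $DIR as described above.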

===

On Dec 8, 2008, at 10:29 AM, Mike Cannon wrote:

> I have a recycler issue.  It does not finish draining tape in a  
> timely fashion if at all.  In the past I have had a similar problem  
> but in those cases "showqueue -v" would show the files with an  
> issues and explain what the issue was.  I don't see the issue here.   
> Sometimes it works like I expect, two recycler runs and tape has  
> been labeled.  Other times it goes on for months without completely  
> draining...
>
> Here is an example of where once the tape was flagged to recycle it  
> has not finished after 8 runs of the recycler.
>
> [root at samqfs1 /qfs1 #] cat /var/tmp/recycler.log | grep ti:100858 |  
> grep archive
> Waiting for VSN ti:100858 to drain, it still has 456 active archive  
> copies.
> Waiting for VSN ti:100858 to drain, it still has 17 active archive  
> copies.
> Waiting for VSN ti:100858 to drain, it still has 17 active archive  
> copies.
> Waiting for VSN ti:100858 to drain, it still has 17 active archive  
> copies.
> Waiting for VSN ti:100858 to drain, it still has 17 active archive  
> copies.
> Waiting for VSN ti:100858 to drain, it still has 17 active archive  
> copies.
> Waiting for VSN ti:100858 to drain, it still has 17 active archive  
> copies.
> Waiting for VSN ti:100858 to drain, it still has 17 active archive  
> copies.
>
> sfind confirms the 17 active archive copies.
> [root at samqfs1 /qfs1 #] sfind . -vsn 100858
> ./cubs/cubsdb1/HPPRD_18Nov2008/oracle/hrapp.dbf.Z
> ./cubs/cubsdb1/HPPRD_18Nov2008/oracle/hrapp1.dbf.Z
> ./cubs/cubsdb1/HPPRD_18Nov2008/oracle/hrlarge.dbf.Z
> ./cubs/cubsdb1/HPPRD_18Nov2008/oracle/psimage.dbf.Z
> ./cubs/cubsdb1/HPPRD_18Nov2008/oracle/psindex.dbf.Z
> ./cubs/cubsdb1/HPPRD_18Nov2008/oracle/pstemp.dbf.Z
> ./cubs/cubsdb1/HPPRD_18Nov2008/oracle/psundo.Z
> ./cubs/cubsdb1/HPPRD_18Nov2008/oracle/pylarge.dbf.Z
> ./cubs/cubsdb1/HPPRD_20Nov2008/oracle/hphist.dbf.Z
> ./cubs/cubsdb1/HPPRD_20Nov2008/oracle/psindex.dbf.Z
> ./cubs/cubsdb1/HPPRD_20Nov2008/oracle/pstemp.dbf.Z
> ./cubs/cubsdb1/HPPRD_20Nov2008/oracle/psundo.Z
> ./cubs/cubsdb1/HPPRD_20Nov2008/oracle/pylarge.dbf.Z
> ./cubs/cubsdb4/PSDWH_19Nov2008/oracle/dw_data.dbf.Z
> ./cubs/cubsdb4/PSDWH_19Nov2008/oracle/temp.dbf.Z
> ./cubs/cubsdb4/PSDWH_19Nov2008/oracle/undo.dbf.Z
> ./poolenode/poolenode_clone/QPC009/1106744052.0
>
> [root at samqfs1 /qfs1 #] sls -D ./poolenode/poolenode_clone/QPC009/1106744052.0
> ./poolenode/poolenode_clone/QPC009/1106744052.0:
>   mode: -rw-------  links:   1  owner: root      group: other
>   length: 2060386304  admin id:      0  inode:   5109670.4
>   damaged;  offline;
>   copy 1: ----D Nov 22 02:47     25f71.1    ti 100867
>   copy 2: -r--D Nov 22 02:47     3fc58.1    ti 100858
>   access:      Nov 22 02:47  modification: Nov 22 01:53
>   changed:     Nov 22 01:53  attributes:   Nov 22 01:52
>   creation:    Nov 22 01:52  residence:    Nov 28 20:00
> [root at samqfs1 /qfs1 #] sls -D ./cubs/cubsdb1/HPPRD_18Nov2008/oracle/hrapp1.dbf.Z
> ./cubs/cubsdb1/HPPRD_18Nov2008/oracle/hrapp1.dbf.Z:
>   mode: -rw-r--r--  links:   1  owner: oracle    group: dba
>   length: 128595115  admin id:      0  inode:   1221596.5
>   damaged;  offline;
>   copy 1: ----D Nov 20 02:46      ffc7.1    ti 100846
>   copy 2: -r--D Nov 20 02:46      58e9.1    ti 100858
>   access:      Nov 19 02:01  modification: Nov 18 00:15
>   changed:     Nov 20 02:25  attributes:   Nov 18 01:42
>   creation:    Nov 18 01:42  residence:    Dec  3 11:32
> [root at samqfs1 /qfs1 #] sls -D ./poolenode/poolenode_clone/QPC009/1106744052.0
> ./poolenode/poolenode_clone/QPC009/1106744052.0:
>   mode: -rw-------  links:   1  owner: root      group: other
>   length: 2060386304  admin id:      0  inode:   5109670.4
>   damaged;  offline;
>   copy 1: ----D Nov 22 02:47     25f71.1    ti 100867
>   copy 2: -r--D Nov 22 02:47     3fc58.1    ti 100858
>   access:      Nov 22 02:47  modification: Nov 22 01:53
>   changed:     Nov 22 01:53  attributes:   Nov 22 01:52
>   creation:    Nov 22 01:52  residence:    Nov 28 20:00
>
> [root at samqfs1 /qfs1 #] showqueue -v
> Filesystem qfs1:
> Scan list
>   0 2008-12-08 09:26:47 qfs1             12__ poolenode/poolenode_clone/QPC011
>   1 2008-12-08 09:52:21 qfs1             12__ itcback/itcback_clone/QFC007
>   2 2009-12-08 09:26:07 background       ---- .
> Archive requests
> qfs1.qfs1.1.0 create 2008-12-08 09:26:14
>     files:6 space:  33.753M flags:
>     Start archive at 2008-12-08 10:26:14 | 500000 files |   32.0G  
> bytes
>     type:f ino:5088249 s:0/f:0 space:  32.500k time:1228744507  
> priority:0
>         poolenode/poolenode_clone/QPC011/4122251112.0
>     type:f ino:5088248 s:0/f:0 space:   6.094M time:1228744510  
> priority:0
>         poolenode/poolenode_clone/QPC011/4105473898.0
>     type:f ino:5088247 s:0/f:0 space:  32.500k time:1228744508  
> priority:0
>         poolenode/poolenode_clone/QPC011/4021587822.0
>     type:f ino:5088245 s:0/f:0 space:  32.500k time:1228744509  
> priority:0
>         poolenode/poolenode_clone/QPC011/4055142254.0
>     type:f ino:5088244 s:0/f:0 space:  27.532M time:1228744517  
> priority:0
>         poolenode/poolenode_clone/QPC011/3988033391.0
>     type:f ino:5088243 s:0/f:0 space:  32.500k time:1228744513  
> priority:0
>         poolenode/poolenode_clone/QPC011/3954478960.0
>
> qfs1.qfs1.2.1 create 2008-12-08 09:26:14
>     files:6 space:  33.753M flags:
>     Start archive at 2008-12-08 10:26:14 | 500000 files |   32.0G  
> bytes
>     type:f ino:5088249 s:0/f:0 space:  32.500k time:1228744507  
> priority:0
>         poolenode/poolenode_clone/QPC011/4122251112.0
>     type:f ino:5088248 s:0/f:0 space:   6.094M time:1228744510  
> priority:0
>         poolenode/poolenode_clone/QPC011/4105473898.0
>     type:f ino:5088247 s:0/f:0 space:  32.500k time:1228744508  
> priority:0
>         poolenode/poolenode_clone/QPC011/4021587822.0
>     type:f ino:5088245 s:0/f:0 space:  32.500k time:1228744509  
> priority:0
>         poolenode/poolenode_clone/QPC011/4055142254.0
>     type:f ino:5088244 s:0/f:0 space:  27.532M time:1228744517  
> priority:0
>         poolenode/poolenode_clone/QPC011/3988033391.0
>     type:f ino:5088243 s:0/f:0 space:  32.500k time:1228744513  
> priority:0
>         poolenode/poolenode_clone/QPC011/3954478960.0
>
> --
> mike cannon
> mikec at clemson.edu
> 864.650.2577 (cell)
> 864.656.3809 (office)
>
> computing & information technology
> 340 computer court
> anderson, sc 29625
>
> _______________________________________________
> sam-qfs-discuss mailing list
> sam-qfs-discuss at opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/sam-qfs-discuss
