First, Jon, sorry about the long lines.  In 10 years I had never heard of an
email client that didn't at least wrap long lines, if not automatically add
newlines.  I'll try to be more careful.

Next, Joshua: I missed the NOTES section, sorry.  Here it is:

===
NOTES:
  planner: Last full dump of localhost://ptolemy/user1 on tape  overwritten in 1 run.
  planner: Last full dump of localhost://ptolemy/user2 on tape  overwritten in 1 run.
  planner: Adding new disk localhost://cassini/d.
  planner: Adding new disk localhost://peisin/c$.
  planner: Adding new disk localhost://peisin/d$.
  taper: tape DailySet1-17 kb 5373824 fm 4 writing file: Input/output error
  driver: going into degraded mode because of tape error.

Does this mean that it only managed to write about 5 GB (5373824 kB) before
the I/O error?
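
For what it's worth, here's my back-of-the-envelope reading of that taper
line (just my own arithmetic, assuming "kb" in the log really means
kilobytes and "fm 4" means four filemarks):

```python
# Sanity check of the taper NOTES line "kb 5373824 fm 4".
# Assumption: "kb" is kilobytes written before the I/O error.

kb_written = 5_373_824                   # from the taper line above
gb_written = kb_written / (1024 * 1024)  # kB -> GB (binary units)
print(f"written before the error: {gb_written:.2f} GB")
```

That comes out to about 5.12 GB, which is at least in the same ballpark as
the 4590.4 MB "Tape Size" figure in the report below.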

I don't see anything in /var/log/messages around the time of this backup, so
I suppose I'll have to run another with the express purpose of hunting for
fresh log results... :)

As for tape capacity, the 225m AME tapes (Exabyte brand) we're using with
the Mammoth-2 drive are rated for 60 GB uncompressed.  Based mainly on the
FAQ-o-Matic, my tapetype entry is this:

define tapetype EXB-M2 {
    comment "Exabyte M2 drive"
    length 60000 mbytes
    filemark 200 kbytes
    speed 3300 kbytes
}
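
(For my own sanity, a quick check of what those numbers imply.  This is just
my arithmetic on the values in the entry above, not anything Amanda itself
computes:)

```python
# Back-of-the-envelope implications of the EXB-M2 tapetype entry above.
length_kb = 60000 * 1024   # "length 60000 mbytes", expressed in kB
speed_kbps = 3300          # "speed 3300 kbytes" (per second)
filemark_kb = 200          # "filemark 200 kbytes"

hours_to_fill = length_kb / speed_kbps / 3600
print(f"time to fill the tape at native speed: {hours_to_fill:.1f} hours")

# Filemark overhead looks negligible at this scale.
overhead_pct = 100 * filemark_kb / length_kb * 100
print(f"filemark overhead for 100 dump files: {overhead_pct:.3f}%")
```

So roughly five hours to fill the tape at the rated speed, and even a
hundred filemarks cost well under 0.1% of the tape.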

I have no idea if that is 100% correct, however: the tapetype program that
comes with the source gives me different results (and very, very large
filemark numbers) every time I run it.

As for hardware compression, I have it turned off.  Here is a copy of the
dump file generated by M2Monitor, Exabyte's monitoring tool for the
Mammoth-2 drives:

   Buffered Mode = BUFFERED
   Data Compression Enable = OFF
   Diagnostics: Tape History Log = DISABLE
   Disconnect Control = 0x08
      Modify Data Pointers = OFF
      Disconnect Immediate = ON
      Disconnect = NORMAL
   Gap Threshold = 0x00
   Logical Block Size = 0x00000400
   Maximum Burst Length = 0x00000000
   Mode Sense: Default Density = OFF
   Motion Threshold = 0x80
   Operator: Button Operation = WAIT_DONE
   Operator: Cleaning Mode = LIGHTS
   Operator: LCD Language = ENGLISH
   Product Identification = Mammoth2
   Reporting Modes = 0x60
      Setmark Option = ON
      Early Warning Option = OFF
   Request Sense: Clearing Sense Data = CLEAR
   Request Sense: EOM bit at LBOP = OFF
   SCSI: Command Queuing = NORMAL
   SCSI: Parity Error Handling = FAIL_RW
   SCSI: Synchronous Negotiation = RECEIVE
   SmartClean Mode = CLEAN_NORMAL
   Write Delay Time = 0x0000

I don't know what a lot of that stuff means, but I can see that compression
is turned off...  :P

Finally, if there's a way to contribute to the Amanda project somehow
(probably monetarily, because, as might be abundantly clear, I'm no device
programmer...), can someone point me to a link?

Thanks for your help, Jon and Joshua.

Jesse

> On Wed, 26 Jun 2002 at 10:13am, Jesse Griffis wrote
> 
> > I am getting an odd end-of-tape error on every tape, every time I try 
> > to write a very large file.  Below is the Amanda report for the last 
> > attempt I tried.  I'm chopping out some of the extraneous detail (the 
> 
> > 
> > These dumps were to tape DailySet1-17.
> > *** A TAPE ERROR OCCURRED: [[writing file: Input/output error]].
> 
> Note that here amanda is just telling you what the OS told it.  For more 
> detail on exactly what sort of I/O error occurred, look in 
> /var/log/messages.
> 
> > Tape Time (hrs:min)        0:12       0:12       0:00
> > Tape Size (meg)          4590.4     4590.4        0.0
> > Tape Used (%)               7.7        7.7        0.0
> > Filesystems Taped             3          3          0
> > Avg Tp Write Rate (k/s)  6776.7     6776.7        -- 
> 
> In here there should be a NOTES section.  That will tell you *exactly* how
> much data got to tape before the I/O error.  As Jon mentioned, make sure
> that you're not using hardware compression, as you are using software
> compression.
> 
> -- 
> Joshua Baker-LePain
> Department of Biomedical Engineering
> Duke University
> 
