Hello John,
 
I'm testing with tar on a new tape now. That should be a sufficient test, since
the problem occurs on all newly added tapes. I'm testing it on a different tape
drive for now, because the full backup from the weekend is still running on the
drive in question. I will run it on that drive too, to see if there is a
difference.
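For reference, here is roughly what I mean by the fill test. The device path
and the use of incompressible data are my assumptions, and since this would
overwrite the loaded tape, DRY_RUN=1 only prints the commands:

```shell
# Rough capacity fill test, assuming the drive is /dev/nst0 and the loaded
# tape is scratch (running this for real OVERWRITES it).
DRY_RUN="${DRY_RUN:-1}"
run() { if [ "$DRY_RUN" = 1 ]; then echo "+ $*"; else "$@"; fi; }

run mt -f /dev/nst0 rewind
# Stream incompressible data so drive compression cannot inflate the
# apparent capacity; dd stops with ENOSPC at end-of-medium.
run sh -c 'dd if=/dev/urandom of=/dev/nst0 bs=64512'
run mt -f /dev/nst0 status
```

On a healthy LTO-3 tape, dd should report something close to the 400 GB native
capacity before hitting end-of-medium.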
 
There are no errors in the bacula logs. I searched the logs specifically for
the volumes/tapes that were written with an insufficient amount of bytes, and
for some other lines that might be suspect.
 
However, I do recall that this error message appeared while labelling new tapes
some time back. It was not in the logs, but I remembered some of the wording
and I think it was this one (copied from the internet):
 
block.c:994 Read error on fd=5 at file:blk 0:0 on device "Drive-1" (/dev/nst0). ERR=Input/output error.
3000 OK label. VolBytes=64512 DVD=0 Volume="MSR100L3" Device="Drive-1" (/dev/nst0)
Catalog record for Volume "MSR100L3", Slot 1 successfully created.
But the insufficient amount of bytes being written also occurs on tapes that do
not give that error message.
 
I used the tapeinfo command; the drive is compression capable and compression
is enabled:
 
DataCompEnabled: yes
DataCompCapable: yes
DataDeCompEnabled: yes
I would guess that this does not turn itself on and off at certain moments.
The weird thing is that on the tapes we have had from the beginning, the byte
count written to the tapes is sufficient.
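For what it's worth, the compression state can also be checked (and forced)
from the OS side, independent of what Bacula sees. The device nodes here are my
assumptions (check lsscsi -g for the SCSI generic node), and DRY_RUN=1 only
prints the commands:

```shell
# Check and explicitly set drive compression outside Bacula. /dev/sg1
# (the SCSI generic node used by tapeinfo, from the mtx package) and
# /dev/nst0 are assumed device nodes -- adjust for your system.
DRY_RUN="${DRY_RUN:-1}"
run() { if [ "$DRY_RUN" = 1 ]; then echo "+ $*"; else "$@"; fi; }

run tapeinfo -f /dev/sg1           # prints DataCompEnabled/DataCompCapable
run mt -f /dev/nst0 compression 1  # mt-st: explicitly enable compression
```

If the drive were silently dropping compression, forcing it on before the job
would at least make that variable constant.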
 
Also, tapes written with an insufficient byte count get unloaded, and the next
one is loaded, at a much lower block number:
 
01-Aug 22:52 fileserver-sd JobId 1043: End of Volume "XO7522L3" at 314:8710 on device "Tape1" (/dev/st0). Write of 64512 bytes got -1.
01-Aug 22:52 fileserver-sd JobId 1043: Re-read of last block succeeded.
01-Aug 22:52 fileserver-sd JobId 1043: End of medium on Volume "XO7522L3" Bytes=314,541,803,520 Blocks=4,875,709 at 01-Aug-2009 22:52.
 
(tape gets changed)
 
as opposed to:
 
15-Jul 01:00 fileserver-sd JobId 912: End of Volume "UP7499L3" at 782:10712 on device "Tape1" (/dev/st0). Write of 64512 bytes got -1.
15-Jul 01:00 fileserver-sd JobId 912: Re-read of last block succeeded.
15-Jul 01:00 fileserver-sd JobId 912: End of medium on Volume "UP7499L3" Bytes=782,641,004,544 Blocks=12,131,711 at 15-Jul-2009 01:00.
 
(tape gets changed)
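Doing the maths on those two log excerpts (Blocks times Bacula's 64512-byte
block size, converted to GB) shows how far apart they are:

```shell
# Sanity-check the SD log lines: bytes written = blocks * 64512. An LTO-3
# tape holds roughly 400 GB native, ~800 GB at the nominal 2:1 compression.
BLOCK=64512
SHORT_BLOCKS=4875709       # XO7522L3, unloaded early
FULL_BLOCKS=12131711       # UP7499L3, written normally

echo "short tape: $(( SHORT_BLOCKS * BLOCK / 1000000000 )) GB"
echo "full tape:  $(( FULL_BLOCKS  * BLOCK / 1000000000 )) GB"
```

The short tape ends at about 314 GB, below even the 400 GB native capacity,
which would suggest the drive or media is reporting an early end-of-medium
rather than compression simply being off.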
 
I will post the results of the test with tar as soon as possible.
 
Thanks,
 
Tony


>>> John Drescher <dresche...@gmail.com> 30-7-2009 14:28 >>>
On Thu, Jul 30, 2009 at 8:23 AM, John Drescher<dresche...@gmail.com> wrote:
> On Sat, Jul 18, 2009 at 6:03 AM, ict Mapper ict
> department<i...@mapperlithography.com> wrote:
>> Bacula version: (26 July 2008) x86_64-pc-linux-gnu debian 4.0
>> backup unit/changer: HP storageworks MSL2024 LTO3
>>
>> Hello,
>>
>> We have a really weird problem. Tapes are not fully written, but not in all
>> cases. The problem seems to concentrate around our monthly backups.
>> When the tapes are labelled (by barcode), put in the "monthly" pool and
>> written to, the tapes are sometimes filled with less than 300 GB, sometimes
>> only slightly over 200 GB, before the next tape is loaded. Too little in
>> any case. When running backups of the same files (both full backups, only
>> in this case the weekly backup), the tapes easily fill up to 500 GB or so,
>> sometimes even more than 600.
>>
>> What is tried until now:
>>
>> - labelling the tapes and putting them in the weekly pool first.
>>  later changing them over to the monthly pool and setting the
>>  retention period to that of a month again. -> No help...
>>
>> - labelling the tapes and putting them in the weekly pool first.
>>  later changing them over to the monthly pool but leaving the
>>  retention period of the tapes alone until the backup finished.
>>  -> No help...
>>
>> - Changing weekly tapes that were written to in normal amounts
>>  earlier during the weekly backup with tapes that have only
>>  ever been used for the weekly backup -> that helped...
>>
>> So we can deduce that the problem does not lie in the retention period. The
>> problem also does not (at least not on its own) have to do with which pool
>> the tapes are placed in directly after labelling. However, if the tapes
>> written to by the weekly full backup jobs are later used for the monthly
>> jobs, then it works fine.
>>
>> We back up multiple servers to one big set of tapes, plus a mysql dump of
>> the bacula catalog db, and mark the last used tape as used.
>> The configuration is done per server (in separate files), where a backup
>> job, a restore job and the fileset are defined, along with a Schedule
>> directive that points to another file. That other file determines to which
>> pool should be written, whether a full or an incremental job runs, and at
>> which moment jobs start.
>>
>> The bacula-dir.conf file points to the server-specific files, and also to
>> the files holding the pool definitions.
>>
>> If any part of the configuration is interesting, let me know and I will
>> send it.
>>
>>
>> I hope someone has an idea what could be causing this problem.
>>
>
> http://www.mail-archive.com/bacula-users@lists.sourceforge.net/msg36461.html 
>
> That looked like a hardware problem to me. Did you try filling a tape
> with tar or dd?
>
> Other than that are there any errors in your bacula logs for these jobs?
>

Sorry, I did not notice that this mail was from 12 days ago. I just
woke up and my eyes have not fully adjusted..

John
_______________________________________________
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users
