Robert Nelson wrote:
> I'm looking into this.  I need to fix some other issues first.
>
>   
>> -----Original Message-----
>> From: [EMAIL PROTECTED] [mailto:bacula-users-
>> [EMAIL PROTECTED] On Behalf Of Doug Rintoul
>> Sent: Monday, April 23, 2007 10:29 AM
>> To: bacula-users@lists.sourceforge.net
>> Subject: Re: [Bacula-users] Still Unable To Truncate Files
>>
>> On Tue, 2007-04-03 at 01:45 -0700, Robert Nelson wrote:
>>>> Well I've done some testing. No matter what I did, I was not able to
>>>> trigger this using files < 2GB. I set the maximum volume size at 2GB,
>>>> so the files were all about 1.9ish GB. I let it run for many days,
>>>> backing up over 200GB of data, and it did not trigger the truncate
>>>> problem. I purged large jobs, which marked dozens of files to be
>>>> recycled, and it had no problem re-using those files. I would guess it
>>>> created 160 files and recycled at least 100 of them - some more than
>>>> once.
>>>>
>>>> I then used the label command to create 2 test volumes just over 2GB.
>>>> I was able to get the error easily in a day. It did recycle them a few
>>>> times, but eventually it tripped up. I would guess I recycled these two
>>>> volumes only 5-6 times before it couldn't truncate them.
>>>>
>>>> This is bizarre... any more ideas? I'm hoping I have it pegged this
>>>> time and it's not another false alarm.
>>>>
>>>> In light of this, do you still want the debugging output from those
>>>> specially compiled daemons?
>>>>
>>> If you have the debugging output available for the case when it failed,
>>> that would be useful. If not, hold off and I'll put in some more debug
>>> code to try and narrow it down further with this new information.
>>>
>> Has anything more been discovered about this issue? I am seeing it
>> regularly on my systems as well. There is no problem when volumes are
>> smaller than 2GB; however, volumes larger than 2GB trigger this bug. I
>> really do not want to set the maximum volume size to 2GB and have to
>> deal with 30+ volumes for one backup. I am currently resorting to
>> deleting the volume from the file system and creating a zero-length
>> file of the same name. Is there anything I can do to help debug?
>>
>> Doug Rintoul
>> CanIL.
>>
>>
I have been running my Bacula server with a small farm (5-6 servers)
through a few full backups and many incrementals since my original
e-mail. For my infrastructure, the 2GB file limit isn't really an
inconvenience, so I am leaving it at that.

I found the "2GB file" angle before I ran the debugging daemons, so I
never produced that output.

Just curious: why is the 2GB file limitation an issue for you? With
well-performing disks, file size shouldn't matter. Besides, I was under
the impression that smaller file sizes were better than a few huge files,
so that the restore process can seek to individual files faster, since
Bacula treats disk volumes like linear tapes. Maybe 2GB is a little TOO
small, but what would your target file size be?
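
For what it's worth, the manual workaround Doug described (deleting the
volume from the file system and recreating a zero-length file with the
same name) is easy enough to script while we wait for a real fix. Here is
a rough, untested Python sketch of that idea - the directory and volume
name shown are just placeholders, and you would want the volume purged in
the catalog and the storage daemon idle before touching the file:

#!/usr/bin/env python
# Sketch of the manual workaround: replace a file volume that Bacula
# failed to truncate with an empty file of the same name.
# Adjust the paths for your setup; this does not talk to the catalog.
import os
import sys

def recreate_volume(volume_dir, volume_name):
    path = os.path.join(volume_dir, volume_name)
    if not os.path.isfile(path):
        sys.exit("No such volume file: %s" % path)
    os.remove(path)            # delete the volume Bacula couldn't truncate
    open(path, "w").close()    # recreate it as a zero-length file

if __name__ == "__main__":
    # Example: python recreate_volume.py /var/bacula/volumes Vol-0042
    recreate_volume(sys.argv[1], sys.argv[2])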
