In case this jars any clever thoughts: way back when, I remember when my
size estimates got just big enough that the initial connection started by
the server timed out. The client then had to create a new connection to
send the result, and it didn't have the right permission to initiate that
connection itself. Something to look for.
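
A quick way to check that side of things (assuming the config name DailySet1
from the log below, and the usual Amanda client user, often amandabackup or
amanda; adjust the names for your site):

   # from the server, run only the client-side checks
   amcheck -c DailySet1

   # on the client, confirm the server host and Amanda user are allowed in
   cat ~amandabackup/.amandahosts
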
Deb Baddorf

On Jul 7, 2011, at 2:14 PM, Jean-Louis Martineau wrote:

> Can you post the exact error message you get from Amanda?
> You said 'broken pipe'; where exactly do you see it?
> 
> Reporting 'the tar estimate dies' or 'broken pipe' is not much help unless
> you show how and where you get those messages.
> 
> The sendsize debug file looks good; please post the debug file that shows
> the error you get.
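> 
> For example (the client's debug files appear to live under /tmp/amanda,
> judging by the exclude-file paths in the log below), something like this
> should turn up everything written around the time of the failed run:
> 
>    find /tmp/amanda -name '*.debug' -mmin -1440 -ls
> 
> The sendsize and amandad files with timestamps matching that run are the
> interesting ones.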
> 
> Jean-Louis
> 
> On 07/07/2011 02:30 PM, Brian Cuttler wrote:
>> 
>> Tim,
>> 
>> That is weird. I've got a couple of Solaris 10/x86 systems that
>> are all ZFS except for the boot partitions, which are UFS.
>> 
>> My systems have also recently begun failing on the boot partitions,
>> even though I'm using server estimates. The ZFS-based DLEs are all
>> backing up just fine.
>> 
>> That is 1 UFS partition versus 120 ZFS partitions.
>> 
>> The errors I'm seeing are of the "broken pipe" type, though.
>> 
>> I'm running gtar 1.22.
>> 
>> Please let me know if you find the cause for your problems.
>> 
>> 
>> On Thu, Jul 07, 2011 at 12:35:24PM -0500, Tim Johnson wrote:
>>> 
>>>   Hello all,
>>> 
>>>    Sorry to post this question, but I have run out of ideas.
>>> I am running a 2.6.1p2-3 server, with the same version on the clients.
>>> Recently a couple of clients have not been able to back up their
>>> / file system. All of the other file systems on them back
>>> up fine (/home, /boot). After about 30 minutes the tar
>>> process just dies and the estimate doesn't seem to get to the
>>> server. The / is only about 10 GB and until recently has been
>>> fine. These are old computers; however, the /home partition is
>>> 20 GB and has been fine.
>>> 
>>>    I am using tar (now 1.20, after downgrading from 1.23 for
>>> testing).
>>> I have also tried using just calcsize to see if it made
>>> a difference. The etimeout is set to 8000 and dtimeout to 3000;
>>> the relevant server settings look roughly like the sketch below.
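>>> 
>>> (Sketch only; the dumptype name and compression line are illustrative,
>>> not copied from my actual config.)
>>> 
>>> # amanda.conf (server side): the timeouts as currently set
>>> etimeout 8000      # seconds the planner allows per DLE when gathering estimates
>>> dtimeout 3000      # idle seconds allowed on the dump data connection
>>> 
>>> # hypothetical dumptype used for the estimate test
>>> define dumptype root-estimate-test {
>>>     program "GNUTAR"
>>>     estimate calcsize    # or "server" to avoid running tar on the client at all
>>>     compress client fast
>>> }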
>>> 
>>> The tail of the sendsize debug file is pasted below.
>>> 
>>> Thanks for any input
>>> 
>>> 1310053628.431146: sendsize: estimate size for /home/hutton/ level 2: 1190 KB
>>> 1310053628.431246: sendsize: waiting for runtar "/home/hutton/" child
>>> 1310053628.431317: sendsize: after runtar /home/hutton/ wait
>>> 1310053628.432226: sendsize: done with amname /home/hutton/ dirname /home/hutton spindle -1
>>> 1310053628.433612: sendsize: waiting for any estimate child: 1 running
>>> 1310053628.434024: sendsize: calculating for amname /, dirname /, spindle -1 GNUTAR
>>> 1310053628.434155: sendsize: getting size via gnutar for / level 0
>>> 1310053628.436580: sendsize: pipespawnv: stdoutfd is 3
>>> 1310053628.437321: sendsize: Spawning "/usr/lib/amanda/runtar runtar DailySet1 /bin/tar --create --file /dev/null --numeric-owner --directory / --one-file-system --listed-incremental /var/lib/amanda/gnutar-lists/hutton.earth.northwestern.edu__0.new --sparse --ignore-failed-read --totals --exclude-from /tmp/amanda/sendsize._.20110707104708.exclude ." in pipeline
>>> 1310053983.559905: sendsize: /bin/tar: ./var/run/cups/cups.sock: socket ignored
>>> 1310053983.563763: sendsize: /bin/tar: ./var/run/dbus/system_bus_socket: socket ignored
>>> 1310053983.565609: sendsize: /bin/tar: ./var/run/dirmngr/socket: socket ignored
>>> 1310053984.228780: sendsize: Total bytes written: 9519984640 (8.9GiB, 26MiB/s)
>>> 1310053984.235327: sendsize: .....
>>> 1310053984.235560: sendsize: estimate time for / level 0: 355.799
>>> 1310053984.235595: sendsize: estimate size for / level 0: 9296860 KB
>>> 1310053984.235661: sendsize: waiting for runtar "/" child
>>> 1310053984.235782: sendsize: after runtar / wait
>>> 1310053984.243582: sendsize: getting size via gnutar for / level 1
>>> 1310053984.602585: sendsize: pipespawnv: stdoutfd is 3
>>> 1310053984.603414: sendsize: Spawning "/usr/lib/amanda/runtar runtar DailySet1 /bin/tar --create --file /dev/null --numeric-owner --directory / --one-file-system --listed-incremental /var/lib/amanda/gnutar-lists/hutton.earth.northwestern.edu__1.new --sparse --ignore-failed-read --totals --exclude-from /tmp/amanda/sendsize._.20110707105304.exclude ." in pipeline
>>> 1310054113.840859: sendsize: /bin/tar: ./var/run/cups/cups.sock: socket ignored
>>> 1310054113.847115: sendsize: /bin/tar: ./var/run/dbus/system_bus_socket: socket ignored
>>> 1310054113.849021: sendsize: /bin/tar: ./var/run/dirmngr/socket: socket ignored
>>> 1310054114.177646: sendsize: Total bytes written: 408217600 (390MiB, 3.1MiB/s)
>>> 1310054114.183554: sendsize: .....
>>> 1310054114.183736: sendsize: estimate time for / level 1: 129.581
>>> 1310054114.183769: sendsize: estimate size for / level 1: 398650 KB
>>> 1310054114.183806: sendsize: waiting for runtar "/" child
>>> 1310054114.186325: sendsize: after runtar / wait
>>> 1310054114.193680: sendsize: done with amname / dirname / spindle -1
>>> 1310054114.207862: sendsize: pid 2353 finish time Thu Jul  7 10:55:14 2011
>>> 
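>>> If it would help narrow this down, I can re-run the tar invocation from the
>>> Spawning line above by hand (against a scratch --listed-incremental file, and
>>> without the temporary exclude file) to see whether tar itself hangs or dies,
>>> e.g.:
>>> 
>>>    # as root on the client; /tmp/estimate-test.snar is just a throwaway snapshot file
>>>    time /bin/tar --create --file /dev/null --numeric-owner --directory / \
>>>        --one-file-system --listed-incremental /tmp/estimate-test.snar \
>>>        --sparse --ignore-failed-read --totals .
>>> 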
>> ---
>>    Brian R Cuttler                 brian.cutt...@wadsworth.org
>>    Computer Systems Support        (v) 518 486-1697
>>    Wadsworth Center                (f) 518 473-6384
>>    NYS Department of Health        Help Desk 518 473-0773
>> 
> 
