Re: strange twist on FAILED [data timeout] -- compress none not working correctly

2009-05-13 Thread Paul Bijnens

On 2009-05-13 00:24, James D. Freels wrote:

I got into debugging mode and discovered why this was happening.
Apparently this upgrade, or a recent change in AMANDA, now forces (or by
error) the data to be compressed with gzip before going from the client
to the server.  Even if I specify compress none (which I always have)
or compress client none and compress server none, the gzip is still
going on.  


I have never seen this before, and the gzip process is taking forever
for my large filesystems.  I can't seem to get it to stop.


Note that the index is always gzipped, even when you use compress none.
Did you come to this conclusion because there is a gzip process running?
Or do you indeed have two gzip processes, with one (or both?) slowing
everything down?
Or do you have a pathological filesystem where the names of the files
amount to more data than the contents of the files?
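For reference, the no-compression setup being described would look roughly like this in a dumptype in amanda.conf (the dumptype name is a placeholder; check your own config):

```
define dumptype user-tar-nocomp {
    program "GNUTAR"
    compress none    # no compression of the dump data itself
    index yes        # the index file is still stored gzipped regardless
}
```

So a single gzip process on the client can be perfectly normal even with compress none in effect.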

To hunt down what a client is doing, I often use lsof on the gnutar
process to find out which file it is currently reading.  Maybe you
got stuck on an unresponsive network filesystem (smb frequently
does that -- I avoid mounting smb filesystems since I got burned badly).
To get even more details, you can strace the process as well.
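Concretely, that hunt looks something like this on the client (the process name may be tar or gtar depending on the build; the pid is a placeholder):

```
# find the running gnutar process
pgrep -a gtar

# list its open files -- the last regular file listed is usually
# the one it is currently reading
lsof -p <pid-of-gtar>

# attach to it and watch system calls; a read() that never returns
# points at a hung filesystem
strace -p <pid-of-gtar>
```

If lsof shows a file on a network mount and strace shows a stalled read, you have found your culprit.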

--
Paul Bijnens, Xplanation Technology Services    Tel  +32 16 397.525
Interleuvenlaan 86, B-3001 Leuven, BELGIUM      Fax  +32 16 397.552
***
* I think I've got the hang of it now:  exit, ^D, ^C, ^\, ^Z, ^Q, ^^, *
* quit, ZZ, :q, :q!, M-Z, ^X^C, logoff, logout, close, bye, /bye, ~., *
* stop, end, ^]c, +++ ATH, disconnect,  halt,  abort,  hangup,  KJOB, *
* ^X^X,  :D::D,  kill -9 1,  kill -1 $$,  shutdown,  init 0,  Alt-F4, *
* Alt-f-e, Ctrl-Alt-Del, Alt-SysRq-reisub, Stop-A, AltGr-NumLock, ... *
* ...  Are you sure?  ...   YES   ...   Phew ...   I'm out  *
***


amanda and smbclient

2009-05-13 Thread Brandon Metcalf
I realize this is more of an smbclient issue, but I thought someone might
have an answer as to how files residing on Windows shares can be
backed up without first ensuring they are not in use.  The problem is

  sendbackup: info BACKUP=/usr/bin/smbclient
  sendbackup: info RECOVER_CMD=/bin/gzip -dc |/usr/bin/smbclient -xpGf - ...
  sendbackup: info COMPRESS_SUFFIX=.gz
  sendbackup: info end
  | Domain=[MAI] OS=[Windows Server 2003 3790 Service Pack 1] Server=[Windows Server 2003 5.2]
  ? NT_STATUS_SHARING_VIOLATION opening remote file ...

Is my only solution making sure that all users shut down the
applications that have these files open?
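For what it's worth, the sharing violation can be reproduced outside of Amanda with smbclient's tar mode, which is roughly what sendbackup drives (server, share, and credentials below are placeholders):

```
# create a tar archive of the share; any file held open exclusively
# by a Windows application will fail with NT_STATUS_SHARING_VIOLATION
smbclient //winserver/share -U backupuser -Tc /tmp/share.tar
```

If it fails the same way there, the limitation is in how the files are opened on the Windows side rather than anything in the Amanda configuration.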

Thanks.

-- 
Brandon


Re: strange twist on FAILED [data timeout] -- compress none not working correctly

2009-05-13 Thread James D. Freels
Thanks to all who have responded so far.

First, I am aware that what is in Debian and Ubuntu is not the latest and
greatest.  Indeed, it appears to me that what is in Ubuntu is just the
Debian package.  I did not see any bug reports for amanda in the Ubuntu
archives, so I suspect there is no real active amanda development on the
Ubuntu side, but I do not know for certain.  I am grateful that Ubuntu
does offer amanda in its archives.

Under the Debian tree, there are three bug reports: two are wishlist
items, and the third is a security-related bug that is unrelated to this
issue.

It could be that the upgrade to jaunty has nothing to do with this at
all; that is just my own suspicion.  I have never had timeout errors
before.

My workaround worked last night: I was able to get all the backups to
complete by setting a large value for dtimeout.
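The workaround lives in amanda.conf on the server; the value below is illustrative (check the defaults for your version):

```
# amanda.conf (server side)
dtimeout 3600    # seconds to wait for data from the client before
                 # declaring "FAILED [data timeout]"
```

This only masks the slowness, of course, rather than explaining it.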

I will work on your recommendations below, and report back if I find
anything.  I think I have only a single gzip process, in which case I am
just seeing gzip take a long time on the index.  That would make sense.
As for why my index takes so long, I don't know.

On Wed, 2009-05-13 at 10:01 +0200, Paul Bijnens wrote:
 On 2009-05-13 00:24, James D. Freels wrote:
  I got into debugging mode and discovered why this was happening.
  Apparently this upgrade, or a recent change in AMANDA, now forces (or by
  error) the data to be compressed with gzip before going from the client
  to the server.  Even if I specify compress none (which I always have)
  or compress client none and compress server none, the gzip is still
  going on.  
  
  I have never seen this before, and the gzip process is taking forever
  for my large filesystems.  I can't seem to get it to stop.
 
 Note that the index is always gzipped, even when you use compress none.
 Did you come to this conclusion because there is a gzip process running?
 Or do you indeed have two gzip processes, with one (or both?) slowing
 everything down?
 Or do you have a pathological filesystem where the names of the files
 amount to more data than the contents of the files?
 
 To hunt down what a client is doing, I often use lsof on the gnutar
 process to find out which file it is currently reading.  Maybe you
 got stuck on an unresponsive network filesystem (smb frequently
 does that -- I avoid mounting smb filesystems since I got burned badly).
 To get even more details, you can strace the process as well.
 
-- 
James D. Freels, Ph.D.
Oak Ridge National Laboratory
freel...@ornl.gov