Re: [Catalyst] Have exceeded the maximum number of attempts (1000) to open temp file/dir

2013-10-31 Thread John Napiorkowski
Bill,

I see over here (latest release)

https://metacpan.org/source/JJNAPIORK/Catalyst-Runtime-5.90049_005/lib/Catalyst/Request.pm#L260


I am calling ->cleanup(1) when we create the HTTP::Body.  Is that not enough to 
clean up tmp files?

Regarding the tmp file thing: wow, I have no idea, but I hope you find out and 
report it to us!

John



On Friday, October 25, 2013 8:53 AM, Bill Moseley mose...@hank.org wrote:
 
I have an API where requests can include JSON.  HTTP::Body saves those off to 
temp files.

Yesterday we got a very large number of errors:

[ERROR] Caught exception in engine Error in tempfile() using /tmp/XX: 
Have exceeded the maximum number of attempts (1000) to open temp file/dir

The File::Temp docs say:

If you are forking many processes in parallel that are all creating
temporary files, you may need to reset the random number seed using
srand(EXPR) in each child else all the children will attempt to walk
through the same set of random file names and may well cause
themselves to give up if they exceed the number of retry attempts.

We are running under mod_perl.   Could it be as simple as the procs all were in 
sync?   I'm just surprised this has not happened before.   Is there another 
explanation?

Where would you suggest to call srand()?
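To illustrate what the docs seem to mean, here is a sketch with a plain fork(); under mod_perl the analogous place would presumably be a PerlChildInitHandler, though that is an assumption on my part, not something I've verified:

```perl
use strict;
use warnings;
use File::Temp qw(tempfile);

# Reseed in the child so it doesn't replay the parent's rand() sequence
# and collide on the same candidate temp file names.
my $pid = fork();
die "fork failed: $!" unless defined $pid;

if ( $pid == 0 ) {
    srand();    # fresh per-process seed
    my ( $fh, $fname ) = tempfile( "childXXXXXX", TMPDIR => 1, UNLINK => 1 );
    print {$fh} "child $$\n";
    exit 0;     # UNLINK => 1 removes the file when this child exits
}

waitpid( $pid, 0 );
print $? == 0 ? "child ok\n" : "child failed\n";
```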


Another problem, and one I've commented on before, is that HTTP::Body doesn't 
use File::Temp's unlink feature and depends on Catalyst cleaning up.  This 
results in orphaned files left on temp disk.
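For reference, this is the File::Temp unlink behavior I mean, as a minimal standalone sketch (the object interface removes the file when the handle goes out of scope):

```perl
use strict;
use warnings;
use File::Temp ();

my $path;
{
    # UNLINK => 1 removes the file when $tmp is destroyed, with a
    # backstop at program exit.
    my $tmp = File::Temp->new( TEMPLATE => 'bodyXXXXXX', TMPDIR => 1, UNLINK => 1 );
    $path = $tmp->filename;
    print {$tmp} "request body\n";
    die "temp file was not created" unless -e $path;
}
print -e $path ? "still there\n" : "unlinked\n";
```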




-- 
Bill Moseley
mose...@hank.org 
___
List: Catalyst@lists.scsys.co.uk
Listinfo: http://lists.scsys.co.uk/cgi-bin/mailman/listinfo/catalyst
Searchable archive: http://www.mail-archive.com/catalyst@lists.scsys.co.uk/
Dev site: http://dev.catalyst.perl.org/


Re: [Catalyst] Have exceeded the maximum number of attempts (1000) to open temp file/dir

2013-10-31 Thread Bill Moseley
On Thu, Oct 31, 2013 at 2:44 PM, John Napiorkowski jjn1...@yahoo.com wrote:


 I am calling ->cleanup(1) when we create the HTTP::Body.  Is that not enough
 to clean up tmp files?


I haven't looked at this in a while, but I think it's described here:

https://rt.cpan.org/Public/Bug/Display.html?id=84004

HTTP::Body assumes $self->{upload} exists before deleting, and that might
not be created yet.

I have my own version for handling 'multipart/form-data' that sets
UNLINK => 1.


Now, the application/octet-stream handling is another issue.  There
HTTP::Body uses the default File::Temp behavior (i.e. UNLINK => 1), but I'm
still finding a large number of those files left around.

In my dev environment I have not been able to make it leave files in /tmp.
On production I can run watch 'ls /tmp | wc -l' and see the counts
increase and decrease, so I know files are being deleted, but every once in
a while a file gets left behind.  I don't see segfaults in the logs, and
I've tested with Apache's MaxRequestsPerChild set low (recycling child
processes often) without seeing that leave files behind.

I'm going to update our copy of HTTP::Body to put the process ID in the
temp file template (to essentially namespace the files per child) and use
cron to keep /tmp cleaner.  But I still have yet to figure out why those
files are left behind.  With UNLINK => 1 they should not be left there.
File::Temp doesn't appear to check the return value from unlink.
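The pid-in-template idea, sketched (the names here are hypothetical, not our actual patch):

```perl
use strict;
use warnings;
use File::Temp qw(tempfile);

# Embedding $$ in the template gives each child its own namespace of
# candidate names, so parallel children can't race on the same name,
# and a cron sweep can match stale files by pattern.
my $template = "httpbody_" . $$ . "_XXXXXXXX";
my ( $fh, $fname ) = tempfile( $template, TMPDIR => 1, UNLINK => 1 );
print "created $fname\n";
```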

They come and go but some stick around:

$ for i in $(seq 10); do ls /tmp | wc -l; sleep 2; done
23861
23865
23863
23864
23862
23862
23865
23865
23864
23866

$ ls -lt /tmp | head -2
total 95492
-rw------- 1 tii-rest tii-rest   14 Oct 31 16:40 Nudjp9WDNy

$ ls -lt /tmp | tail -2
-rw------- 1 tii-rest tii-rest   16 Oct 28 13:36 NWwxOhwhRW
-rw------- 1 tii-rest tii-rest   16 Oct 28 13:35 Ll1Ze0TNPL







-- 
Bill Moseley
mose...@hank.org


[Catalyst] Have exceeded the maximum number of attempts (1000) to open temp file/dir

2013-10-25 Thread Bill Moseley
I have an API where requests can include JSON.  HTTP::Body saves those off
to temp files.

Yesterday we got a very large number of errors:

[ERROR] Caught exception in engine Error in tempfile() using
/tmp/XX: Have exceeded the maximum number of attempts (1000) to
open temp file/dir

The File::Temp docs say:

If you are forking many processes in parallel that are all creating
 temporary files, you may need to reset the random number seed using
 srand(EXPR) in each child else all the children will attempt to walk
 through the same set of random file names and may well cause
 themselves to give up if they exceed the number of retry attempts.


We are running under mod_perl.   Could it be as simple as the procs all
were in sync?   I'm just surprised this has not happened before.   Is there
another explanation?

Where would you suggest to call srand()?


Another problem, and one I've commented on before
(https://rt.cpan.org/Public/Bug/Display.html?id=84004), is that HTTP::Body
doesn't use File::Temp's unlink feature and depends on Catalyst cleaning
up.  This results in orphaned files left on temp disk.





-- 
Bill Moseley
mose...@hank.org