On 5/11/2013 1:24 AM, Bill Moseley wrote:
On Fri, Oct 25, 2013 at 6:51 AM, Bill Moseley <mose...@hank.org> wrote:
[ERROR] "Caught exception in engine "Error in tempfile() using
/tmp/XXXXXXXXXX: Have exceeded the maximum number of attempts
(1000) to open temp file/dir
I don't really see how this can be a Catalyst issue, but I can't
reproduce it outside of Catalyst -- and outside of our production
environment.
Can anyone think of anything else that might be going on here?
I'd be thinking along the lines of mod_perl is evil. From a quick google
of "mod_perl srand" there seem to be some similar cases. And a where to
call srand in this post:
http://blogs.perl.org/users/brian_phillips/2010/06/when-rand-isnt-random.html
Give it a try.
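Something like a child-init handler might be the place -- here's an
untested sketch, assuming mod_perl 2 (MyApp::ChildInit is just a made-up
name; wire it up in httpd.conf with "PerlChildInitHandler MyApp::ChildInit"):

    # Sketch: reseed the RNG in every Apache child so forked mod_perl
    # workers don't all inherit the parent's seed.
    package MyApp::ChildInit;

    use strict;
    use warnings;
    use Apache2::Const -compile => 'OK';

    sub handler {
        # Mix the child's pid into the seed so each child differs.
        srand( time() ^ ( $$ + ( $$ << 15 ) ) );
        return Apache2::Const::OK;
    }

    1;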
The template has 10 "X"s, each replaced by one of (I think) 63 random
ASCII characters, so 63^10 is a huge number of possible names. File::Temp
only loops when sysopen returns EEXIST -- that is, when sysopen fails
AND the error is that the file already exists.
Sure, there are 50 web processes, but the odds of them all being in exact
lock-step with calling rand() are slim. And even if they started
out that way, if two processes opened the exact same name at the same
time, one process would just try the next random name and be done.
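Roughly, the retry works like this (my own approximation, not
File::Temp's actual source):

    # Approximation of the retry loop: generate a random name and only
    # retry when sysopen fails specifically with EEXIST.
    use Fcntl qw(O_CREAT O_EXCL O_RDWR);
    use Errno qw(EEXIST);

    my @chars = ( 'a' .. 'z', 'A' .. 'Z', '0' .. '9', '_' );    # 63 chars
    my $tries = 0;
    my $fh;
    while ( $tries++ < 1000 ) {
        my $name = '/tmp/' . join '', map { $chars[ rand @chars ] } 1 .. 10;
        last if sysopen( $fh, $name, O_CREAT | O_EXCL | O_RDWR, 0600 );
        die "tempfile failed: $!" unless $! == EEXIST;    # only EEXIST loops
    }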
I have something like 26K files in /tmp, so nothing compared to 63^10.
And each web server is only seeing about 10 requests/sec.
It's just not making sense.
Again, I'm unable to replicate the problem with a simple test script
that is designed to clash.
I fork 50 (or more) child processes to replicate the web server
processes and then in each one I do this:
    # Wait until the top of the second so each child process starts
    # at about the same time.
    my $t = time();                  # Time::HiRes time()
    sleep( int( $t ) + 1 - $t );     # Time::HiRes sleep()
    for ( 1 .. 500 ) {
        my $fh = File::Temp->new(
            TEMPLATE => 'bill_XXXXX',
            DIR      => '/tmp',
        );
    }
And never see any contention.
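For completeness, the surrounding harness looks roughly like this (a
sketch from memory; names and counts are illustrative):

    # Fork N children, have each wait for the top of the next second,
    # then hammer /tmp with File::Temp in a loop.
    use strict;
    use warnings;
    use File::Temp ();
    use Time::HiRes qw(time sleep);

    my $children = 50;
    for ( 1 .. $children ) {
        my $pid = fork;
        die "fork failed: $!" unless defined $pid;
        next if $pid;    # parent keeps forking

        my $t = time();
        sleep( int($t) + 1 - $t );    # line up the children

        File::Temp->new( TEMPLATE => 'bill_XXXXX', DIR => '/tmp' )
            for 1 .. 500;
        exit 0;
    }
    1 while wait() != -1;    # reap all the children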
The File::Temp docs say:
    If you are forking many processes in parallel that are all creating
    temporary files, you may need to reset the random number seed using
    srand(EXPR) in each child else all the children will attempt to walk
    through the same set of random file names and may well cause
    themselves to give up if they exceed the number of retry attempts.
We are running under mod_perl. Could it be as simple as all the
procs being in sync? I'm just surprised this has not happened
before. Is there another explanation?
Where would you suggest calling srand()?
Another problem, and one I've commented on before
<https://rt.cpan.org/Public/Bug/Display.html?id=84004>,
is that HTTP::Body doesn't use File::Temp's unlink feature and
depends on Catalyst cleaning up. This results in orphaned files
left on the temp disk.
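For comparison, File::Temp itself can handle the cleanup if asked --
illustrative only, this is not what HTTP::Body currently does:

    use File::Temp ();

    my $upload_fh = File::Temp->new(
        TEMPLATE => 'body_XXXXXXXXXX',
        DIR      => '/tmp',
        UNLINK   => 1,    # removed automatically when $upload_fh is destroyed
    );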
--
Bill Moseley
mose...@hank.org
_______________________________________________
List: Catalyst@lists.scsys.co.uk
Listinfo: http://lists.scsys.co.uk/cgi-bin/mailman/listinfo/catalyst
Searchable archive: http://www.mail-archive.com/catalyst@lists.scsys.co.uk/
Dev site: http://dev.catalyst.perl.org/