Wow, that does seem odd.  I wonder if there's some evil sharing going on 
between mod_perl processes or something...



On Monday, November 4, 2013 8:26 AM, Bill Moseley <mose...@hank.org> wrote:

On Fri, Oct 25, 2013 at 6:51 AM, Bill Moseley <mose...@hank.org> wrote:


>
>[ERROR] "Caught exception in engine "Error in tempfile() using 
>/tmp/XXXXXXXXXX: Have exceeded the maximum number of attempts (1000) to open 
>temp file/dir


I don't really see how this can be a Catalyst issue, but I can't reproduce it 
outside of Catalyst -- and outside of our production environment.   

Can anyone think of anything else that might be going on here?   


The template has 10 "X"s, each replaced by one of (I believe) 63 possible ASCII 
characters, so there are 63^10 possible names -- a huge number of random strings.   
File::Temp only loops when sysopen returns EEXIST -- that is, when sysopen fails 
AND the error is that the file already exists.
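
As I read it, the guts of the retry amount to something like this (a 
paraphrased sketch, not the actual File::Temp source; random_name() is just a 
stand-in for its internal template substitution):

    use Fcntl qw(O_CREAT O_EXCL O_RDWR);

    my $fh;
    for my $try ( 1 .. 1000 ) {                   # the 1000 from the error
        my $name = '/tmp/' . random_name('XXXXXXXXXX');   # hypothetical helper
        last if sysopen( $fh, $name, O_CREAT | O_EXCL | O_RDWR, 0600 );
        die "Error in tempfile(): $!" unless $!{EEXIST};  # only EEXIST retries
        undef $fh;
    }
    die "Have exceeded the maximum number of attempts" unless $fh;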

Sure, there are 50 web processes, but the odds of them all being in exact 
lock-step with their calls to rand() seem tiny.  And even if they started out 
that way, when two processes tried the exact same name at the same time, one of 
them would simply move on to the next random name and be done.
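
If it did turn out the children were sharing a seed (say, because something 
called rand() before Apache forked), reseeding each child would be cheap 
insurance.  Something like this in startup.pl is what I have in mind -- an 
untested sketch for mod_perl2 under prefork:

    use Apache2::ServerUtil ();
    use Apache2::Const -compile => qw(OK);

    # Reseed the random number generator in every newly forked child.
    Apache2::ServerUtil->server->push_handlers(
        PerlChildInitHandler => sub {
            srand();    # no argument: let Perl pick a fresh seed per child
            return Apache2::Const::OK;
        },
    );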



I have something like 26K files in /tmp -- nothing compared to 63^10.   And 
each web server is only seeing about 10 requests/sec.
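
Back-of-envelope, even counting every one of those 26K files as a potential 
collision:

    my $names    = 63 ** 10;      # possible random names, ~9.8e17
    my $existing = 26_000;        # files already sitting in /tmp
    my $p        = $existing / $names;
    printf "single-guess collision chance: %g\n", $p;       # ~2.6e-14
    printf "1000 straight collisions: ~1 in 10^%d\n",
        int( -1000 * log($p) / log(10) );                    # ~10^13,578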

It's just not making sense.


Again, I'm unable to replicate the problem with a simple test script that is 
designed to clash.

I fork 50 (or more) child processes to replicate the web server processes and 
then in each one I do this:



        # Wait until the top of the second so each child process starts at
        # about the same time.  (time() and sleep() here are the Time::HiRes
        # versions, so fractional sleeps work.)
        my $t = time();
        sleep( int($t) + 1 - $t );

        for ( 1 .. 500 ) {
            my $fh = File::Temp->new(
                TEMPLATE => 'bill_XXXXX',
                DIR      => '/tmp',
            );
        }


And I never see any contention.
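
For anyone who wants to poke at it, a self-contained version of that test is 
roughly this (a sketch of what I'm running, not production code):

    #!/usr/bin/env perl
    use strict;
    use warnings;
    use File::Temp ();
    use Time::HiRes qw(time sleep);

    my $children = 50;

    for ( 1 .. $children ) {
        my $pid = fork;
        die "fork failed: $!" unless defined $pid;
        next if $pid;                      # parent keeps forking

        # Child: wait until the top of the second so they all start together.
        my $t = time();
        sleep( int($t) + 1 - $t );

        for ( 1 .. 500 ) {
            my $fh = File::Temp->new(
                TEMPLATE => 'bill_XXXXX',
                DIR      => '/tmp',
            );
        }
        exit 0;
    }

    1 while wait() != -1;                  # reap all the children
    print "done\n";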

 

>
>The File::Temp docs say:
>
>
>>If you are forking many processes in parallel that are all creating
>>temporary files, you may need to reset the random number seed using
>>srand(EXPR) in each child else all the children will attempt to walk
>>through the same set of random file names and may well cause
>>themselves to give up if they exceed the number of retry attempts.
>
>
>We are running under mod_perl.   Could it be as simple as all the procs being 
>in sync?   I'm just surprised this hasn't happened before.   Is there another 
>explanation?
>
>
>Where would you suggest to call srand()?
>
>
>
>
>Another problem, and one I've commented on before, is that HTTP::Body doesn't 
>use File::Temp's unlink feature and instead depends on Catalyst to clean up.  
>This results in orphaned files being left on the temp disk.
>
>-- 
>Bill Moseley
>mose...@hank.org 



-- 
Bill Moseley
mose...@hank.org 

_______________________________________________
List: Catalyst@lists.scsys.co.uk
Listinfo: http://lists.scsys.co.uk/cgi-bin/mailman/listinfo/catalyst
Searchable archive: http://www.mail-archive.com/catalyst@lists.scsys.co.uk/
Dev site: http://dev.catalyst.perl.org/