Branching and release scheduling

2004-11-16 Thread Manoj Kasichainula
We had a good discussion over lunch today on our release processes and 
how to have stable releases while making new feature development as fun 
and easy for the geeks as possible.

The main branch is always the development branch, with new features 
added whenever people see fit. There is never a feature freeze or even 
development slowdown on this branch. There should be regular releases 
based on this, roughly every couple of weeks, but basically whenever 
anyone is in the mood to roll a tarball. But there really needs to be a 
good script for rolling a tarball easily and quickly.

These tarballs get full fledged announcements on the website. They get 
odd-minor-version releases, e.g. 2.1.x while the latest stable branch is 
at 2.0.x. They are only alphas, with no commitment to ABIs, APIs, or 
suitability for anything.

Whenever there is consensus that it's time to get a release out soon, we 
make a branch. The branch is in feature freeze for its entire lifetime. 
The branch is named for the stable release it will eventually become. So 
for example, we could branch during the 2.1.x series, decide that the 
feature set merits a 2.2 version number, and name the branch 2.2. As long 
as the releases on that branch are considered unstable, they are still 
labelled 2.1.x. Once a release is deemed good enough for general use 
and the ABI is stable, it gets labelled 2.2.0. Further releases on that 
branch are naturally labelled 2.2.1, 2.2.2, etc.

I think I included all the appropriate details, but the picture is 
probably clearer. Rich Bowen took it, and I moved it to save his 
bandwidth: http://www.apache.org/~manoj/dscn2804.jpg




Re: Branching and release scheduling

2004-11-16 Thread Manoj Kasichainula
On Tue, Nov 16, 2004 at 06:16:18PM -0500, Me at IO wrote:
I think I included all the appropriate details, but the picture is 
probably clearer. Rich Bowen took it, and I moved it to save his 
bandwidth: http://www.apache.org/~manoj/dscn2804.jpg
I missed discussing how APR fits in this scheme. The main dev branch 
would build against reasonably updated HEAD versions of APR. Given group 
consensus, we could stick to tags or branches of APR if needed.

On the stable branch, the tree should be moved from this state to 
building against a released version of APR.

Roy raised some concerns about not being able to track versions of APR 
with the fixes we need. The APR guys say that everyone with httpd commit 
can get APR commit if needed, so that solves part of the problem, but I 
wonder if those httpd committers can drive APR releases.


Re: Bye bye welcome page

2004-10-08 Thread Manoj Kasichainula
On Wed, Oct 06, 2004 at 01:12:33PM -0400, Joshua Slive wrote:
My opinion is that the shorter message is better because, by the fact 
that it gives no information at all, it is less likely to be 
misinterpreted to mean something that the website owner doesn't intend.
+1, as long as there's no mention of Apache anywhere on the page


Re: Shorten the default config and the distribution (was: IfModule in the Default Config)

2004-09-14 Thread Manoj Kasichainula
On Tue, Sep 14, 2004 at 11:21:15AM -0400, Joshua Slive wrote:
On Tue, 14 Sep 2004, André Malo wrote:
A 30 KB default config, which nobody outside this circle here
really understands, isn't helpful - especially for beginners.
I agree that the current config file is too big and ugly.  But let's be a 
little careful here.  There needs to be a balance.  Detailed config files 
do help users understand the capabilities of the server and make it much 
easier to activate features.
How about separating the example and default configs? Make the 
default config short, but provide an example config with all the meat 
that can be easily cut-and-pasted. 

For example, I think IfModule lines in a config file are usually a bad 
idea, since the webserver should complain loudly if a needed module 
isn't present, instead of just ignoring the situation. But because the 
example config serves as a sort of documentation for all these 
modules, yet still has to work with whatever the user happens to 
build, it has to include those IfModules, and it then encourages 
people to use them in a bad way.
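
To make the trade-off concrete (mod_rewrite is just a stand-in for any
optional module here; this is an illustration, not a proposed config):

# Wrapped in IfModule, the directive is silently skipped if mod_rewrite
# was never built or loaded -- the admin thinks rewriting is on, but it
# isn't:
<IfModule mod_rewrite.c>
    RewriteEngine On
</IfModule>

# Stated plainly, a server missing mod_rewrite refuses to start with
# "Invalid command 'RewriteEngine'", which points straight at the
# missing module:
RewriteEngine On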

- Can we get rid of the non-unix mpm stuff from the default config.  
(Don't mean to offend os/2 and beos (and possibly netware), but they 
are really superfluous and confusing for most people.)
+1 to your proposal in the meantime (given my proposal above). I'd 
probably just get rid of all of the per-MPM stuff even. The MPMs should 
have reasonable default values, so unless the admin needs to make a 
specific change, there should be no need for any of those directives in 
the default config.

- Can we get rid of most of the AddLanguage/AddCharset directives?  
They are a constant source of bug reports, and I really can't imagine 
that many people use them as-is.  (Do people really name their files 
index.html.utf32be.el?)
+1


Re: Please Help

2004-07-20 Thread Manoj Kasichainula
On Tue, Jul 20, 2004 at 09:36:25PM -0400, Jeffrey K Pry wrote:
I sent an email to that address and it failed. Can somebody please tell me
what to do? Thank you all so much.
I just unsubbed the address manually. In the future, please save the 
message you got when subscribing to a list to know how to unsubscribe, 
or look in the headers for unsubscribe instructions.


Re: Move apache-1.3 to Subversion

2004-05-23 Thread Manoj Kasichainula
On Mon, May 17, 2004 at 12:35:13AM +0200, Sander Striker wrote:
There's only one thing for us to decide; how to define the layout
under httpd/ in the SVN repository.
e.g.
 .../
   httpd/
     trunk/
     branches/
       1.3.x/
       2.0.x/
     tags/
       2.0.49/
       ...
       1.3.31/
       ...
Sounds good. We should ponder a way to set up closed branches for 
security patches. Maybe they could be protected on a case-by-case basis, 
or we could create a fourth top-level directory, security-patches.



Re: ScriptLog

2003-09-08 Thread Manoj Kasichainula
On Sun, Sep 07, 2003 at 05:38:56PM -0400, Cliff Woolley wrote:
 On Sun, 7 Sep 2003, Manoj Kasichainula wrote:
 
  If it's only for debugging, can't CGI writers just add a line to their
  code to rebind stderr to a file?
 
 Only if the error is output from the script as opposed to a compilation
 failure or other interpreter weirdness.

Good point. The other alternative I've pondered is for people
debugging their own CGIs to use a wrapper that rebinds stderr and
execs the original CGI, but I suspect I'll get shouted down at this
point :)
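
Such a wrapper would be tiny; something like this sketch (the log and
CGI paths are made up for illustration):

/* Rebind stderr to a per-script debug log, then exec the real CGI.
 * Interpreter and compilation errors land in the log too, since the
 * interpreter inherits the rebound fd 2. */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(int argc, char *argv[])
{
    int fd = open("/tmp/mycgi.stderr.log",
                  O_WRONLY | O_CREAT | O_APPEND, 0644);
    if (fd < 0) {
        perror("open");
        return 1;
    }
    dup2(fd, STDERR_FILENO);   /* point fd 2 at the debug log */
    close(fd);

    execv("/usr/local/apache2/cgi-bin/mycgi.real", argv);
    perror("execv");           /* only reached if the exec fails */
    return 1;
}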


Re: ScriptLog

2003-09-07 Thread Manoj Kasichainula
On Sat, Sep 06, 2003 at 04:57:25PM -0400, Cliff Woolley wrote:
 On Sat, 6 Sep 2003, Astrid Keßler wrote:
 
  +1 for ScriptLog and RewriteLog(Level), although I'm not sure this is
  easy to implement. As far as I know, all log files are opened at server
  start. Allowing directory-based logging would mean opening and closing
  log files per request.
 
 Yes, it would.  But for a debug log it's a price I'm willing to accept.

If it's only for debugging, can't CGI writers just add a line to their
code to rebind stderr to a file?


Re: Possible security flaw! (Format BUG)

2003-09-01 Thread Manoj Kasichainula
On Sun, Aug 31, 2003 at 06:24:04AM -0300, Ranier Vilela wrote:
 Hello All,
 I tested the source code of httpd-2.0.47 with the tool pscan (a format
 bug scanner), and possible security flaws were found!
 Can anybody please check whether this is a real security problem?

This kind of vulnerability is only exposed when a format string is
under the control of an unauthorized user.

It looks like all the format strings in your patches are literals and
aren't controlled by users, so they wouldn't be exploitable.
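
To illustrate the distinction (this is not code from httpd or from the
patches, just the textbook example):

#include <stdio.h>

void log_bad(const char *user_input)
{
    /* If user_input contains "%s%n...", the attacker supplies the format
     * string and can crash the process or write to memory. */
    fprintf(stderr, user_input);
}

void log_ok(const char *user_input)
{
    /* The format string is a literal; user_input is only ever data. */
    fprintf(stderr, "%s", user_input);
}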


Re: request for comments: multiple-connections-per-thread MPM design

2002-12-12 Thread Manoj Kasichainula
Took too long to respond. Oh well, no one else did either...

On Tue, Nov 26, 2002 at 01:14:10AM -0500, Glenn wrote:
 On Mon, Nov 25, 2002 at 08:36:59PM -0800, Manoj Kasichainula wrote:
  BTW, ISTR Ryan commenting a while back that cross-thread signalling
  isn't reliable, and it scares me in general, so I'd lean towards the
  pipe.
  
  I'm pondering what else could be done about this; having to muck with a
  pipe doesn't feel like the right thing to do.
 
 Why not?

Good question. I'm still waffling on this.

 Add a descriptor (pipe, socket, whatever) to the pollset and use
 it to indicate the need to generate a new pollset.  The thread that sends
 info down this descriptor could be programmed to wait a short amount of
 time between sending triggers, so as not to cause the select() to return
 too, too often, but short enough not to delay the handling of new
 connections too long.

But what's a good value? Any value picked is going to be annoying in some
way. 0.1 s means delaying lots of threads up to a tenth of a second. And
there would be good reasons for wanting to lower that value, and good
reasons not to. Which means it would need to be a tunable parameter
depending on network and CPU characteristics, and needing a tunable
parameter for this just seems ooky. 

But just picking a good value and sticking with it might not be too bad.
The correct thing to do would be to code it up and test, but I'd rather
have a reasonable idea of the chances for success first. :)

In the perfect case, each poll call would return immediately with lots
of file descriptors ready for work, and they would all get farmed out.
Then before the next poll runs, there are more file descriptors ready to
be polled. 

Hmmm, if the poll is waiting on fds for any length of time, it should be
ok to interrupt it, because by definition it's not doing anything else.

So maybe the way to go is to forget about waiting the 0.1 s to interrupt
poll. Just notify it immediately when there's a fd waiting to be polled.
If no other fds have work to provide, we add the new fds to the poll set
and continue.

Otherwise, just run through all the other fds that need handling first,
then pick off all the fds that are waiting for polling and add them to
the fd set.

So (again using terms from my proposal):

submit_ticket would push fds into a queue and write to new_event_pipe if
the queue was empty when pushing.

get_next_event would do something like:

if (previous_poll_fds_remaining) {
    pick one off, call event handler for it
}
else {
    clean out new_event_queue and put values into new poll set
    poll(pollfds, io_timeout);
    call event handler for one of the returned pollfds
}
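
Fleshed out a little, a rough C sketch of what that could look like. The
names, the fixed-size arrays, and the missing overflow/error handling are
all just for illustration, and it assumes callers of get_next_event() are
already serialized by the poll mutex:

#include <poll.h>
#include <pthread.h>
#include <unistd.h>

#define MAX_EVENTS 1024

static int new_event_pipe[2];            /* [0] always sits in the pollset */
static int new_event_queue[MAX_EVENTS];  /* fds waiting to join the pollset */
static int queue_len;
static pthread_mutex_t queue_lock = PTHREAD_MUTEX_INITIALIZER;

static struct pollfd pollset[MAX_EVENTS + 1];
static int pollset_len;

void event_system_init(void)
{
    (void)pipe(new_event_pipe);
    pollset[0].fd = new_event_pipe[0];   /* slot 0 is the wake pipe */
    pollset[0].events = POLLIN;
    pollset_len = 1;
}

/* Any thread with a new fd to watch calls this. */
void submit_ticket(int fd)
{
    int was_empty;

    pthread_mutex_lock(&queue_lock);
    was_empty = (queue_len == 0);
    new_event_queue[queue_len++] = fd;
    pthread_mutex_unlock(&queue_lock);

    if (was_empty)                       /* one wake-up per batch is enough */
        (void)write(new_event_pipe[1], "x", 1);
}

/* Returns one fd that is ready for its event handler. */
int get_next_event(void)
{
    for (;;) {
        int i;

        /* Hand out anything the previous poll already found ready. */
        for (i = 1; i < pollset_len; i++) {
            if (pollset[i].revents) {
                int fd = pollset[i].fd;
                pollset[i] = pollset[--pollset_len];  /* one-shot: drop it */
                return fd;
            }
        }

        /* Merge newly submitted fds into the pollset, then poll. */
        pthread_mutex_lock(&queue_lock);
        for (i = 0; i < queue_len && pollset_len <= MAX_EVENTS; i++) {
            pollset[pollset_len].fd = new_event_queue[i];
            pollset[pollset_len].events = POLLIN;
            pollset[pollset_len].revents = 0;
            pollset_len++;
        }
        queue_len = 0;
        pthread_mutex_unlock(&queue_lock);

        (void)poll(pollset, pollset_len, -1);   /* real code: io_timeout */

        if (pollset[0].revents) {               /* a submit_ticket poked us */
            char buf[64];
            (void)read(new_event_pipe[0], buf, sizeof(buf));
            pollset[0].revents = 0;
        }
    }
}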

Something was bothering me about this earlier, and I can't remember what
it is. Maybe it's that when the server isn't busy, a single ticket
submission will make 2 threads (the ticket submitter and the thread
holding the poll mutex) do stuff. Maybe even 3 threads since a new
thread could take the poll mutex. But since this is the unbusy case,
it's not quite so bad.




Re: request for comments: multiple-connections-per-thread MPM design

2002-11-25 Thread Manoj Kasichainula
On Sat, Nov 23, 2002 at 06:40:58PM -0800, Brian Pane wrote:
 Here's an outline of my latest thinking on how to build a
 multiple-connections-per-thread MPM for Apache 2.2.  I'm
 eager to hear feedback from others who have been researching
 this topic.

You prodded me into finally writing up a proposal that's been bouncing
around in my head for a while now. That was in a separate message; this
will be suggestions for your proposal.

 1. Listener thread
   A Listener thread accept(2)s a connection, creates
   a conn_rec for it, and sends it to the Reader thread.

Some (most?) protocols have the server initiate the protocol
negotiation instead of the client, so the listener needs to be able to
pass off to the writer thread as well.

 * Limiting the Reader and Writer pools to one thread each will
   simplify the design and implementation.  But will this impair
   our ability to take advantage of lots of CPUs?

I was actually wondering why the reader and writer were separate
threads.

What gets more complex with a thread pool > 1? I know we'd have to add a
mutex around the select+(read|write), but is there something else?

 * Can we eliminate the listener thread?  It would be faster to just
   have the Reader thread include the listen socket(s) in its pollset.
   But if we did that, we'd need some new way to synchronize the
   accept handling among multiple child processes, because we can't
   have the Reader thread blocking on an accept mutex when it has
   existing connections to watch.

You could dispense with the listener thread in the single-process case
and just use an intraprocess mutex around select+(accept|read|write).
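
A rough sketch of that single-process variant (names are illustrative,
not httpd code; note that holding the lock across the read serializes the
I/O, which is exactly the trade-off being accepted here):

#include <pthread.h>
#include <sys/select.h>
#include <sys/socket.h>
#include <unistd.h>

static pthread_mutex_t select_lock = PTHREAD_MUTEX_INITIALIZER;

/* One unit of work: accept a new connection or read from one ready fd,
 * whichever select reports first.  Returns the fd acted on. */
int do_one_io_step(int listen_fd, fd_set *watched, int *max_fd)
{
    fd_set readable;
    int fd = -1;

    pthread_mutex_lock(&select_lock);

    readable = *watched;
    if (select(*max_fd + 1, &readable, NULL, NULL, NULL) > 0) {
        if (FD_ISSET(listen_fd, &readable)) {
            fd = accept(listen_fd, NULL, NULL);      /* new connection */
            if (fd >= 0) {
                FD_SET(fd, watched);                 /* start watching it */
                if (fd > *max_fd)
                    *max_fd = fd;
            }
        } else {
            for (fd = 0; fd <= *max_fd; fd++) {
                if (fd != listen_fd && FD_ISSET(fd, &readable)) {
                    char buf[8192];
                    ssize_t n = read(fd, buf, sizeof(buf));
                    if (n <= 0) {                    /* closed or error */
                        FD_CLR(fd, watched);
                        close(fd);
                    }
                    /* ...otherwise hand buf off for request processing... */
                    break;
                }
            }
        }
    }

    pthread_mutex_unlock(&select_lock);
    return fd;
}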

 * Is there a more efficient way to interrupt a thread that's
   blocked in a poll call?  That's a crucial step in the Listener-to-
   Reader and Request Processor-to-Writer handoffs.  Writing a byte
   to a pipe requires two extra syscalls (a read and a write) per
   handoff.  Sending a signal to the target thread is the only
   other solution I can think of at the moment, but that's bad
   because the target thread might be in the middle of a read
   or write call, rather than a poll, at the moment when we hit
   it with a signal, so the read or write will fail with EINTR.

For Linux 2.6, file notifications could be done entirely in userland in
the case where no blocking is needed, using futexes.

But if you want to avoid the extra system calls, you could put a mutex
around maintenance of the pollset and just let the various threads dork
with it directly.

I do keep mentioning this mutex around the select/poll :). Is there a
performance reason that you're trying to avoid it? In my past skimmings,
I've seen you post a lot of benchmarks and such, so maybe you've studied
this.

I'm suspicious of signals, but as long as they are tightly controlled
with sigprocmask or pthread_sigmask, I guess they aren't so bad.
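
For example, a tightly controlled wake-up signal might look like this
sketch (SIGUSR1 is an arbitrary choice, not anything httpd uses; note
there is still a race where a signal delivered just before poll() gets
consumed early, which is one reason the pipe still feels safer to me):

#include <poll.h>
#include <pthread.h>
#include <signal.h>

static void poke(int sig) { (void)sig; }    /* exists only to interrupt poll */

void setup_wakeup_signal(void)
{
    struct sigaction sa;
    sigset_t set;

    sigemptyset(&sa.sa_mask);
    sa.sa_handler = poke;
    sa.sa_flags = 0;                        /* no SA_RESTART: poll sees EINTR */
    sigaction(SIGUSR1, &sa, NULL);

    sigemptyset(&set);
    sigaddset(&set, SIGUSR1);
    pthread_sigmask(SIG_BLOCK, &set, NULL); /* blocked everywhere by default */
}

int guarded_poll(struct pollfd *fds, nfds_t nfds, int timeout)
{
    sigset_t set;
    int rv;

    sigemptyset(&set);
    sigaddset(&set, SIGUSR1);

    pthread_sigmask(SIG_UNBLOCK, &set, NULL);  /* open the window... */
    rv = poll(fds, nfds, timeout);             /* ...only around the poll */
    pthread_sigmask(SIG_BLOCK, &set, NULL);
    return rv;
}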




Re: request for comments: multiple-connections-per-thread MPM design

2002-11-25 Thread Manoj Kasichainula
On Mon, Nov 25, 2002 at 07:12:43AM -0800, Brian Pane wrote:
 On Mon, 2002-11-25 at 00:20, Manoj Kasichainula wrote:
  I was actually wondering why the reader and writer were separate
  threads.
 
 It was a combination of several factors that convinced me
 to make them separate:
 * Take advantage of multiple CPUs more easily

Yeah, but as you noticed, once you get more than 2 CPUs, you have the
same problem.

I'm just guessing here, but I imagine most CPU effort wouldn't be
expended in the actual kernel-user transitions that are polls and
non-blocking I/O.  And the meat of those operations could be handled by
other CPUs at the kernel level. So that separation onto multiple
CPUs might not help much.

 * Reduce the number of file descriptors that each poll call
   is handling (important on platforms where we don't have
   an efficient poll mechanism)

Has anyone read or benchmarked whether 2 threads polling 500 fds is
faster than 1 thread polling 1000?

  For Linux 2.6, file notifications could be done entirely in userland in
  the case where no blocking is needed, using futexes.
 
 Thanks!  I'll check out futexes.

Note that futexes are just fast userspace mutexes. Those are already in
the kernel (according to some threads I read yesterday, anyway). But I
believe the part about file notification using them is still under
discussion.

  But if you want to avoid the extra system calls, you could put a mutex
  around maintenance of the pollset and just let the various threads dork
  with it directly.
  
  I do keep mentioning this mutex around the select/poll :). Is there a
  performance reason that you're trying to avoid it? In my past skimmings,
  I've seen you post a lot of benchmarks and such, so maybe you've studied
  this.
 
 The real reason I don't like the mutex around the poll is that
 it would add too much latency if we had to wait for the current
 poll to complete before adding a new descriptor.  When the
 Listener accepts a new connection, or a Request Processor creates
 a new response brigade, it needs to get the corresponding socket
 added to the pollset immediately, which really requires interrupting
 the current poll.

Hmmm. That's a problem that needs solving even without the mutex though
(and it affects the design I proposed yesterday as well).  When you're
adding a new fd to the reader or writer, you have to write to a pipe or
send a signal. The mutex shouldn't affect that. 

BTW, ISTR Ryan commenting a while back that cross-thread signalling
isn't reliable, and it scares me in general, so I'd lean towards the
pipe.

I'm pondering what else could be done about this; having to muck with a
pipe doesn't feel like the right thing to do. Perhaps I should actually
look at other people's code to see what they do. Other designs have
threads for disk I/O and such, so there should be a way. I believe
Windows doesn't have this problem, or at least hides it better, because
completion ports are independent entities that don't interact with each
other as far as the user is concerned.




Re: Another async I/O proposal [was Re: request for comments: multiple-connections-per-thread MPM design]

2002-11-25 Thread Manoj Kasichainula
On Mon, Nov 25, 2002 at 08:10:12AM -0800, Brian Pane wrote:
 On Mon, 2002-11-25 at 00:02, Manoj Kasichainula wrote:
  while (event = get_next_event())
      add more spare threads if needed
      event_processor = lookup_event_processor(event)
      ticket = event_processor(event)
      if (ticket) submit_ticket(ticket)
      exit loop (and thus end thread) if not needed
  
  The event_processor can take as long as it wants, since there are other
  threads who can wait for the next event.
 
 Where is the locking done?  Is the lock just around the
 get_next_event() call?

Yeah, I imagined the locking would be implicit in there. Different event
mechanisms on various OSes could require different locking schemes, so
if locking is needed, it should be hidden there.

 Once the httpd_request_processor() has created a new ticket for
 the write, how does the submit_ticket() get the socket added into
 the pollset?  If it's possible for another request to be in the
 middle of a poll call at the same time, does submit_ticket()
 interrupt the poll in order to add the new descriptor?

This is a problem I missed somehow. I mentioned it in the other branch
of the thread.

 - Flow control will be difficult.  Here's a tricky scenario I
   just thought of:  The server is configured to run 10 threads.
   Most of the time, it only needs a couple of them, because it's
   serving mostly static content and an occasional PHP request.
   Suddenly, it gets a flood of requests for PHP pages.  The first
   ten of these quickly take over the ten available threads.
   PHP doesn't know how to work in an event-driven world, so each
   of these requests holds onto its thread for a long time.  When
   one of them finally completes, it produces some content to be
   written.  But the writes may be starved, because the first
   thread that finishes its PHP request and goes back into the
   select loop might find another incoming request and read it
   before doing any writes.  And if that new request is another
   long-running PHP request, it could be a while before we finally
   get around to doing the write.

Hmm, yeah, this is a concern. One answer is to set a very high
MaxThreadLimit, but then you can't control how many PHP threads you
have. Another answer is to reserve some threads for I/O, which your
design does.

   It's possible to partly work around this by implementing
   get_next_event() so that it completes all pending, unblocked
   writes before returning.  But more generally, we'll need some
   solution to keep long-running, non-event-based requests from
   taking over all the server threads.  (This is true of my design
   as well.)

Actually, in your design, since you have separate threads for I/O, I 
don't see why it would suffer.



Re: request for comments: multiple-connections-per-thread MPM design

2002-11-25 Thread Manoj Kasichainula
On Mon, Nov 25, 2002 at 08:36:59PM -0800, Me at IO wrote:
 I'm just guessing here, but I imagine most CPU effort wouldn't be
 expended in the actual kernel-user transitions that are polls and
 non-blocking I/O.  And the meat of those operations could be handled by
 other CPUs at the kernel level. So that separation onto multiple
 CPUs might not help much.

Eh, I was on crack when I wrote this. You want an I/O thread per CPU
when you can get it.



Re: perchild on FreeBSD 5?

2002-08-14 Thread Manoj Kasichainula

On Wed, Aug 14, 2002 at 10:36:53AM -0700, Brian Pane wrote:
 It's not entire libraries that will have to be mutexed, just
 calls to non-thread-safe functions within libraries.  That
 will reduce the concurrency of the server, but in general
 not so severely that it's only serving one request at a time.

Actually, it depends on the library. You could have multiple functions
in a library that all dork with a common bit of non-thread-local state.

You have to either mutex *all* calls to the library with one big lock,
or examine the library to make sure that it's safe to do less




Re: is httpd a valid way to start Apache?

2002-05-22 Thread Manoj Kasichainula

On Thu, May 16, 2002 at 10:16:56PM -0701, Jos Backus wrote:
 On Thu, May 16, 2002 at 07:27:46PM -0700, Manoj Kasichainula wrote:
  I've (mostly) written replacements for supervise, setuidgid, and
  tcpserver. They use Single Unix APIs, haven't been ported to APR, and
  have no docs yet, but they are working for me.
  
  I imagine porting them to APR wouldn't be too painful, though they
  wouldn't remain the svelte 4-8kB binaries they are today. :)
 
 Interesting. I'm not sure how much benefit there would be from using APR
 though.

Mainly portability to older Unixes that don't support some of the more
modern calls I used (or that break Single Unix in ways that Linux
doesn't). I guess autoconf, etc. would do the job as well.

  Are people interested in this code?
 
 I for one would be interested in seeing this.

http://www.io.com/~manoj/file/mktool-0.0.7.tar.gz

I've only built it on my Linux boxes; I haven't even tried FreeBSD
yet, though I did try to avoid Linuxisms. The supervise replacement is
called babysit (so it wouldn't be confused with the djb tool it
tries to work as).

I'll hopefully clean it up a bit more, and maybe even test and doc it
more next week.

 I may be able to sell this to
 the FreeBSD people for inclusion in the base OS if the license allows it.

That won't be a problem. :) 

 Do you also have equivalents to svc, svstat and svok?

I haven't written svc or svstat replacements yet; I've just used 'echo
-n dx > directory/control' in my scripts in the meantime. But they
will be easy.  There is an svok workalike called bsok in the tarball
though.




Re: is httpd a valid way to start Apache?

2002-05-16 Thread Manoj Kasichainula

On Wed, May 15, 2002 at 12:49:46PM -0701, Jos Backus wrote:
 What about moving into the other direction and moving the process management
 portion into a separate set of tools so it can be used with other daemons
 besides httpd?

I've pondered writing something like this, but then I also ponder the
opposite: why not build the supervise support into httpd itself? httpd
would listen on a unix-domain socket or FIFO in the filesystem, and
all graceful signals would be replaced with writes to this FIFO. This
eliminates the crufty signal layer between the supervise replacement
and httpd, completely hides the difference between Unix and Windows
(using the -k syntax), and potentially allows for more interesting
status reporting from the command line. Probably not viable for 2.0,
though.
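
A very rough sketch of what that control channel might look like (the
path and command names are made up, and nothing like this exists in
httpd today):

#include <fcntl.h>
#include <string.h>
#include <sys/stat.h>
#include <unistd.h>

#define CONTROL_FIFO "/usr/local/apache2/logs/httpd.control"

void control_loop(void)
{
    char cmd[64];
    ssize_t n;
    int fd;

    (void)mkfifo(CONTROL_FIFO, 0600);
    /* Open read-write so the descriptor stays valid between writers
     * (works on Linux; POSIX leaves O_RDWR on a FIFO unspecified). */
    fd = open(CONTROL_FIFO, O_RDWR);
    if (fd < 0)
        return;

    while ((n = read(fd, cmd, sizeof(cmd) - 1)) > 0) {
        cmd[n] = '\0';
        if (strncmp(cmd, "graceful", 8) == 0) {
            /* ...tell children to finish current requests and exit... */
        } else if (strncmp(cmd, "stop", 4) == 0) {
            break;                        /* ...shut the server down... */
        }
    }
    close(fd);
}

With something like this, 'httpd -k graceful' (or apachectl) would just
write the command string into the FIFO instead of sending a signal.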

 It would be great to have a BSD-licensed version of
 something like djb's daemontools.

I've (mostly) written replacements for supervise, setuidgid, and
tcpserver. They use Single Unix APIs, haven't been ported to APR, and
have no docs yet, but they are working for me.

I imagine porting them to APR wouldn't be too painful, though they
wouldn't remain the svelte 4-8kB binaries they are today. :)

On Wed, May 15, 2002 at 07:15:47PM -0701, Jos Backus wrote:
 Seriously, a decent process controller that would allow starting, stopping and
 sending various signals to a command that runs as its child (i.e. duplicating
 supervise's functionality) should not be too hard to implement.

Nope, it wasn't. Assuming mine works, anyway.

 It's the
 fiddling with the pipes between two supervise's (the main one and the log/
 one) that seems tricky to me.

I don't think this is a problem, because of the way Unix mangles the
file descriptors. If you do something like:

supervise /var/supervise/producer | supervise /var/supervise/consumer

then, your shell will arrange the fd's so that stdout for producer
points to a pipe to stdin for consumer. Then, when the supervise
processes fork-exec their children, the kids inherit the 2 ends of the
pipe, and just use them as if the pipe were set up just for them. If
the producer dies, there's no problem, because the parent supervise
still keeps an fd for that end of the pipe and passes it on to the
next generation of the producer. Same goes for the consumer. It might
be nice to use a technique like this to take the code for reliable
piped logging out of httpd and make it more generally useful.

 And I'm not sure how you'd wait() for the child
 while still being able to select() on a named pipe in order to read control
 messages sent by svc.

In my supervise replacement, I use a technique picked up from the
Awesome Might that is Dean (and which was used in some of the early
MPMs). Create a pipe, and keep both ends of it. Create a signal
handler for SIGCHLD that writes to the pipe. And, in the select() call
you reference, wait on both the sigchld pipe (a.k.a. pipe_of_death)
and the named fifo.
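
For reference, the technique looks roughly like this (names are
illustrative, not lifted from my code):

#include <signal.h>
#include <sys/select.h>
#include <sys/wait.h>
#include <unistd.h>

static int sigchld_pipe[2];

static void sigchld_handler(int sig)
{
    (void)sig;
    (void)write(sigchld_pipe[1], "!", 1);    /* write is async-signal-safe */
}

void supervise_loop(int fifo_fd)
{
    struct sigaction sa;
    fd_set readable;
    char buf[64];

    (void)pipe(sigchld_pipe);
    sigemptyset(&sa.sa_mask);
    sa.sa_handler = sigchld_handler;
    sa.sa_flags = 0;
    sigaction(SIGCHLD, &sa, NULL);

    for (;;) {
        int maxfd = sigchld_pipe[0] > fifo_fd ? sigchld_pipe[0] : fifo_fd;

        FD_ZERO(&readable);
        FD_SET(sigchld_pipe[0], &readable);
        FD_SET(fifo_fd, &readable);

        if (select(maxfd + 1, &readable, NULL, NULL, NULL) <= 0)
            continue;                        /* EINTR: just retry */

        if (FD_ISSET(sigchld_pipe[0], &readable)) {
            (void)read(sigchld_pipe[0], buf, sizeof(buf));
            while (waitpid(-1, NULL, WNOHANG) > 0)
                ;                            /* reap, then restart the child */
        }
        if (FD_ISSET(fifo_fd, &readable)) {
            (void)read(fifo_fd, buf, sizeof(buf));
            /* ...act on the control command (e.g. forward a signal)... */
        }
    }
}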

Are people interested in this code?




Re: is httpd a valid way to start Apache?

2002-05-16 Thread Manoj Kasichainula

On Thu, May 16, 2002 at 05:00:13PM +0200, Jeroen Massar wrote:
 Due to inheritance (export) of environment variables I usually start
 Apache after doing a:
 # for i in `export | cut -f3 -d' '|cut -f1 -d'='`; do export -n $i; done
 Which cleans them all up nicely.

You can also do this by running httpd under 'env -', which I believe
is quite portable. See env(1) for details.



mod_unique_id failure mode

2002-02-21 Thread Manoj Kasichainula

apache.org DNS was down today; it's back up now. But once DNS came
back, we found that the web server was down, and saw this in the error
log:

[Thu Feb 21 00:00:03 2002] [alert] (22007)No address associated with hostname: mod_unique_id: unable to find IPv4 address of daedalus.apache.org
Configuration Failed

Should mod_unique_id's failure have prevented the server from parsing
the configuration and restarting? I can see an argument both ways for
this. I guess it depends on whether you think it's more important to
keep the server running, or to be sure that all the features are
working if the server does (re)start.




daedalus httpd is upset

2002-01-28 Thread Manoj Kasichainula

daedalus's httpd is only occasionally answering requests. Something
weeeird is going on with it.

pstree for httpd looks like:

init-+-cron
     |-cvsupd
     |-9*[getty]
     |-64*[httpd]

It looks like the parent process died.

Also, truss sometimes hangs, sometimes records successful requests,
and sometimes produces sendfile errors:

sendfile(0x9,0x8,0x1c8000,0x0,0x1d47ef,0xbfbfd35c,0xbfbfd354,0x0) ERR#35 'Resource temporarily unavailable'
sendfile(0x9,0x8,0x1c9000,0x0,0x1d37ef,0xbfbfd35c,0xbfbfd354,0x0) ERR#35 'Resource temporarily unavailable'
select(0x9,0x0,0xbfbfd29c,0x0,0xbfbfd294)= 1 (0x1)
sendfile(0x9,0x8,0x1c9000,0x0,0x1d37ef,0xbfbfd35c,0xbfbfd354,0x0) ERR#35 'Resource temporarily unavailable'
sendfile(0x9,0x8,0x1ca000,0x0,0x1d27ef,0xbfbfd35c,0xbfbfd354,0x0) ERR#35 'Resource temporarily unavailable'
select(0x9,0x0,0xbfbfd29c,0x0,0xbfbfd294)= 1 (0x1)
sendfile(0x9,0x8,0x1ca000,0x0,0x1d27ef,0xbfbfd35c,0xbfbfd354,0x0) ERR#35 'Resource temporarily unavailable'
sendfile(0x9,0x8,0x1cb000,0x0,0x1d17ef,0xbfbfd35c,0xbfbfd354,0x0) ERR#35 'Resource temporarily unavailable'
select(0x9,0x0,0xbfbfd29c,0x0,0xbfbfd294)= 1 (0x1)
sendfile(0x9,0x8,0x1cb000,0x0,0x1d17ef,0xbfbfd35c,0xbfbfd354,0x0) ERR#35 'Resource temporarily unavailable'
sendfile(0x9,0x8,0x1cc000,0x0,0x1d07ef,0xbfbfd35c,0xbfbfd354,0x0) ERR#35 'Resource temporarily unavailable'

I'll leave it alone for an hour or two and then restart it unless
someone volunteers to investigate this.