On Mon, 24 Jun 2002, Brian Pane wrote:

> On Mon, 2002-06-24 at 02:16, Andi Gutmans wrote:
> 
> > >   * PHP's nonbuffered output mode produces very small socket writes
> > >     with Apache 2.0.  With 1.3, the httpd's own output buffering
> > >     alleviated the problem.  In 2.0, where the PHP module splits
> > >     long blocks of static text into 400-byte segments and inserts
> > >     a flush bucket after every bucket of data that it sends to the
> > >     next filter, the result is a stream of rather small packets.
> > 
> > You should test this with PHP's internal output buffering enabled. You can
> > set it there to something like 4096.
> 
> That definitely will improve the numbers, but I'd rather not spend the
> next few years saying "turn on buffering in mod_php" every time another
> user posts a benchmark claiming that "Apache 2.0 sucks because it runs
> my PHP scripts ten times slower than 1.3 did." :-)
> 
> I have two proposals for this:
> 
> * Saying "turn on buffering" is, IMHO, a reasonable solution if you
>   can make buffering the default in PHP under httpd-2.0.  Otherwise,
>   you'll surprise a lot of users who have been running with the default
>   non-buffered output using 1.3 and find that all their applications
>   are far slower with 2.0.

It is the default in the recommended INI file.
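
For reference, the relevant directive in the recommended INI file is:

  output_buffering = 4096

so anyone who starts from that file already gets 4K output buffering.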
 
> * A better solution, though, would be to have the PHP filter generate
>   flush buckets (in nonbuffered mode) only when it reaches a "<%" or
>   "%>".  I.e., if the input file has 20KB of static text before the
>   first embedded script, send that entire 20KB in a bucket, and don't
>   try to split it into 400-byte segments.  If mod_php is in nonbuffered
>   mode, send an apr_bucket_flush right after it.  (There's a precedent
>   for this approach: one of the ways in which we managed to get good
>   performance from mod_include in 2.0 was to stop trying to split large
>   static blocks into small chunks.  We were originally concerned about
>   the amount of time it would take for the mod_include lexer to run
>   through large blocks of static content, but it hasn't been a problem
>   in practice.)
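
(For concreteness, the approach described above would look roughly like
the sketch below, using the 2.0 bucket brigade API. The function and
variable names are only illustrative; this is not actual mod_php code.)

/* Hypothetical helper: pass one block of static page text to the next
 * filter as a single bucket, with at most one flush bucket after it,
 * instead of splitting it into 400-byte buckets with a flush after
 * each one.
 */
#include "httpd.h"
#include "util_filter.h"
#include "apr_buckets.h"

static apr_status_t pass_static_block(ap_filter_t *f, const char *text,
                                      apr_size_t len, int nonbuffered)
{
    apr_bucket_brigade *bb = apr_brigade_create(f->r->pool,
                                                f->c->bucket_alloc);

    /* One bucket for the whole static block; a downstream filter that
     * needs to keep the data past this call will set it aside (copy it). */
    APR_BRIGADE_INSERT_TAIL(bb,
        apr_bucket_transient_create(text, len, f->c->bucket_alloc));

    /* In nonbuffered mode, flush once, at the script boundary. */
    if (nonbuffered) {
        APR_BRIGADE_INSERT_TAIL(bb,
            apr_bucket_flush_create(f->c->bucket_alloc));
    }

    return ap_pass_brigade(f->next, bb);
}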

From tests we did a long time ago, making the lexer queue up 20KB of
buffer just to send it out in one piece was actually a problem. I think
the buffer needed reallocating, which made it really slow, but it was
about 3 years ago so I can't remember the exact reason we limited it to
400 bytes. I'm pretty sure it was a good reason though :)

I bet that the performance difference Rasmus is describing is not really
due to PHP's added output buffering.

Andi


