Richard Lynch wrote:
On Wed, October 25, 2006 11:58 am, [EMAIL PROTECTED] wrote:
Are the include files only compiled when execution hits them, or are all include files compiled when the script is first compiled, which would mean a cascade through all statically linked include files? By statically linked files I mean ones like "include ('bob.php')" - i.e. the filename isn't in a variable.
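For example, the distinction I mean (the variable form is just for illustration):

<?php
  include 'bob.php';         // static - the filename is a literal

  $page = 'bob';
  include $page . '.php';    // dynamic - the filename is in a variable
?>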

As far as I know, the files are only loaded as execution hits them.

If your code contains:

<?php
  if (0){
    require 'foo.inc';
  }
?>

Then foo.inc will never ever be read from the hard drive.

You realize you could have tested this in less time than it took you to post, right?
:-)
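For instance, a quick check might look like this (same hypothetical foo.inc):

<?php
  if (0) {
    require 'foo.inc';
  }
  // foo.inc never appears in the list, so it was never read
  var_dump(get_included_files());
?>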
I don't know the extent to which the engine optimises performance, and I know very little about how different versions of the engine deal with the issue, but I guessed it depended on the behaviour of the engine, the cache and maybe the optimiser - and I know I don't know enough.

My thinking: for dynamically linked files, one speed optimisation is to load the file before it's needed, at the expense of memory, while continuing execution of the portions already loaded. That's easy to do if the filename is static. The code may never be executed, but it still takes up space. Some data structures could also be pre-loaded in this way.


Are included files ever unloaded? For instance, if I had 3 include files and no loops, once execution had passed from the first include file to the second, the engine might be able to unload the first file - or at least the code, if not the data.

I doubt that the code is unloaded -- What if you called a function
from the first file while you were in the second?
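A contrived sketch of why that rules out unloading (file names hypothetical):

<?php
  // one.php
  function from_one() { return 'still needed'; }
?>

<?php
  // two.php - executes after one.php, but calls back into it
  echo from_one();
?>

<?php
  // main script
  include 'one.php';
  include 'two.php';   // would fail if one.php's code had been unloaded
?>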
I agree it's unlikely, but it's feasible if code is loaded whenever required - especially if data and code are separated by the engine, and that's quite likely because of the garbage collection.

Thirdly, I understand that when a request arrives, the script it requests is compiled before execution. Now suppose a second request arrives for the same script, from a different requester: am I right in assuming that the uncompiled form is loaded? I.e. the script is tokenized for each request, and the compiled version is not loaded unless you have engine-level caching installed - e.g. MMCache or Zend Optimiser.

You are correct.

The caching systems such as Zend Cache (not the Optimizer), MMCache, APC, etc. are expressly designed to store the tokenized version of the PHP script to be executed.

Note that their REAL performance saving is actually in loading from the hard drive into RAM, not in the PHP tokenization itself.

Skipping a hard drive seek and read is probably at least 95% of the
savings, even in the longest real-world scripts.

The tokenizer/compiler thingie is basically easy chump change they
didn't want to leave on the table, rather than the bulk of the
performance "win".

I'm sure somebody out there has a perfectly reasonable million-line PHP script, for a valid reason, where the tokenization is more than 5% of the savings, but that's going to be a real rarity.
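If you want a rough feel for the split, something like this (big_file.php is hypothetical; run it twice from the command line, since the OS file cache usually absorbs the disk read on the second run):

<?php
  // Requires PHP >= 5.0 for microtime(true).
  $start = microtime(true);
  include 'big_file.php';   // disk read + tokenize/compile + execute
  printf("include took %.4f seconds\n", microtime(true) - $start);
?>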

Thanks - that's really useful - I didn't realise that the bulk of the saving wasn't in tokenising.
Fourthly, am I right in understanding that scripts do NOT share memory, even for the portions that are simply instructions? That is, when the second request arrives, the script is loaded again in full (as opposed to each request sharing the executed/compiled code, but holding data separately).

Yes, without a cache, each HTTP request will load a "different" script.
Do you know whether, when a cache is used, requests in the same thread use the same in-memory object? I.e. is the script persistent within the thread?

Fifthly, if a script takes 4MB, given point 4, does the webserver demand 8MB if it is simultaneously servicing 2 requests?

If you have a PHP script that is 4M in length, you've done something
horribly wrong. :-)
Sort of. I'm using Drupal with lots of modules loaded. PHP memory_limit is set to 20MB, and at times all 20MB is used. I believe that limit applies per request - all the evidence points to that. So 10 concurrent requests, which is not unrealistic, could use 200MB plus webserver overhead. And I still want to combine it with another bit of software that will use 10 to 15MB per request. It's time to think about memory usage and whether there are any strategies to decouple memory usage from the request rate.
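One way to measure the per-request figure rather than guessing it (a sketch; on older builds memory_get_usage() requires PHP compiled with --enable-memory-limit):

<?php
  function log_request_memory()
  {
      // runs as each request finishes; check the web server error log
      error_log('request used ' . memory_get_usage() . ' bytes');
  }
  register_shutdown_function('log_request_memory');
?>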


Of course, if it loads a 4M image file, then, yes, 2 at once needs 8M
etc.

Lastly, are there differences in these behaviours for PHP4 and PHP5?

I doubt it.

I think APC is maybe going to be installed by default in PHP6 or
something like that, but I dunno if it will be "on" by default or
not...

At any rate, not from 4 to 5.


Thanks.

Note that if you NEED a monster body of code to be resident, you can
prototype it in simple PHP, port it to C, and have it be a PHP
extension.
This should be relatively easy to do, if you plan fairly carefully.

If a village idiot like me can write a PHP extension (albeit a dirt-simple one) then anybody can. :-)

A good idea, but not feasible in this situation.

Thank you.

Jeff


