On Thu, October 26, 2006 8:28 pm, [EMAIL PROTECTED] wrote:
> Richard Lynch wrote:
>> On Wed, October 25, 2006 11:58 am, [EMAIL PROTECTED] wrote:
>>> Are included files ever unloaded? For instance if I had 3 include
>>> files
>>> and no loops, once execution had passed from the first include file
>>> to
>>> the second, the engine might be able to unload the first file. Or
>>> at
>>> least the code, if not the data.
>>>
>>
>> I doubt that the code is unloaded -- What if you called a function
>> from the first file while you were in the second?
>>
> I agree it's unlikely, but it's feasible if code is loaded whenever
> required -- especially if data and code are separated by the engine,
> which is quite likely because of the garbage collection.

It's really not that practical to do that separation and unload code
objects, when the user could call any function at any time.

Particularly with variable functions, and with
http://php.net/eval able to define new functions at runtime:

// The function to call isn't known until runtime:
$function = 'myprint';
function myprint ($foo) { print $foo; }
$function('Hello World'); // prints "Hello World"

> Thanks - that's really useful - I didn't realise that the bulk of the
> saving wasn't in tokenising.

It is a very common misconception.

I think MOST people actually get this "wrong" on their first encounter
with a code cache.

>> Yes, without a cache, each HTTP request will load a "different"
>> script.
>>
> Do you know if, when a cache is used, whether requests in the same
> thread use the same in-memory object. I.e. Is the script persistent in
> the thread?

Almost for sure, the in-memory tokenized version is not only shared
within a thread, but across all threads.

Otherwise, your cache would be loading hundreds of copies of each
script for all the Apache children.

The dispatcher may "copy" the tokenized script in order to run it with
a clean slate, but the "source" it uses is probably shared RAM.

At least, so I presume...
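For what it's worth, the common caches of the day (APC, eAccelerator)
keep the compiled opcodes in a shared-memory segment that all the
Apache children map. As a rough sketch, a php.ini fragment for APC
might look like this (the directive names are APC's own; the values
are illustrative, not recommendations):

```ini
extension=apc.so        ; load the APC opcode cache
apc.enabled=1           ; turn caching on
apc.shm_segments=1      ; one shared-memory segment...
apc.shm_size=30         ; ...of 30MB, shared by every Apache child
```

With that in place each script is compiled once into the shared
segment, and the per-request work is (roughly) copying and executing
the opcodes, not re-parsing the source.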

>>> Fifthly, if a script takes 4MB, given point 4, does the webserver
>>> demand
>>> 8MB if it is simultaneously servicing 2 requests?
>>>
>>
>> If you have a PHP script that is 4M in length, you've done something
>> horribly wrong. :-)
>>
> Sort of. I'm using Drupal with lots of modules loaded. PHP
> memory_limit is set to 20MB, and at times all 20MB is used. I believe
> that limit applies per request -- all the evidence points to that. So
> with 10 concurrent requests, which is not unrealistic, it could use
> 200MB + webserver overhead. And I still want to combine it with
> another bit of software that will use 10 to 15MB per request. It's
> time to think about memory usage and whether there are any strategies
> to decouple memory usage from request rate.
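As a quick sanity check on the figures quoted above, the back-of-envelope
worst case works out like this (illustrative numbers, not measurements;
15MB is taken as the upper per-request cost of the other software):

```php
<?php
// Back-of-envelope worst case for resident memory under load:
$memory_limit_mb = 20;  // php.ini memory_limit, enforced per request
$other_mb        = 15;  // hypothetical per-request cost of the extra software
$concurrent      = 10;  // simultaneous requests being serviced

$worst_case_mb = $concurrent * ($memory_limit_mb + $other_mb);
echo "Worst case: {$worst_case_mb} MB, plus webserver overhead\n"; // 350 MB
```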

Almost for sure, that 20MB is never actually about the Drupal code
itself.

Loading in and manipulating an IMAGE using GD, or sucking down a
monster result set from a database, or slurping down a bloated
web-page for web-scraping and converting that to XML nodes and ...

It's not the CODE that is chewing up the bulk of your 20MB.  It's the
data.
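You can watch this happen on your own box with memory_get_usage()
(available if your PHP was built with memory accounting enabled; older
builds needed --enable-memory-limit). A minimal sketch, simulating a
big result set rather than any real Drupal query:

```php
<?php
// Demonstration that data, not code, dominates memory use.
// memory_get_usage() reports the engine's current allocation in bytes.
$before = memory_get_usage();

$rows = array();                // simulate a monster result set
for ($i = 0; $i < 50000; $i++) {
    $rows[] = array('id' => $i, 'name' => "row number $i");
}

$after = memory_get_usage();
printf("~%.1f MB for the data alone\n", ($after - $before) / 1048576);
```

The few KB of script above barely registers; the arrays it builds are
what move the needle.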

I may rant about OOP code-bloat, but I don't think it's THAT bad :-)

-- 
Some people have a "gift" link here.
Know what I want?
I want you to buy a CD from some starving artist.
http://cdbaby.com/browse/from/lynch
Yeah, I get a buck. So?

-- 
PHP General Mailing List (http://www.php.net/)
To unsubscribe, visit: http://www.php.net/unsub.php