On Thu, Apr 15, 2010 at 12:09 AM, Ferenc Kovacs <tyr...@gmail.com> wrote:
> My suggestion is more about releasing the allocated memory as soon as
> possible. That is, this option is similar to "max_requests".
> PHP-FPM will kill a PHP process if the number of requests it has handled
> exceeds max_requests, and similarly, PHP-FPM should kill a PHP
> process whose memory usage exceeds "exit_on_memory_exceeds".
>
> So if one of your libs (for example imagick) leaks memory, in the long run it
> will exhaust the memory limit and kill an otherwise valid request.
> You can set how many requests should be served by one worker, but you
> can't soft-limit its memory consumption.
> This is what the patch does:
> if you set the hard limit (memory_limit), you can guarantee that no process
> will use more memory, because if it tries, it will fail.
> And you can set a soft limit; if that is reached, the process will die and
> respawn after finishing the current request.

Sounds like you more or less want "request_terminate_timeout"-type
functionality, but based on memory. Since set_time_limit() and other things
in PHP don't seem to force-kill the process, PHP-FPM forcefully terminates
the process based on actual wall-clock seconds. I'm thinking you're hoping
the same thing could be possible, but for memory limits per process?

I would say that could be cool, either per pool or per child somehow. No
clue whether it's possible, but it would be a great way to limit usage
*forcefully* - helpful on lower-resource machines (like a VPS...).

--
PHP Internals - PHP Runtime Development Mailing List
To unsubscribe, visit: http://www.php.net/unsub.php
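P.S. For concreteness, a sketch of how the hard/soft limit distinction discussed above might look in a pool config. The first three directives are real PHP-FPM/php.ini settings; "exit_on_memory_exceeds" is the *proposed* directive from this thread and does not actually exist:

```ini
[www]
; Recycle a worker after it has served this many requests
; (the existing blunt mitigation for slow leaks, e.g. from imagick)
pm.max_requests = 500

; Hard wall-clock limit: PHP-FPM force-kills the worker
; if a single request runs longer than this
request_terminate_timeout = 30s

; Hard memory limit: the request itself dies with a fatal error
; the moment it tries to allocate past this
php_admin_value[memory_limit] = 128M

; HYPOTHETICAL soft limit as proposed in this thread -- not a real
; directive: once the worker's usage crosses this threshold, it would
; finish the current request, then exit and be respawned
;exit_on_memory_exceeds = 64M
```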