Hi Pierre!

On 05/02/15 18:49, Pierre Joye wrote:
>
> On Feb 5, 2015 3:17 PM, "Michael Wallner" <m...@php.net
> <mailto:m...@php.net>> wrote:
>>
>> Compare the timings accessing google 20 times sequentially:
>>
>> With the default of raphf.persistent_handle.limit=-1 (unlimited):
>> mike@smugmug:~$ time php -r 'for ($i=0;$i<20;++$i) {(new
>> http\Client("curl","google"))->enqueue(new http\Client\Request("GET",
>> "http://www.google.at/"))->send();}'
>>
>> 0.03s user 0.01s system 2% cpu 1.530 total
>>
>> With raphf effectively disabled:
>> mike@smugmug:~$ time php -d raphf.persistent_handle.limit=0 -r 'for
>> ($i=0;$i<20;++$i) {(new http\Client("curl","google"))->enqueue(new
>> http\Client\Request("GET", "http://www.google.at/"))->send();}'
>>
>> 0.04s user 0.01s system 1% cpu 2.790 total
>
> While I like the idea, I would not take it as is. Many things could
> affect it, and I am not sure the persistent resource is what saves
> the time. Any profiling info with delta?
Does the following kcachegrind screenshot give an idea? (I used a minimum
node cost of 10% to simplify the graph.) On the left raphf is enabled
(24M Ir); on the right raphf is disabled (35M Ir):

http://dev.iworks.at/ext-http/raphf.png

Have a look at the top-most, far-right highlighted block, which is solely
devoted to tearing down curl instances when raphf is disabled.

-- 
Regards,
Mike

-- 
PHP Internals - PHP Runtime Development Mailing List
To unsubscribe, visit: http://www.php.net/unsub.php
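[Editor's illustration] The timing delta above comes from raphf keeping already-initialized curl handles around per identifier ("google") instead of creating and tearing one down for every request. A minimal Python sketch of that pooling idea follows; the class and method names (HandlePool, acquire, release) are invented for this example and are not raphf's actual API, and handle creation is stubbed out rather than calling curl:

```python
# Illustrative sketch of raphf-style persistent handle pooling.
# Names here are made up for the example; raphf itself is C code
# inside PHP and exposes no such Python API.

class HandlePool:
    """Keeps up to `limit` idle handles per identifier for reuse.

    limit=-1 means unlimited (raphf's default); limit=0 effectively
    disables pooling, so every acquire creates a fresh handle.
    """

    def __init__(self, limit=-1):
        self.limit = limit
        self.idle = {}          # identifier -> list of idle handles
        self.created = 0        # counts expensive handle creations

    def acquire(self, ident):
        pool = self.idle.get(ident, [])
        if pool:
            return pool.pop()   # reuse: skips the costly setup
        self.created += 1       # stands in for curl_easy_init() etc.
        return object()

    def release(self, ident, handle):
        pool = self.idle.setdefault(ident, [])
        if self.limit < 0 or len(pool) < self.limit:
            pool.append(handle)  # keep it warm for the next request
        # else: drop it, i.e. tear the handle down immediately


# 20 sequential "requests" against the same identifier, as in the
# benchmark above:
pooled = HandlePool(limit=-1)
for _ in range(20):
    h = pooled.acquire("google")
    pooled.release("google", h)

disabled = HandlePool(limit=0)
for _ in range(20):
    h = disabled.acquire("google")
    disabled.release("google", h)

print(pooled.created, disabled.created)   # 1 vs 20 handle setups
```

With pooling on, only the first iteration pays the setup cost; with limit=0 every one of the 20 iterations does, which mirrors the 1.530 s vs 2.790 s timings in the benchmark.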