Re: [users@httpd] problems benchmarking php-fpm/proxy_fcgi with h2load

2018-02-05 Thread Hajo Locke

Hello Luca,

On 05.02.2018 at 02:27, Luca Toscano wrote:

Hi Hajo,

2018-02-01 3:58 GMT+01:00 Luca Toscano:


Hi Hajo,

2018-01-31 2:37 GMT-08:00 Hajo Locke:

Hello,


On 22.01.2018 at 11:54, Hajo Locke wrote:

Hello,

On 19.01.2018 at 15:48, Luca Toscano wrote:

Hi Hajo,

2018-01-19 13:23 GMT+01:00 Hajo Locke:

Hello,

thanks Daniel and Stefan. This is a good point.
I did the test with a static file, and this test was
done successfully within only a few seconds.

finished in 20.06s, 4984.80 req/s, 1.27GB/s
requests: 10 total, 10 started, 10 done,
10 succeeded, 0 failed, 0 errored, 0 timeout

So the problem seems to be neither h2load nor basic Apache;
maybe I should look deeper into the proxy_fcgi configuration.
The php-fpm configuration is unchanged and was used
successfully with a classical FastCGI benchmark, so I think
I have to double-check the proxy.

Now I made this change in the proxy:

from
enablereuse=on
to
enablereuse=off

This change leads to a working h2load test run:
finished in 51.74s, 1932.87 req/s, 216.05MB/s
requests: 10 total, 10 started, 10 done,
10 succeeded, 0 failed, 0 errored, 0 timeout

I am surprised by that. I expected higher performance
when reusing backend connections rather than creating
new ones.
I did some further tests and changed some other
php-fpm/proxy values, but once "enablereuse=on" is set,
the problem returns.

Should I just run the proxy with enablereuse=off? Or do
you have another suspicion?



Before giving up I'd check two things:

1) That the same results happen with a regular localhost
socket rather than a unix one.

I changed my setup to use TCP sockets in php-fpm and
proxy_fcgi. Currently I see the same behaviour.

2) What changes on the php-fpm side. Are there more busy
workers when enablereuse is set to on? I am wondering how
php-fpm handles FCGI requests happening on the same socket,
as opposed to assuming that 1 connection == 1 FCGI request.
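(For reference, whether a FastCGI connection is meant to carry more than one request is signalled per request: the BEGIN_REQUEST record carries a FCGI_KEEP_CONN flag telling the application whether the web server intends to reuse the connection. A minimal sketch of building such a record by hand — illustrative only, constants taken from the FastCGI spec, not mod_proxy's code:)

```python
# Build a FastCGI BEGIN_REQUEST record to show where FCGI_KEEP_CONN lives:
# if the flag is 0, the application (e.g. php-fpm) is expected to close the
# connection after the request; if 1, the server intends to reuse it.
import struct

FCGI_VERSION_1 = 1
FCGI_BEGIN_REQUEST = 1   # record type
FCGI_RESPONDER = 1       # role
FCGI_KEEP_CONN = 1       # flags, bit 0

def begin_request_record(request_id, keep_conn):
    # Body: role (2 bytes), flags (1 byte), 5 reserved bytes.
    body = struct.pack(">HB5x", FCGI_RESPONDER,
                       FCGI_KEEP_CONN if keep_conn else 0)
    # Header: version, type, requestId, contentLength, paddingLength, reserved.
    header = struct.pack(">BBHHBx", FCGI_VERSION_1, FCGI_BEGIN_REQUEST,
                         request_id, len(body), 0)
    return header + body

rec = begin_request_record(1, keep_conn=True)
print(rec.hex())  # 8-byte header + 8-byte body
```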

If "enablereuse=off" is set, I see a lot of running
php worker processes (120-130) and high load. The behaviour
is as expected.
When "enablereuse=on" is set, I can see a big change: the
number of running php workers is really low (~40). The test
runs for some time and then it gets stuck.
I can see that the php-fpm processes are still active and
waiting for connections, but proxy_fcgi is neither using them
nor establishing new connections. The loadavg is low and the
benchmark test is not able to finish.

I did some further tests to solve this issue. I set ttl=1 for
this proxy and achieved good performance and a high number of
working children. But this is paradoxical:
proxy_fcgi knows enough about an inactive connection to kill
it, but not to re-enable it for work.
Maybe this is helpful to others.

Maybe it is a kind of communication problem when checking the
health/busy status of the php processes.
The whole proxy configuration is this:


<Proxy "fcgi://php70fpm">
    ProxySet enablereuse=off flushpackets=On timeout=3600 max=15000
</Proxy>
<FilesMatch "\.php$">
    SetHandler "proxy:fcgi://php70fpm"
</FilesMatch>




Thanks a lot for following up and reporting these interesting
results! Yann opened a thread[1] on dev@ to discuss the issue;
let's follow up there so we don't keep two conversations open.

Luca

[1]: https://lists.apache.org/thread.html/a9586dab96979bf45550c9714b36c49aa73526183998c5354ca9f1c8@%3Cdev.httpd.apache.org%3E






Re: [users@httpd] problems benchmarking php-fpm/proxy_fcgi with h2load

2018-02-04 Thread Luca Toscano

Re: [users@httpd] problems benchmarking php-fpm/proxy_fcgi with h2load

2018-02-04 Thread Eric Covener

Re: [users@httpd] problems benchmarking php-fpm/proxy_fcgi with h2load

2018-02-04 Thread Luca Toscano
Hi Hajo,

>
Reporting here what I think is happening in your test environment
when enablereuse is set to on. Recap of your settings:

/etc/apache2/conf.d/limits.conf
StartServers          10
MaxClients           500
MinSpareThreads      450
MaxSpareThreads      500
ThreadsPerChild      150
MaxRequestsPerChild    0
ServerLimit          500


<Proxy "fcgi://php70fpm">
    ProxySet enablereuse=on flushpackets=On timeout=3600 max=1500
</Proxy>
<FilesMatch "\.php$">
    SetHandler "proxy:fcgi://php70fpm/"
</FilesMatch>


request_terminate_timeout = 7200
listen = /dev/shm/php70fpm.sock
pm = ondemand
pm.max_children = 500
pm.max_requests = 2000
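As a back-of-the-envelope check of these settings (a rough model, assuming the event/worker MPM where httpd runs about MaxClients / ThreadsPerChild child processes; figures taken from the recap above):

```python
# Rough connection-budget model for the recap above. Assumption: httpd runs
# MaxClients / ThreadsPerChild child processes at full load, and mod_proxy
# keeps one backend connection pool *per child process*.

max_clients = 500
threads_per_child = 150
proxyset_max = 1500        # 'max' from the ProxySet line (per process!)
fpm_max_children = 500     # pm.max_children in php-fpm

httpd_children = -(-max_clients // threads_per_child)   # ceil division
pool_upper_bound = httpd_children * proxyset_max

print(f"httpd child processes (approx.): {httpd_children}")
print(f"pooled backend connections possible: {pool_upper_bound}")
print(f"php-fpm workers available: {fpm_max_children}")

# With enablereuse=on, every pooled idle connection keeps one php-fpm worker
# parked on it, so the 500 workers can be exhausted by idle connections long
# before the proxy's pool limit is reached.
assert pool_upper_bound > fpm_max_children
```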

By default mod_proxy allows a connection pool of ThreadsPerChild
connections to the backend for each httpd process, while in your case
you have set 3200 using the 'max' parameter (as stated in the docs it is a
per-process setting, not an overall one). PHP-FPM handles one connection per
worker at a time, and your settings allow a maximum of 500 processes, and
therefore a maximum of 500 connections established at the same time from
httpd. When connection reuse is set to on, the side effect is that for each
of mod_proxy's open/established connections in the pool there will be one
PHP-FPM worker tied to it, even if it is not serving any request (basically
waiting for one). This can lead to a situation in which all PHP-FPM workers
are "busy", not allowing mod_proxy to create more connections (even if it
is set/allowed to do so), leading to a
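The starvation described above can be sketched with a toy model (illustrative only, not how mod_proxy or php-fpm are actually implemented): each reused connection pins one backend worker even while idle, so once workers == open connections, requests arriving over any further connection stall.

```python
# Toy model of the enablereuse=on stall: php-fpm-style workers serve exactly
# one connection each, and the proxy never closes a connection once opened.

FPM_WORKERS = 5

class Backend:
    def __init__(self, workers):
        self.free_workers = workers
        self.bound = set()            # connections currently holding a worker

    def accept(self, conn_id):
        """A new connection permanently occupies one worker (reuse=on)."""
        if self.free_workers == 0:
            return False              # connection waits forever -> stall
        self.free_workers -= 1
        self.bound.add(conn_id)
        return True

backend = Backend(FPM_WORKERS)
served, stalled = 0, 0
# Eight proxy threads each open (and keep) their own backend connection:
for conn_id in range(8):
    if backend.accept(conn_id):
        served += 1
    else:
        stalled += 1

print(f"connections bound to a worker: {served}")
print(f"connections stuck waiting:     {stalled}")
# With enablereuse=off (or ttl=1) the proxy would close connections between
# requests, freeing workers, and no connection would stall.
```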

Re: [users@httpd] problems benchmarking php-fpm/proxy_fcgi with h2load

2018-01-31 Thread Luca Toscano


Re: [users@httpd] problems benchmarking php-fpm/proxy_fcgi with h2load

2018-01-31 Thread Hajo Locke



Altogether I have collected interesting results. This should be
remarkable for Stefan, because some results are not as expected. I
will show these results in a separate mail, so as not to mix them
up with this proxy problem.

Thanks,
Hajo



Re: [users@httpd] problems benchmarking php-fpm/proxy_fcgi with h2load

2018-01-22 Thread Hajo Locke



Re: [users@httpd] problems benchmarking php-fpm/proxy_fcgi with h2load

2018-01-20 Thread Luca Toscano
2018-01-20 20:23 GMT+01:00 Luca Toscano:

> Hi Yann,
>
> 2018-01-19 17:40 GMT+01:00 Yann Ylavic:
>
>> On Fri, Jan 19, 2018 at 5:14 PM, Yann Ylavic wrote:
> > On Fri, Jan 19, 2018 at 1:46 PM, Daniel wrote:
>> >> I vaguely recall some issue with reuse when using unix socket files so
>> >> it was deliberately set to off by default, but yes, perhaps someone
>> >> experienced enough with mod_proxy_fcgi inner workings can shed some
>> >> light on this and the why yes/not.
>> >>
>> >> With socket files I never tried to enable "enablereuse=on" and got
>> >> much successful results, so perhaps it's safer to keep it off until
>> >> someone clarifies this issue, after all when dealing with unix sockets
>> >> the access delays are quite low.
>> >
>> > {en,dis}ablereuse has no effect on Unix Domain Sockets in mod_proxy,
>> > they are never reused.
>>
>> Well, actually it shouldn't, but while the code clearly doesn't reuse
>> sockets (creates a new one for each request), nothing seems to tell
>> the recycler that it should close them unconditionally at the end of
>> the request.
>>
>
> Would you mind pointing me to the snippet of code that does this? I am
> trying to reproduce the issue and see if there is an fd leak, but I
> didn't manage to so far.
>

I am now able to reproduce with Hajo's settings, and indeed with
enablereuse=on I can see a lot of fds leaked via lsof:

httpd 3230 3481 www-data   93u unix 0x9ada0cf60400  0t0  406770 type=STREAM
httpd 3230 3481 www-data   94u unix 0x9ada0cf60800  0t0  406773 type=STREAM
httpd 3230 3481 www-data   95u unix 0x9ada0cf66400  0t0  406776 type=STREAM
[..]

With Yann's patch I cannot see them anymore, and h2load does not stop at
50%/60% but completes without any issue. I am still not able to understand
why this happens from reading the proxy_util.c code, though :)

Luca
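The leak visible in the lsof output above can also be watched as a raw file-descriptor count. A minimal sketch (Linux-only, counting entries in /proc; socketpair() stands in for the proxy's UDS backend connections — this is an illustration of the diagnostic, not mod_proxy code):

```python
# Spotting an fd leak: open UNIX sockets without closing them and watch this
# process's fd count grow -- the same signal lsof shows for the httpd workers.
import os
import socket

def open_fd_count():
    """Number of open file descriptors for this process (Linux /proc)."""
    return len(os.listdir(f"/proc/{os.getpid()}/fd"))

before = open_fd_count()
leaked = []
for _ in range(10):
    a, b = socket.socketpair()     # stand-in for a UDS backend connection
    leaked.append((a, b))          # "forgot" to close -> leak

after = open_fd_count()
print(f"fds before: {before}, after: {after}")  # grows by 20 (2 per pair)

for a, b in leaked:                # clean up in this demo
    a.close()
    b.close()
```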


Re: [users@httpd] problems benchmarking php-fpm/proxy_fcgi with h2load

2018-01-20 Thread Luca Toscano
Hi Yann,

2018-01-19 17:40 GMT+01:00 Yann Ylavic :

> On Fri, Jan 19, 2018 at 5:14 PM, Yann Ylavic  wrote:
> > On Fri, Jan 19, 2018 at 1:46 PM, Daniel  wrote:
> >> I vaguely recall some issue with reuse when using unix socket files so
> >> it was deliberately set to off by default, but yes, perhaps someone
> >> experienced enough with mod_proxy_fcgi inner workings can shed some
> >> light on this and the why yes/not.
> >>
> >> With socket files I never tried to enable "enablereuse=on" and got
> >> much successful results, so perhaps it's safer to keep it off until
> >> someone clarifies this issue, after all when dealing with unix sockets
> >> the access delays are quite low.
> >
> > {en,dis}ablereuse has no effect on Unix Domain Sockets in mod_proxy,
> > they are never reused.
>
> Well, actually it shouldn't, but while the code clearly doesn't reuse
> sockets (creates a new one for each request), nothing seems to tell
> the recycler that it should close them unconditionally at the end of
> the request.
>

Would you mind to point me to the snippet of code that does this? I am
trying to reproduce the issue and see if there is a fd leak but didn't
manage to so far..

Thanks!

Luca


Re: [users@httpd] problems benchmarking php-fpm/proxy_fcgi with h2load

2018-01-19 Thread Yann Ylavic
> {en,dis}ablereuse has no effect on Unix Domain Sockets in mod_proxy,
> they are never reused.

Well, actually it shouldn't, but while the code clearly doesn't reuse
sockets (creates a new one for each request), nothing seems to tell
the recycler that it should close them unconditionally at the end of
the request.

So there may be a (fd) leak here, which could explain why it does not
work after a while...

I'm thinking of something like this:

Index: modules/proxy/proxy_util.c
===================================================================
--- modules/proxy/proxy_util.c    (revision 1821662)
+++ modules/proxy/proxy_util.c    (working copy)
@@ -2756,6 +2756,8 @@ PROXY_DECLARE(int) ap_proxy_connect_backend(const
 #if APR_HAVE_SYS_UN_H
     if (conn->uds_path)
     {
+        conn->close = 1; /* UDS sockets are not recycled */
+
         rv = apr_socket_create(&newsock, AF_UNIX, SOCK_STREAM, 0,
                                conn->scpool);
         if (rv != APR_SUCCESS) {
@@ -2767,7 +2769,6 @@ PROXY_DECLARE(int) ap_proxy_connect_backend(const
                          worker->s->hostname);
             break;
         }
-        conn->connection = NULL;
 
         rv = ap_proxy_connect_uds(newsock, conn->uds_path, conn->scpool);
         if (rv != APR_SUCCESS) {
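If the suspected fd leak is real, it should be measurable from the outside. A minimal sketch for sampling descriptor counts during a benchmark run (an illustration, not part of the patch; assumes Linux's /proc and Ubuntu's "apache2" process name):

```shell
# Count the open file descriptors of one process via /proc (Linux only).
count_fds() {
    ls "/proc/$1/fd" 2>/dev/null | wc -l
}

# Hypothetical usage while h2load runs: a per-worker count that only ever
# grows with enablereuse=on would support the leak theory.
#   while sleep 1; do
#       for pid in $(pgrep apache2); do
#           echo "pid=$pid fds=$(count_fds "$pid")"
#       done
#   done
```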


Regards,
Yann.

-
To unsubscribe, e-mail: users-unsubscr...@httpd.apache.org
For additional commands, e-mail: users-h...@httpd.apache.org



Re: [users@httpd] problems benchmarking php-fpm/proxy_fcgi with h2load

2018-01-19 Thread Yann Ylavic
On Fri, Jan 19, 2018 at 1:46 PM, Daniel  wrote:
> I vaguely recall some issue with reuse when using unix socket files so
> it was deliberately set to off by default, but yes, perhaps someone
> experienced enough with mod_proxy_fcgi inner workings can shed some
> light on this and the why yes/not.
>
> With socket files I never tried to enable "enablereuse=on" and got
> much successful results, so perhaps it's safer to keep it off until
> someone clarifies this issue, after all when dealing with unix sockets
> the access delays are quite low.

{en,dis}ablereuse has no effect on Unix Domain Sockets in mod_proxy,
they are never reused.

Regards,
Yann.




Re: [users@httpd] problems benchmarking php-fpm/proxy_fcgi with h2load

2018-01-19 Thread Luca Toscano
Hi Hajo,

2018-01-19 13:23 GMT+01:00 Hajo Locke :

> Hello,
>
> thanks Daniel and Stefan. This is a good point.
> I did the test with a static file and this test was successfully done
> within only a few seconds.
>
> finished in 20.06s, 4984.80 req/s, 1.27GB/s
> requests: 100000 total, 100000 started, 100000 done, 100000 succeeded, 0
> failed, 0 errored, 0 timeout
>
> so problem seems to be not h2load and basic apache. may be i should look
> deeper into proxy_fcgi configuration.
> php-fpm configuration is unchanged and was successfully used with
> classical fastcgi-benchmark, so i think i have to doublecheck the proxy.
>
> now i did this change in proxy:
>
> from
> enablereuse=on
> to
> enablereuse=off
>
> this change leads to a working h2load testrun:
> finished in 51.74s, 1932.87 req/s, 216.05MB/s
> requests: 100000 total, 100000 started, 100000 done, 100000 succeeded, 0
> failed, 0 errored, 0 timeout
>
> iam surprised by that. i expected a higher performance when reusing
> backend connections rather then creating new ones.
> I did some further tests and changed some other php-fpm/proxy values, but
> once "enablereuse=on" is set, the problem returns.
>
> Should i just run the proxy with enablereuse=off? Or do you have an other
> suspicion?
>


Before giving up I'd check two things:

1) That the same results happen with a regular localhost socket rather than
a unix one.
2) What changes on the php-fpm side. Are there more busy workers when
enablereuse is set to on? I am wondering how php-fpm handles FCGI requests
happening on the same socket, as opposed to assuming that 1 connection == 1
FCGI request.
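Check (1) could look something like this (a sketch; the pool file path, TCP port, and FilesMatch pattern are illustrative assumptions, not taken from this thread):

```apache
# php-fpm pool (e.g. /etc/php/7.0/fpm/pool.d/www.conf), replacing the
# unix socket with a localhost TCP socket:
#   listen = 127.0.0.1:9000

# Apache side: point mod_proxy_fcgi at the TCP backend, keeping every
# other parameter identical so the socket type is the only variable.
<Proxy "fcgi://127.0.0.1:9000">
    ProxySet enablereuse=on flushpackets=On timeout=3600 max=1500
</Proxy>
<FilesMatch "\.php$">
    SetHandler "proxy:fcgi://127.0.0.1:9000/"
</FilesMatch>
```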

Luca


Re: [users@httpd] problems benchmarking php-fpm/proxy_fcgi with h2load

2018-01-19 Thread Daniel
I vaguely recall some issue with reuse when using unix socket files, so
it was deliberately set to off by default. But yes, perhaps someone
experienced enough with mod_proxy_fcgi's inner workings can shed some
light on this and the reasons why or why not.

With socket files I never tried "enablereuse=on" and still got very good
results, so perhaps it's safer to keep it off until someone clarifies
this issue; after all, when dealing with unix sockets the access delays
are quite low.

2018-01-19 13:30 GMT+01:00 Stefan Eissing :
> Can someone with deeper proxy_(fcgi) knowledge than me jump in here. This 
> goes beyond where my area...
>
>> On 19.01.2018 at 13:23, Hajo Locke wrote:
>>
>> Hello,
>>
>> thanks Daniel and Stefan. This is a good point.
>> I did the test with a static file and this test was successfully done within 
>> only a few seconds.
>>
>> finished in 20.06s, 4984.80 req/s, 1.27GB/s
>> requests: 100000 total, 100000 started, 100000 done, 100000 succeeded, 0
>> failed, 0 errored, 0 timeout
>>
>> so problem seems to be not h2load and basic apache. may be i should look 
>> deeper into proxy_fcgi configuration.
>> php-fpm configuration is unchanged and was successfully used with classical 
>> fastcgi-benchmark, so i think i have to doublecheck the proxy.
>>
>> now i did this change in proxy:
>>
>> from
>> enablereuse=on
>> to
>> enablereuse=off
>>
>> this change leads to a working h2load testrun:
>> finished in 51.74s, 1932.87 req/s, 216.05MB/s
>> requests: 100000 total, 100000 started, 100000 done, 100000 succeeded, 0
>> failed, 0 errored, 0 timeout
>>
>> iam surprised by that. i expected a higher performance when reusing backend 
>> connections rather then creating new ones.
>> I did some further tests and changed some other php-fpm/proxy values, but 
>> once "enablereuse=on" is set, the problem returns.
>>
>> Should i just run the proxy with enablereuse=off? Or do you have an other 
>> suspicion?
>>
>> Thanks,
>> Hajo
>>
>>
>> On 19.01.2018 at 12:45, Daniel wrote:
>>> which are the results exactly and which are the results to a non-php
>>> file such as a gif or similar?
>>>
>>> 2018-01-19 12:38 GMT+01:00 Hajo Locke :
 Hello list,

 i do some http/2 benchmarks on my machine and have problems to finish at
 least one test.

 System is Ubuntu16.04, libnghttp2-14 1.7.1, Apache 2.4.29, mpm_event

 I start h2load with standard-params:

 h2load -n100000 -c100 -m10 https://example.com/phpinfo.php

 first steps are really quick and i can see a progress to 50-70%. but after
 that requests by h2load to server decrease dramatically.
 it seems that h2load ist stopping requests to server, but i dont see any
 reason for that on serverside. i can start a 2nd h2load and this is 
 starting
 furious again, while the first one stucks with no progress, so i can't
 believe there is a serverproblem.

 all serverconfigs are really high, to avoid any kind of bottleneck.

 /etc/apache2/conf.d/limits.conf
 StartServers  10
 MaxClients  500
 MinSpareThreads  450
 MaxSpareThreads  500
 ThreadsPerChild  150
 MaxRequestsPerChild   0
 Serverlimit 500

 my test-vhost just has some default values like servername, docroot etc.
 additional there is the proxy_fcgi config
 
 <Proxy "fcgi://php70fpm/">
 ProxySet enablereuse=on flushpackets=On timeout=3600 max=1500
 </Proxy>
 <FilesMatch "\.php$">
 SetHandler "proxy:fcgi://php70fpm/"
 </FilesMatch>
 

 fpm-config also has high limits to serve every incoming connection:
 request_terminate_timeout = 7200
 security.limit_extensions = no
 listen = /dev/shm/php70fpm.sock
 listen.owner = myuser
 listen.group = mygroup
 listen.mode = 0660
 user = myuser
 group = mygroup
 pm = ondemand
 pm.max_children = 500
 pm.max_requests = 2000
 catch_workers_output = yes

 Currently i have no explanation for this. a really fast start and then
 decreasing to low-activity.  but i cant see that limits are reached or
 processes not respond.
 Possible to have a problem in h2load or a hidden problem in my
 configuration? Is there an other recommend way to do a 
 h2-speedbenchmarking?

 before using proxy_fcgi i used the classical mod_fastcgi with
 fastcgiexternalserver and did not have this kind of problems. with
 mod_fastcgi the test could complete.
 Currently iam stumped and need a hint please.

 Thanks,
 Hajo



>>>
>>>
>>
>>

Re: [users@httpd] problems benchmarking php-fpm/proxy_fcgi with h2load

2018-01-19 Thread Hajo Locke

Hello,

Thanks Daniel and Stefan, this is a good point.
I did the test with a static file, and this test completed successfully
within only a few seconds.


finished in 20.06s, 4984.80 req/s, 1.27GB/s
requests: 100000 total, 100000 started, 100000 done, 100000 succeeded, 0
failed, 0 errored, 0 timeout


So the problem seems to be neither h2load nor basic Apache; maybe I
should look deeper into the proxy_fcgi configuration.
The php-fpm configuration is unchanged and was used successfully with the
classical fastcgi benchmark, so I think I have to double-check the proxy.


Now I made this change in the proxy:

from
enablereuse=on
to
enablereuse=off

This change leads to a working h2load test run:
finished in 51.74s, 1932.87 req/s, 216.05MB/s
requests: 100000 total, 100000 started, 100000 done, 100000 succeeded, 0
failed, 0 errored, 0 timeout


I am surprised by that; I expected higher performance when reusing
backend connections rather than creating new ones.
I did some further tests and changed some other php-fpm/proxy values,
but once "enablereuse=on" is set, the problem returns.


Should I just run the proxy with enablereuse=off? Or do you have
another suspicion?
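One way to see directly what enablereuse changes is to count the established connections on the pool socket. A sketch (assumes Linux's `ss` utility from iproute2 and the /dev/shm/php70fpm.sock path from the config in this thread):

```shell
# Count entries in the kernel's unix-socket table that mention the given
# path; with enablereuse=on, kept-alive proxy connections should linger
# here between requests. "|| true" keeps a zero count from aborting
# scripts running under "set -e".
count_sock_conns() {
    ss -x 2>/dev/null | grep -c -- "$1" || true
}

# e.g.: count_sock_conns /dev/shm/php70fpm.sock
```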


Thanks,
Hajo


On 19.01.2018 at 12:45, Daniel wrote:

which are the results exactly and which are the results to a non-php
file such as a gif or similar?

2018-01-19 12:38 GMT+01:00 Hajo Locke :

Hello list,

i do some http/2 benchmarks on my machine and have problems to finish at
least one test.

System is Ubuntu16.04, libnghttp2-14 1.7.1, Apache 2.4.29, mpm_event

I start h2load with standard-params:

h2load -n100000 -c100 -m10 https://example.com/phpinfo.php

first steps are really quick and i can see a progress to 50-70%. but after
that requests by h2load to server decrease dramatically.
it seems that h2load ist stopping requests to server, but i dont see any
reason for that on serverside. i can start a 2nd h2load and this is starting
furious again, while the first one stucks with no progress, so i can't
believe there is a serverproblem.

all serverconfigs are really high, to avoid any kind of bottleneck.

/etc/apache2/conf.d/limits.conf
StartServers  10
MaxClients  500
MinSpareThreads  450
MaxSpareThreads  500
ThreadsPerChild  150
MaxRequestsPerChild   0
Serverlimit 500

my test-vhost just has some default values like servername, docroot etc.
additional there is the proxy_fcgi config

<Proxy "fcgi://php70fpm/">
    ProxySet enablereuse=on flushpackets=On timeout=3600 max=1500
</Proxy>
<FilesMatch "\.php$">
    SetHandler "proxy:fcgi://php70fpm/"
</FilesMatch>


fpm-config also has high limits to serve every incoming connection:
request_terminate_timeout = 7200
security.limit_extensions = no
listen = /dev/shm/php70fpm.sock
listen.owner = myuser
listen.group = mygroup
listen.mode = 0660
user = myuser
group = mygroup
pm = ondemand
pm.max_children = 500
pm.max_requests = 2000
catch_workers_output = yes

Currently i have no explanation for this. a really fast start and then
decreasing to low-activity.  but i cant see that limits are reached or
processes not respond.
Possible to have a problem in h2load or a hidden problem in my
configuration? Is there an other recommend way to do a h2-speedbenchmarking?

before using proxy_fcgi i used the classical mod_fastcgi with
fastcgiexternalserver and did not have this kind of problems. with
mod_fastcgi the test could complete.
Currently iam stumped and need a hint please.

Thanks,
Hajo












Re: [users@httpd] problems benchmarking php-fpm/proxy_fcgi with h2load

2018-01-19 Thread Stefan Eissing
Can someone with deeper proxy_(fcgi) knowledge than me jump in here? This
goes beyond my area...

> On 19.01.2018 at 13:23, Hajo Locke wrote:
> 
> Hello,
> 
> thanks Daniel and Stefan. This is a good point.
> I did the test with a static file and this test was successfully done within 
> only a few seconds.
> 
> finished in 20.06s, 4984.80 req/s, 1.27GB/s
> requests: 100000 total, 100000 started, 100000 done, 100000 succeeded, 0
> failed, 0 errored, 0 timeout
> 
> so problem seems to be not h2load and basic apache. may be i should look 
> deeper into proxy_fcgi configuration.
> php-fpm configuration is unchanged and was successfully used with classical 
> fastcgi-benchmark, so i think i have to doublecheck the proxy.
> 
> now i did this change in proxy:
> 
> from
> enablereuse=on
> to
> enablereuse=off
> 
> this change leads to a working h2load testrun:
> finished in 51.74s, 1932.87 req/s, 216.05MB/s
> requests: 100000 total, 100000 started, 100000 done, 100000 succeeded, 0
> failed, 0 errored, 0 timeout
> 
> iam surprised by that. i expected a higher performance when reusing backend 
> connections rather then creating new ones.
> I did some further tests and changed some other php-fpm/proxy values, but 
> once "enablereuse=on" is set, the problem returns.
> 
> Should i just run the proxy with enablereuse=off? Or do you have an other 
> suspicion?
> 
> Thanks,
> Hajo
> 
> 
> On 19.01.2018 at 12:45, Daniel wrote:
>> which are the results exactly and which are the results to a non-php
>> file such as a gif or similar?
>> 
>> 2018-01-19 12:38 GMT+01:00 Hajo Locke :
>>> Hello list,
>>> 
>>> i do some http/2 benchmarks on my machine and have problems to finish at
>>> least one test.
>>> 
>>> System is Ubuntu16.04, libnghttp2-14 1.7.1, Apache 2.4.29, mpm_event
>>> 
>>> I start h2load with standard-params:
>>> 
>>> h2load -n100000 -c100 -m10 https://example.com/phpinfo.php
>>> 
>>> first steps are really quick and i can see a progress to 50-70%. but after
>>> that requests by h2load to server decrease dramatically.
>>> it seems that h2load ist stopping requests to server, but i dont see any
>>> reason for that on serverside. i can start a 2nd h2load and this is starting
>>> furious again, while the first one stucks with no progress, so i can't
>>> believe there is a serverproblem.
>>> 
>>> all serverconfigs are really high, to avoid any kind of bottleneck.
>>> 
>>> /etc/apache2/conf.d/limits.conf
>>> StartServers  10
>>> MaxClients  500
>>> MinSpareThreads  450
>>> MaxSpareThreads  500
>>> ThreadsPerChild  150
>>> MaxRequestsPerChild   0
>>> Serverlimit 500
>>> 
>>> my test-vhost just has some default values like servername, docroot etc.
>>> additional there is the proxy_fcgi config
>>> <Proxy "fcgi://php70fpm/">
>>> ProxySet enablereuse=on flushpackets=On timeout=3600 max=1500
>>> </Proxy>
>>> <FilesMatch "\.php$">
>>> SetHandler "proxy:fcgi://php70fpm/"
>>> </FilesMatch>
>>> 
>>> fpm-config also has high limits to serve every incoming connection:
>>> request_terminate_timeout = 7200
>>> security.limit_extensions = no
>>> listen = /dev/shm/php70fpm.sock
>>> listen.owner = myuser
>>> listen.group = mygroup
>>> listen.mode = 0660
>>> user = myuser
>>> group = mygroup
>>> pm = ondemand
>>> pm.max_children = 500
>>> pm.max_requests = 2000
>>> catch_workers_output = yes
>>> 
>>> Currently i have no explanation for this. a really fast start and then
>>> decreasing to low-activity.  but i cant see that limits are reached or
>>> processes not respond.
>>> Possible to have a problem in h2load or a hidden problem in my
>>> configuration? Is there an other recommend way to do a h2-speedbenchmarking?
>>> 
>>> before using proxy_fcgi i used the classical mod_fastcgi with
>>> fastcgiexternalserver and did not have this kind of problems. with
>>> mod_fastcgi the test could complete.
>>> Currently iam stumped and need a hint please.
>>> 
>>> Thanks,
>>> Hajo
>>> 
>>> 
>>> 
>> 
>> 
> 
> 
> 





Re: [users@httpd] problems benchmarking php-fpm/proxy_fcgi with h2load

2018-01-19 Thread Stefan Eissing
Hej Hajo,

do you have the same effect with fewer connections? e.g.

> h2load -n100000 -c10 -m10 https://example.com/phpinfo.php

and, as Daniel just wrote, do you have similar problems when serving static 
files?

(just to track down where to look)

-Stefan

> On 19.01.2018 at 12:38, Hajo Locke wrote:
> 
> Hello list,
> 
> i do some http/2 benchmarks on my machine and have problems to finish at 
> least one test.
> 
> System is Ubuntu16.04, libnghttp2-14 1.7.1, Apache 2.4.29, mpm_event
> 
> I start h2load with standard-params:
> 
> h2load -n100000 -c100 -m10 https://example.com/phpinfo.php
> 
> first steps are really quick and i can see a progress to 50-70%. but after 
> that requests by h2load to server decrease dramatically.
> it seems that h2load ist stopping requests to server, but i dont see any 
> reason for that on serverside. i can start a 2nd h2load and this is starting 
> furious again, while the first one stucks with no progress, so i can't 
> believe there is a serverproblem.
> 
> all serverconfigs are really high, to avoid any kind of bottleneck.
> 
> /etc/apache2/conf.d/limits.conf
> StartServers  10
> MaxClients  500
> MinSpareThreads  450
> MaxSpareThreads  500
> ThreadsPerChild  150
> MaxRequestsPerChild   0
> Serverlimit 500
> 
> my test-vhost just has some default values like servername, docroot etc. 
> additional there is the proxy_fcgi config
> <Proxy "fcgi://php70fpm/">
> ProxySet enablereuse=on flushpackets=On timeout=3600 max=1500
> </Proxy>
> <FilesMatch "\.php$">
> SetHandler "proxy:fcgi://php70fpm/"
> </FilesMatch>
> 
> fpm-config also has high limits to serve every incoming connection:
> request_terminate_timeout = 7200
> security.limit_extensions = no
> listen = /dev/shm/php70fpm.sock
> listen.owner = myuser
> listen.group = mygroup
> listen.mode = 0660
> user = myuser
> group = mygroup
> pm = ondemand
> pm.max_children = 500
> pm.max_requests = 2000
> catch_workers_output = yes
> 
> Currently i have no explanation for this. a really fast start and then 
> decreasing to low-activity.  but i cant see that limits are reached or 
> processes not respond.
> Possible to have a problem in h2load or a hidden problem in my configuration? 
> Is there an other recommend way to do a h2-speedbenchmarking?
> 
> before using proxy_fcgi i used the classical mod_fastcgi with 
> fastcgiexternalserver and did not have this kind of problems. with 
> mod_fastcgi the test could complete.
> Currently iam stumped and need a hint please.
> 
> Thanks,
> Hajo
> 
> 
> 





Re: [users@httpd] problems benchmarking php-fpm/proxy_fcgi with h2load

2018-01-19 Thread Daniel
Which are the results exactly, and what are the results for a non-PHP
file such as a gif or similar?

2018-01-19 12:38 GMT+01:00 Hajo Locke :
> Hello list,
>
> i do some http/2 benchmarks on my machine and have problems to finish at
> least one test.
>
> System is Ubuntu16.04, libnghttp2-14 1.7.1, Apache 2.4.29, mpm_event
>
> I start h2load with standard-params:
>
> h2load -n100000 -c100 -m10 https://example.com/phpinfo.php
>
> first steps are really quick and i can see a progress to 50-70%. but after
> that requests by h2load to server decrease dramatically.
> it seems that h2load ist stopping requests to server, but i dont see any
> reason for that on serverside. i can start a 2nd h2load and this is starting
> furious again, while the first one stucks with no progress, so i can't
> believe there is a serverproblem.
>
> all serverconfigs are really high, to avoid any kind of bottleneck.
>
> /etc/apache2/conf.d/limits.conf
> StartServers  10
> MaxClients  500
> MinSpareThreads  450
> MaxSpareThreads  500
> ThreadsPerChild  150
> MaxRequestsPerChild   0
> Serverlimit 500
>
> my test-vhost just has some default values like servername, docroot etc.
> additional there is the proxy_fcgi config
> <Proxy "fcgi://php70fpm/">
> ProxySet enablereuse=on flushpackets=On timeout=3600 max=1500
> </Proxy>
> <FilesMatch "\.php$">
> SetHandler "proxy:fcgi://php70fpm/"
> </FilesMatch>
>
> fpm-config also has high limits to serve every incoming connection:
> request_terminate_timeout = 7200
> security.limit_extensions = no
> listen = /dev/shm/php70fpm.sock
> listen.owner = myuser
> listen.group = mygroup
> listen.mode = 0660
> user = myuser
> group = mygroup
> pm = ondemand
> pm.max_children = 500
> pm.max_requests = 2000
> catch_workers_output = yes
>
> Currently i have no explanation for this. a really fast start and then
> decreasing to low-activity.  but i cant see that limits are reached or
> processes not respond.
> Possible to have a problem in h2load or a hidden problem in my
> configuration? Is there an other recommend way to do a h2-speedbenchmarking?
>
> before using proxy_fcgi i used the classical mod_fastcgi with
> fastcgiexternalserver and did not have this kind of problems. with
> mod_fastcgi the test could complete.
> Currently iam stumped and need a hint please.
>
> Thanks,
> Hajo
>
>
>



-- 
Daniel Ferradal
IT Specialist

email dferradal at gmail.com
linkedin es.linkedin.com/in/danielferradal




[users@httpd] problems benchmarking php-fpm/proxy_fcgi with h2load

2018-01-19 Thread Hajo Locke

Hello list,

I am doing some HTTP/2 benchmarks on my machine and have problems
finishing even one test.


The system is Ubuntu 16.04, libnghttp2-14 1.7.1, Apache 2.4.29, mpm_event.

I start h2load with standard-params:

h2load -n100000 -c100 -m10 https://example.com/phpinfo.php

The first steps are really quick and I can see progress to 50-70%, but
after that the requests h2load makes to the server decrease dramatically.
It seems that h2load is stopping requests to the server, but I don't see
any reason for that on the server side. I can start a second h2load and
it starts off furiously again while the first one is stuck with no
progress, so I can't believe there is a server problem.


All server limits are set really high, to avoid any kind of bottleneck.

/etc/apache2/conf.d/limits.conf
StartServers  10
MaxClients  500
MinSpareThreads  450
MaxSpareThreads  500
ThreadsPerChild  150
MaxRequestsPerChild   0
Serverlimit 500

My test vhost just has some default values like servername, docroot, etc.
In addition there is the proxy_fcgi config:


<Proxy "fcgi://php70fpm/">
    ProxySet enablereuse=on flushpackets=On timeout=3600 max=1500
</Proxy>
<FilesMatch "\.php$">
    SetHandler "proxy:fcgi://php70fpm/"
</FilesMatch>


The fpm config also has high limits, to serve every incoming connection:
request_terminate_timeout = 7200
security.limit_extensions = no
listen = /dev/shm/php70fpm.sock
listen.owner = myuser
listen.group = mygroup
listen.mode = 0660
user = myuser
group = mygroup
pm = ondemand
pm.max_children = 500
pm.max_requests = 2000
catch_workers_output = yes
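With pm = ondemand it is worth confirming that php-fpm really scales up toward pm.max_children during the run. A sketch (assumes the worker processes match the name "php-fpm", as with Ubuntu's packages):

```shell
# Count running php-fpm processes; prints 0 when pgrep finds nothing or
# is unavailable ("|| true" guards the substitution under "set -e").
count_fpm_children() {
    n=$(pgrep -c -f "php-fpm" 2>/dev/null || true)
    echo "${n:-0}"
}

# e.g. sample once per second during the benchmark:
#   while sleep 1; do count_fpm_children; done
```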

Currently I have no explanation for this: a really fast start and then a
decrease to low activity, yet I can't see that any limits are reached or
that processes stop responding.
Could this be a problem in h2load, or a hidden problem in my
configuration? Is there another recommended way to do HTTP/2 speed
benchmarking?


Before using proxy_fcgi I used the classical mod_fastcgi with
FastCGIExternalServer and did not have this kind of problem; with
mod_fastcgi the test could complete.

Currently I am stumped and need a hint, please.

Thanks,
Hajo

