Re: [Dev] [IS] EC2 Performance Analysis : Sudden TPS drop in User Add in 500 concurrency with 10million users

2016-07-31 Thread Bhathiya Jayasekara
Hi Ishara,

On Sun, Jul 31, 2016 at 1:04 PM, Ishara Karunarathna 
wrote:

>
> With a 2s warm-up and then a 10s ramp-up period
>

Earlier we had a ramp-up period of 60s. Not sure whether that has any effect here;
just mentioning it.

Thanks,
Bhathiya



 Did you look at the behaviour of latency?

 Thanks

 Malith


 On Fri, Jul 29, 2016 at 2:57 PM, Indunil Upeksha Rathnayake <
 indu...@wso2.com> wrote:

> Hi,
>
> We are currently engaged in a performance analysis where we are
> analyzing the performance of the User Add, Update, and Authentication operations. The
> testing has been carried out in the following environment with 500
> concurrency and up to 10 million users.
>
> *Environment :*
>
> m3.2xlarge (8 cores, 30 GB RAM, 2 x 80 GB SSD), 3 instances.
> MySQL 5.7
> Ubuntu 14.04
> Openldap-2.4.31
> IS 5.1.0
>
> In order to optimize the MySQL server, the following server parameters
> have been tuned. We have referred to the MySQL documentation [1] and
> performed analysis using several MySQL tuners [2].
>
> (1) *max_connections : 1000* (The maximum permitted number of
> simultaneous client connections.)
>
> (2) *join_buffer_size : 259968* (The minimum size of the buffer that
> is used for plain index scans, range index scans, and joins that do not use
> indexes and thus perform full table scans.)
>
> (3) *innodb_buffer_pool_size : 5207959552* (size of the
> memory area where InnoDB caches table and index data)
>
> (4) *innodb_log_buffer_size : 16777216* (size of the buffer for
> transactions that have not been committed yet)
>
> (5) *innodb_buffer_pool_instances : 1* (The number of buffer pool
> instances. According to the MySQL documentation [1], on systems with a large
> amount of memory, we can improve concurrency by dividing the buffer pool
> into multiple buffer pool instances. But we couldn't change this since it's a
> read-only variable)
>
> (6) *key_buffer_size : 38400* (size of the buffer used for index
> blocks)
>
> (7) *table_open_cache : 4000* (The number of open tables for all
> threads)
>
> (8) *sort_buffer_size : 400* (Each session that must perform a
> sort allocates a buffer of this size)
>
> (9) *read_buffer_size : 100* (Each thread that does a sequential
> scan for a table allocates a buffer of this size for each table it scans.
> If we do many sequential scans, we might want to increase this value)
>
> (10) *query_cache_type : 0 *
>
> (11) *query_cache_limit : 1048576* (Do not cache results that are
> larger than this number of bytes)
>
> (12) *query_cache_size : 1048576* (The amount of memory allocated for
> caching query results)
>
> (13) *thread_stack : 262144* (The stack size for each thread)
>
> (14) *net_buffer_length : 16384* (Each client thread is associated
> with a connection buffer and result buffer. Both begin with a size given by
> net_buffer_length but are dynamically enlarged up to max_allowed_packet
> bytes as needed)
>
> (15) *max_allowed_packet : 4194304* (The maximum size of one packet
> or any generated/intermediate string)
>
> (16) *thread_cache_size : 30* (number of threads the server should cache
> for reuse)
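
For reference, a minimal sketch of how the values above sit in the MySQL option
file. Values are reproduced exactly as listed; a few of them (e.g. sort_buffer_size,
read_buffer_size) look truncated in this message and MySQL would clamp them up to
its minimums. The file location assumes a stock Ubuntu install:

    # /etc/mysql/my.cnf (excerpt; path assumed for Ubuntu 14.04)
    [mysqld]
    max_connections              = 1000
    join_buffer_size             = 259968
    innodb_buffer_pool_size      = 5207959552
    innodb_log_buffer_size       = 16777216
    # settable only at server startup, not at runtime
    innodb_buffer_pool_instances = 1
    key_buffer_size              = 38400
    table_open_cache             = 4000
    sort_buffer_size             = 400
    read_buffer_size             = 100
    query_cache_type             = 0
    query_cache_limit            = 1048576
    query_cache_size             = 1048576
    thread_stack                 = 262144
    net_buffer_length            = 16384
    max_allowed_packet           = 4194304
    thread_cache_size            = 30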
>
>
>
> IS has been configured as follows to optimize performance.
>
> (1) JVM Heap Settings (-Xms -Xmx) changed as follows:
>
> *Xms : 2g *
>
> *Xmx : 2g *
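
These flags normally live in the product startup script; a sketch of the change,
assuming the stock script and its usual default heap values (the defaults shown
are an assumption):

    # bin/wso2server.sh (excerpt) - JVM heap flags passed to the java command
    # before (typical default): -Xms256m -Xmx1024m
    # after:                    -Xms2g   -Xmx2g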
>
> (2) Removed the following entry from
> /repository/conf/tomcat/catalina-server.xml to disable HTTP access
> logs.
>
>  directory="${carbon.home}/repository/logs" prefix="http_access_"
> suffix=".log" pattern="combined" />
>
> (3) Tuned the following parameters in the axis2client.xml file.
>
> 1000
>
> 3
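
The parameter element names have not survived in this message; purely as an
illustration (the names below are assumed from common Axis2 HTTP client tuning,
may not be the ones actually changed, and the values shown are illustrative rather
than the figures above), such entries take the form:

    <!-- axis2client.xml (illustrative only; assumed parameter names/values) -->
    <parameter name="defaultMaxConnPerHost">1000</parameter>
    <parameter name="maxTotalConnections">30000</parameter>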
>
> (4) Added the following additional parameters to optimize the database
> connection pool.
>
> 6
>
> 600
>
> 20
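
These pool settings would normally sit inside the datasource definition in
repository/conf/datasources/master-datasources.xml. The parameter names matching
the three values above are not recoverable here, so the block below is only an
illustrative sketch using commonly tuned pool options (all names and values are
assumptions):

    <datasource>
        <name>WSO2_UM_DB</name>
        <definition type="RDBMS">
            <configuration>
                <url>jdbc:mysql://db-host:3306/umdb</url>
                <username>wso2user</username>
                <password>xxxx</password>
                <driverClassName>com.mysql.jdbc.Driver</driverClassName>
                <!-- commonly tuned Tomcat JDBC pool options (illustrative) -->
                <maxActive>50</maxActive>
                <maxWait>60000</maxWait>
                <testOnBorrow>true</testOnBorrow>
                <validationQuery>SELECT 1</validationQuery>
                <validationInterval>30000</validationInterval>
            </configuration>
        </definition>
    </datasource>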
>
> (5) Tuned the following Tomcat parameters in
> /repository/conf/tomcat/catalina-server.xml.
>
> *acceptorThreadCount = 8 *
>
> *maxThreads="750" *
>
> *minSpareThreads="150" *
>
> *maxKeepAliveRequests="600" *
>
> *acceptCount="600"*
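
A sketch of how these attributes sit on the HTTPS NIO connector in
catalina-server.xml (only the tuned attributes are shown; the remaining stock
attributes of the connector are omitted):

    <Connector protocol="org.apache.coyote.http11.Http11NioProtocol" port="9443"
               acceptorThreadCount="8" maxThreads="750" minSpareThreads="150"
               maxKeepAliveRequests="600" acceptCount="600" />
    <!-- remaining stock attributes (scheme, keystore settings, etc.) omitted -->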
>
>
>
> JMeter has been configured as follows to optimize performance.
>
> (1) JVM Heap Settings (-Xms -Xmx) changed as follows:
>
> *Xms : 1g *
>
> *Xmx : 1g *
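
In the JMeter startup script the heap is set through the HEAP variable; a sketch
(the default shown is from a stock 2.x/3.x install and may differ):

    # apache-jmeter/bin/jmeter (excerpt)
    # default: HEAP="-Xms512m -Xmx512m"
    HEAP="-Xms1g -Xmx1g"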
>
>
> We were able to optimize the environment up to some level. But *currently
> the TPS is dropping from the initial TPS of 1139.5/s to 198.1/s at around a
> user count of 610 (User Add).*
>
> Appreciate your help on figuring out whether we need to do any
> modifications to the optimizations in the MySQL, IS and JMeter servers, or to
> identify the exact cause of this sudden TPS drop.
>
> [1] http://dev.mysql.com/doc/refman/5.7/en/optimizing-server.html
>
> [2] http://www.askapache.com/mysql/mysql-performance-tuning.html
>
> Thanks and Regards,
> Indunil Upeksha Rathnayake
> Software Engineer | WSO2 Inc

Re: [Dev] [IS] EC2 Performance Analysis : Sudden TPS drop in User Add in 500 concurrency with 10million users

2016-07-31 Thread Ishara Karunarathna
Hi Prabath,

Maduranga is running the same test on the OpenStack setup; once it's completed,
we will compare all the results.

BR,
Ishara

On Sun, Jul 31, 2016 at 10:14 PM, Prabath Siriwardana 
wrote:

> Can you please compare the results you are getting now with the results we
> got a week before in the same setup...? I guess we could get  ~1200 tps
> with 500 concurrency for 1M users, without any drop in the tps...?
>
> Thanks & regards,
> -Prabath

Re: [Dev] [IS] EC2 Performance Analysis : Sudden TPS drop in User Add in 500 concurrency with 10million users

2016-07-31 Thread Prabath Siriwardana
Can you please compare the results you are getting now with the results we
got a week before in the same setup...? I guess we could get  ~1200 tps
with 500 concurrency for 1M users, without any drop in the tps...?

Thanks & regards,
-Prabath


Re: [Dev] [IS] EC2 Performance Analysis : Sudden TPS drop in User Add in 500 concurrency with 10million users

2016-07-31 Thread Ishara Karunarathna
On Sun, Jul 31, 2016 at 12:59 PM, Malith Jayasinghe 
wrote:

>
>
> On Sun, Jul 31, 2016 at 12:49 PM, Ishara Karunarathna 
> wrote:
>
>> Hi Malith,
>>
>> On Sun, Jul 31, 2016 at 12:37 PM, Malith Jayasinghe 
>> wrote:
>>
>>> Hi Indunil,
>>> Just a few questions regarding this performance test you have done:
>>>
>>> What is the reason for selecting the concurrency = 500 here?
>>>
>> This is the concurrency level expected by the user. That's the reason we
>> used it.
>>
>>>
>>> Have you tested the behaviour for lower concurrency levels?
>>>
>>> *"currently the TPS is dropping from the initial TPS 1139.5/s to 198.1/s
>>> in around 610 user count.(User Add)" - *How did you notice/measure
>>> this drop in TPS? Did you analyze the jmeter results offline? After it
>>> drops, does it improve after some time or does it stay the same?
>>>
>> We tested this with the JMeter summary report.
>> With the latest results, if we start again, for the first few minutes (~2 min)
>> we get the max TPS, and then it comes down to around 250 TPS.
>>
>
> OK, so it comes down to 250 TPS and stays there? Are you running these
> tests without a warm-up period?
>
Nope.

With a 2s warm-up and then a 10s ramp-up period


Re: [Dev] [IS] EC2 Performance Analysis : Sudden TPS drop in User Add in 500 concurrency with 10million users

2016-07-31 Thread Malith Jayasinghe
On Sun, Jul 31, 2016 at 12:49 PM, Ishara Karunarathna 
wrote:

> Hi Malith,
>
> On Sun, Jul 31, 2016 at 12:37 PM, Malith Jayasinghe 
> wrote:
>
>> Hi Indunil,
>> Just a few questions regarding this performance test you have done:
>>
>> What is the reason for selecting the concurrency = 500 here?
>>
> This is the concurrency level expected by the user. That's the reason we
> used it.
>
>>
>> Have you tested the behaviour for lower concurrency levels?
>>
>> *"currently the TPS is dropping from the initial TPS 1139.5/s to 198.1/s
>> in around 610 user count.(User Add)" - *How did you notice/measure
>> this drop in TPS? Did you analyze the jmeter results offline? After it
>> drops, does it improve after some time or does it stay the same?
>>
> We tested this with the JMeter summary report.
> With the latest results, if we start again, for the first few minutes (~2 min)
> we get the max TPS, and then it comes down to around 250 TPS.
>

OK, so it comes down to 250 TPS and stays there? Are you running these tests
without a warm-up period?


Re: [Dev] [IS] EC2 Performance Analysis : Sudden TPS drop in User Add in 500 concurrency with 10million users

2016-07-31 Thread Ishara Karunarathna
Hi Malith,

On Sun, Jul 31, 2016 at 12:37 PM, Malith Jayasinghe 
wrote:

> Hi Indunil,
> Just a few questions regarding this performance test you have done:
>
> What is the reason for selecting the concurrency = 500 here?
>
This is the concurrency level expected by the user. That's the reason we used it.

>
> Have you tested the behaviour for lower concurrency levels?
>
> *"currently the TPS is dropping from the initial TPS 1139.5/s to 198.1/s
> in around 610 user count.(User Add)" - *How did you notice/measure
> this drop in TPS? Did you analyze the jmeter results offline? After it
> drops, does it improve after some time or does it stay the same?
>
We tested this with the JMeter summary report.
With the latest results, if we start again, for the first few minutes (~2 min)
we get the max TPS, and then it comes down to around 250 TPS.
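
For what it's worth, a results file written in non-GUI mode can be re-analyzed
offline after the run; a sketch (file names are placeholders only):

    # run the test plan headless and keep the raw samples
    jmeter -n -t add_users.jmx -l results.jtl -j jmeter.log
    # results.jtl can later be opened in a Summary/Aggregate Report listener
    # or post-processed to plot throughput over time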

>
>
> Did you look at the behaviour of latency?
>
> Thanks
>
> Malith

Re: [Dev] [IS] EC2 Performance Analysis : Sudden TPS drop in User Add in 500 concurrency with 10million users

2016-07-31 Thread Malith Jayasinghe
Hi Indunil,
Just a few questions regarding this performance test you have done:

What is the reason for selecting the concurrency = 500 here?

Have you tested the behaviour for lower concurrency levels?

*"currently the TPS is dropping from the initial TPS 1139.5/s to 198.1/s in
around 610 user count.(User Add)" - *How did you notice/measure this
drop in TPS? Did you analyze the jmeter results offline? After it drops,
does it improve after some time or does it stay the same?

Did you look at the behaviour of latency?

Thanks

Malith





-- 
Malith Jayasinghe



Re: [Dev] [IS] EC2 Performance Analysis : Sudden TPS drop in User Add in 500 concurrency with 10million users

2016-07-29 Thread Ishara Karunarathna
Hi Indunil,

Today I did some changes to the JMeter scripts. I'm still testing them locally
and will provide them to you.

Before we used EC2 instances, we had some OpenStack instances. I think it's
better if we can run a backup test there as well; if EC2 gives any issues, we
can eliminate that risk.

@Chamath, can we get those again?

Thanks,
Ishara






On Fri, Jul 29, 2016 at 3:19 PM, Indunil Upeksha Rathnayake <
indu...@wso2.com> wrote:

> Hi,
>
> I have attached the JMeter script file which we use for adding users [1].
> Maybe we need to do some modifications to the script. Appreciate your
> comments.
> @Ishara: I'll send those results.
>
> [1]
> https://drive.google.com/a/wso2.com/folderview?id=0Bz_EQkE2mOgBMmFDNzFpNk5CTFE&usp=sharing

Re: [Dev] [IS] EC2 Performance Analysis : Sudden TPS drop in User Add in 500 concurrency with 10million users

2016-07-29 Thread Indunil Upeksha Rathnayake
Hi,

I have attached the JMeter script file which we use for adding users [1].
Maybe we need to do some modifications to the script. Appreciate your
comments.
@Ishara: I'll send those results.

[1]
https://drive.google.com/a/wso2.com/folderview?id=0Bz_EQkE2mOgBMmFDNzFpNk5CTFE&usp=sharing

On Fri, Jul 29, 2016 at 3:05 PM, Ishara Karunarathna 
wrote:

> Hi Indunil,
>
> Can we get the distribution of the throughput? Then we can figure out how
> it is coming down.
> It would also be better if we can get the resource utilization of the servers.
>
> Thanks,
> Ishara

Re: [Dev] [IS] EC2 Performance Analysis : Sudden TPS drop in User Add in 500 concurrency with 10million users

2016-07-29 Thread Ishara Karunarathna
Hi Indunil,

Can we get the distribution of the throughput? Then we can figure out how
it is coming down.
It would also be better if we can get the resource utilization of the servers.
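
One low-overhead way to capture that utilization on the IS, LDAP and MySQL
machines during a run, assuming the sysstat package is installed (the output
file name is a placeholder):

    # sample CPU and memory every 5 seconds until interrupted
    sar -u -r 5 -o /tmp/user_add_run.sar
    # read it back later, e.g.: sar -u -f /tmp/user_add_run.sar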

Thanks,
Ishara




-- 
Ishara Karunarathna
Associate Technical Lead
WSO2 Inc. - lean . enterprise . middleware |  wso2.com

email: isha...@wso2.com,   blog: isharaaruna.blogspot.com,   mobile:
+94717996791
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev