Hi Indunil,
Just a few questions regarding this performance test you have done:

What is the reason for selecting a concurrency of 500 here?

Have you tested the behaviour for lower concurrency levels?

*"currently the TPS is dropping from the initial TPS 1139.5/s to 198.1/s in
around 6100000 user count.(User Add)" - *How did you notice/measure this
drop in TPS? Did you analyze the jmeter results offline? After it drops,
does it improve after some time or does it stay the same?

Did you look at the behaviour of latency?
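
If it helps, here is the kind of offline analysis I have in mind: a rough
sketch that buckets a CSV-format JTL into per-minute throughput and average
latency. It assumes the default "timeStamp" (epoch ms) and "elapsed" (ms)
columns; the results file name is hypothetical:

    import csv
    from collections import defaultdict

    counts = defaultdict(int)        # samples per minute bucket
    latency = defaultdict(int)       # summed response time (ms) per bucket

    with open("results.jtl") as f:   # hypothetical file name
        for row in csv.DictReader(f):
            minute = int(row["timeStamp"]) // 60000
            counts[minute] += 1
            latency[minute] += int(row["elapsed"])

    start = min(counts)
    for m in sorted(counts):
        print("minute %4d: %7.1f TPS, avg latency %7.1f ms"
              % (m - start, counts[m] / 60.0, latency[m] / counts[m]))

That would make it easy to see whether the drop is a steady decline or a
sudden step, and whether latency climbs along with it.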

Thanks

Malith


On Fri, Jul 29, 2016 at 2:57 PM, Indunil Upeksha Rathnayake <
indu...@wso2.com> wrote:

> Hi,
>
> We are currently engaged in a performance analysis where we are
> analyzing the performance of the User Add, Update and Authentication
> operations. The testing has been carried out in the following environment
> with 500 concurrency and up to 10 million users.
>
> *Environment :*
>
> m3.2xlarge (8 core, 30GB RAM, 2x80 GB SSD) x 3 instances
> MySQL 5.7
> Ubuntu 14.04
> OpenLDAP 2.4.31
> IS 5.1.0
>
> In order to optimize the MySQL server, the following server parameters
> have been tuned. We have referred to the MySQL documentation [1] and
> performed analysis using several MySQL tuners [2]. (The values are
> consolidated into a my.cnf sketch after this list.)
>
> (1) *max_connections : 1000* (The maximum permitted number of
> simultaneous client connections.)
>
> (2) *join_buffer_size : 259968* (The minimum size of the buffer that is
> used for plain index scans, range index scans, and joins that do not use
> indexes and thus perform full table scans.)
>
> (3) *innodb_buffer_pool_size : 5207959552* (size of the memory area where
> InnoDB caches table and index data)
>
> (4) *innodb_log_buffer_size : 16777216* (size of the buffer for
> transactions that have not been committed yet)
>
> (5) *innodb_buffer_pool_instances : 1* (The number of buffer pool
> instances. According to the MySQL documentation [1], on systems with a
> large amount of memory, we can improve concurrency by dividing the buffer
> pool into multiple buffer pool instances. But we couldn't change it at
> runtime since it's a read-only variable; it would have to be set in my.cnf
> before server startup.)
>
> (6) *key_buffer_size : 384000000* (size of the buffer used for index
> blocks)
>
> (7) *table_open_cache : 4000* (The number of open tables for all threads)
>
> (8) *sort_buffer_size : 4000000* (Each session that must perform a sort
> allocates a buffer of this size)
>
> (9) *read_buffer_size : 1000000* (Each thread that does a sequential scan
> for a table allocates a buffer of this size for each table it scans. If we
> do many sequential scans, we might want to increase this value)
>
> (10) *query_cache_type : 0 *
>
> (11) *query_cache_limit : 1048576* (Do not cache results that are larger
> than this number of bytes)
>
> (12) *query_cache_size : 1048576* (The amount of memory allocated for
> caching query results)
>
> (13) *thread_stack : 262144* (The stack size for each thread)
>
> (14) *net_buffer_length : 16384* (Each client thread is associated with a
> connection buffer and result buffer. Both begin with a size given by
> net_buffer_length but are dynamically enlarged up to max_allowed_packet
> bytes as needed)
>
> (15) *max_allowed_packet : 4194304* (The maximum size of one packet or
> any generated/intermediate string)
>
> (16) *thread_cache_size : 30* (number of threads the server should cache
> for reuse)
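>
> (For reference, a my.cnf sketch collecting the values above; this assumes
> they all go under the [mysqld] section, and the file location varies by
> installation:)
>
> [mysqld]
> max_connections         = 1000
> join_buffer_size        = 259968
> innodb_buffer_pool_size = 5207959552
> innodb_log_buffer_size  = 16777216
> # innodb_buffer_pool_instances = 8  (read only at runtime; would have to
> #                                    be set here before startup)
> key_buffer_size         = 384000000
> table_open_cache        = 4000
> sort_buffer_size        = 4000000
> read_buffer_size        = 1000000
> query_cache_type        = 0
> query_cache_limit       = 1048576
> query_cache_size        = 1048576
> thread_stack            = 262144
> net_buffer_length       = 16384
> max_allowed_packet      = 4194304
> thread_cache_size       = 30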
>
>
>
> IS has been configured as follows to optimize performance.
>
> (1) JVM Heap Settings (-Xms -Xmx) changed as follows:
>
> *Xms : 2g *
>
> *Xmx : 2g *
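>
> (These flags sit on the java invocation in <IS_HOME>/bin/wso2server.sh;
> roughly, the existing defaults were changed as below. The exact
> surrounding options vary by release:)
>
>     $JAVACMD \
>         ...
>         -Xms2g -Xmx2g \
>         ...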
>
> (2) Removed the following entry from
> <IS_HOME>/repository/conf/tomcat/catalina-server.xml to disable HTTP
> access logs.
>
> <Valve className="org.apache.catalina.valves.AccessLogValve"
> directory="${carbon.home}/repository/logs" prefix="http_access_"
> suffix=".log" pattern="combined" />
>
> (3) Tuned the following parameters in the axis2client.xml file (shown in
> context below).
>
> <parameter name="*defaultMaxConnPerHost*">1000</parameter>
>
> <parameter name="*maxTotalConnections*">30000</parameter>
>
> (4) Added the following additional parameters to optimize the database
> connection pool (shown in context in the sketch after this list).
>
> <Property name="*maxWait*">60000</Property>
>
> <Property name="*maxActive*">600</Property>
>
> <Property name="*initialSize*">20</Property>
>
> (5) Tuned the following Tomcat parameters in
> <IS_HOME>/repository/conf/tomcat/catalina-server.xml (shown in context in
> the sketch after this list).
>
> *acceptorThreadCount="8"*
>
> *maxThreads="750"*
>
> *minSpareThreads="150"*
>
> *maxKeepAliveRequests="600"*
>
> *acceptCount="600"*
>
>
>
> JMeter has been configured as follows to optimize performance.
>
> (1) JVM Heap Settings (-Xms -Xmx) changed as follows:
>
> *Xms : 1g *
>
> *Xmx : 1g *
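>
> (Set via the HEAP variable in JMeter's bin/jmeter startup script,
> roughly:)
>
> HEAP="-Xms1g -Xmx1g"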
>
>
> We were able to optimize the environment up to some level. But *currently
> the TPS is dropping from the initial 1139.5/s to 198.1/s at around a
> 6100000 user count (User Add)*.
>
> We would appreciate your help in figuring out whether we need to modify
> the optimizations in the MySQL, IS, or JMeter servers, and in identifying
> the exact cause of this sudden TPS drop.
>
> [1] http://dev.mysql.com/doc/refman/5.7/en/optimizing-server.html
>
> [2] http://www.askapache.com/mysql/mysql-performance-tuning.html
>
>
> Thanks and Regards
> --
> Indunil Upeksha Rathnayake
> Software Engineer | WSO2 Inc
> Email    indu...@wso2.com
> Mobile   0772182255
>



-- 
Malith Jayasinghe


WSO2, Inc. (http://wso2.com)
Email   : mali...@wso2.com
Mobile : 0770704040
Lean . Enterprise . Middleware
