Thanks for the details, Lee/Michael.

I am referring to the MarkLogic REST API running on an HTTP app server.

The test we are performing simulates 400 concurrent users using JMeter. During 
the 15-minute ramp-up we see no performance issues.

As soon as all 400 concurrent users are up, the thread count on the app server 
reaches 32, but the number of active GET requests is only 3-4. I understand the 
thread pool concept now, but I still have a few unanswered questions:

1. When only 3-4 threads are active and the ML admin console shows every 
request completing in under 2 seconds, the JMeter performance report still 
shows a 90th percentile of 5-6 seconds.
2. Latency reported in Amazon CloudWatch climbs to 25-30 seconds once all 400 
users are active.
3. A direct request against the ML REST endpoint also takes longer than 
expected: the wait time is 4-5 seconds while the actual query execution is 
under a second (tracked via the browser's network panel).
4. When the configuration is changed, i.e. max threads is raised from 32 to 75, 
the thread count reaches 52 during the test and the performance report looks 
very good (90th percentile under 0.5 seconds).

It's not custom-written code; we are using the default REST endpoint (results 
and facets). I can see that the actual query execution is always fast in 
MarkLogic (as expected), but I am not sure why CloudWatch shows such high 
latency, or why the performance report is good when we increase max threads to 
75 but not when it is 32.
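
For reference, here is a rough sketch of how such a direct request could be 
timed outside the browser. The host, port, query string and credentials are 
placeholders, and it assumes the app server accepts basic authentication for 
this test (a digest-only server would add an extra challenge round trip):

import java.net.Authenticator;
import java.net.PasswordAuthentication;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class TimeDirectSearch {
    public static void main(String[] args) throws Exception {
        // Placeholder values: host, port, query and credentials are examples only.
        URI uri = URI.create("http://localhost:8011/v1/search?q=example&view=all");

        HttpClient client = HttpClient.newBuilder()
                .authenticator(new Authenticator() {
                    @Override
                    protected PasswordAuthentication getPasswordAuthentication() {
                        return new PasswordAuthentication("rest-user", "password".toCharArray());
                    }
                })
                .build();

        long start = System.nanoTime();
        HttpResponse<String> resp = client.send(
                HttpRequest.newBuilder(uri).GET().build(),
                HttpResponse.BodyHandlers.ofString());
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;

        // Client-observed wall time = wait/queueing time + query execution time,
        // which is what the browser's network panel shows as well.
        System.out.println("status=" + resp.statusCode() + ", elapsed=" + elapsedMs + " ms");
    }
}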

Thanks in advance for details :)

Regards,
Gnana(GP)

-----
Date: Fri, 9 Jan 2015 19:31:44 +0000
From: David Lee <david....@marklogic.com>
Subject: Re: [MarkLogic Dev General] MarkLogic Concurrent
        Threads/latency
To: MarkLogic Developer Discussion <general@developer.marklogic.com>
Message-ID:
        <6ad72d76c2d6f04d8be471b70d4b991e04e90...@exchg10-be02.marklogic.com>
Content-Type: text/plain; charset="us-ascii"

Two issues:

A)     What is actually happening.

B)      How it is being measured/reported


A)     What happens
In the app server you configure the maximum number of threads (max threads), 
typically 32 per host.
The minimum is 1 (you don't see that).  The server starts threads as needed up 
to the maximum and allows them to live for a while even when idle so they can 
be reused.
These threads sit in a thread pool waiting for requests, so the OS sees them 
as live threads in a blocked/idle state.
A master thread per app server listens on the socket (also blocked/idle).
A request comes in, the master thread wakes up and dispatches it to the pool.
An available thread reads the request, processes it, puts the socket into the 
pool of active connections, then returns itself to the pool.
Asynchronously, a cleanup task closes sockets that are still open past their 
timeout (the request thread doesn't block on this).
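
To picture that dispatch model, here is a minimal Java sketch of a bounded 
pool with an idle keep-alive. It is an analogy, not MarkLogic's actual 
implementation; the pool size, keep-alive and simulated work are illustrative:

import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class AppServerPoolSketch {
    public static void main(String[] args) throws InterruptedException {
        // Bounded pool: worker threads are created on demand up to 32,
        // excess requests wait in the queue, and idle workers linger for
        // 60 seconds before being reclaimed -- so shortly after a burst
        // the OS still sees up to 32 live, but blocked/idle, threads.
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                32, 32, 60, TimeUnit.SECONDS, new LinkedBlockingQueue<>());
        pool.allowCoreThreadTimeOut(true);

        // "Master thread": accept each incoming request and hand it to the pool.
        for (int i = 0; i < 400; i++) {
            pool.execute(() -> {
                // Worker: read and process one request (simulated here),
                // then return itself to the pool.
                try { Thread.sleep(100); } catch (InterruptedException ignored) { }
            });
        }

        Thread.sleep(200);  // let the burst get under way
        System.out.println("threads in pool: " + pool.getPoolSize()
                + " (max " + pool.getMaximumPoolSize() + ")"
                + ", actually running: " + pool.getActiveCount());

        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.MINUTES);
    }
}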

A typical one-off request from an HTTP client that isn't using socket pools or 
session cookies, but does use an authentication scheme that requires a round 
trip (e.g. digest), makes 2 requests sequentially.
Say a GET:

GET 1 [ returns 401 Unauthorized ]

-> fill in the digest info
GET 2 [ processes the data ]
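
You can observe those two round trips from any HTTP client. A minimal sketch 
follows; the URL is a placeholder, and the digest computation for GET 2 is 
only described in comments, not implemented:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class DigestRoundTrips {
    public static void main(String[] args) throws Exception {
        // Placeholder endpoint; substitute your app server's host and port.
        URI uri = URI.create("http://localhost:8011/v1/search?q=example");
        HttpClient client = HttpClient.newHttpClient();

        // GET 1: no credentials yet, so a digest-protected app server answers
        // 401 with a WWW-Authenticate challenge (realm, nonce, qop ...).
        HttpResponse<Void> first = client.send(
                HttpRequest.newBuilder(uri).GET().build(),
                HttpResponse.BodyHandlers.discarding());
        System.out.println("GET 1 status: " + first.statusCode());
        System.out.println("challenge: "
                + first.headers().firstValue("WWW-Authenticate").orElse("none"));

        // GET 2 (not implemented here): a digest-aware client hashes the
        // username, realm, password, nonce and request URI into an
        // Authorization header and repeats the request; only this second
        // request actually runs the query.
    }
}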

At most the system will simultaneously process the max thread count (say 32).
This is different from the number of open connections or pending connections.





B)      How it's measured
Depending on how the measurement is done (point snapshot, per-second average, 
cumulative ...) you will see different metrics.
Depending on the load of the system and other factors, it may or may not be 
'the same thread' that handles both requests. You don't mention which API or 
tool you're using to measure the thread counts. Assuming you're using 
xdmp:server-status, the relevant per-host metrics are "threads" (the number of 
threads currently in the pool) and "max-threads" (the maximum allowed in the 
pool).
Since threads are kept alive for a while after a request, the count may rise to 
at least the number of simultaneous HTTP requests processed in the recent past. 
That doesn't mean they are all running at that moment.

If you run requests sequentially very quickly you will likely see 2 threads, 
because the first request's thread has completed the request but is not yet 
back in serviceable mode before your next request arrives, so a different 
thread is used.
On average only 1 thread is actually running per simultaneous request, but 
there is a small window that may start a 2nd thread.
After that, both will wait a while for new requests.
You would observe the thread count returning to 1 after a while of no activity, 
provided your measuring query runs on a different app server than the one being 
measured.
E.g. running
    xdmp:server-status(xdmp:host(), xdmp:server("HealthCheck"))
will show threads=1, but
    xdmp:server-status(xdmp:host(), xdmp:server())
will show at least 2, probably 3.

From: general-boun...@developer.marklogic.com 
[mailto:general-boun...@developer.marklogic.com] On Behalf Of 
gnanaprakash.bodire...@cognizant.com
Sent: Friday, January 09, 2015 6:17 AM
To: general@developer.marklogic.com
Subject: [MarkLogic Dev General] MarkLogic Concurrent Threads/latency

Hi

I want to understand how threads in app servers work.

I believe that for every authenticated request MarkLogic will show 2 threads. 
But what I see during performance testing is that MarkLogic shows 32 threads in 
use per node (we have 3 nodes in the cluster), while the active requests/updates 
are only 3-4.

What's the difference between threads and requests/updates? I understand that 
requests are GETs and updates are anything that modifies content (PUT/POST, or 
an internal update in code invoked by a GET); the main question is why the 
thread count is higher than the sum of requests and updates.

If threads correspond to open sockets, why are they not being closed even 
though the keep-alive is set to just 5 seconds in the configuration?

Can someone help me understand this and resolve the latency issue I am seeing 
in my performance testing?

Regards,
Gnanaprakash Bodireddy


