"Consistent" means there are no "bumps" in the response intervals.

About the number 72: I changed containerThreads to 75, but the count reported by 
Thread.activeCount() did not change. I don't know exactly what Thread.activeCount() 
counts; I will have to find out.
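For what it's worth, Thread.activeCount() only returns an estimate of the active threads in the *current* thread's ThreadGroup and its subgroups, not the whole JVM, so depending on where it is called it can miss the container's worker threads. A small probe (plain java.lang, no Globus code) that walks up to the root group counts everything:

```java
public class ThreadCountProbe {
    public static void main(String[] args) {
        // Thread.activeCount() estimates only the threads in the *current*
        // thread's group and its subgroups -- not the whole JVM.
        System.out.println("current group: " + Thread.activeCount());

        // Walk up to the root ThreadGroup to count every live thread.
        ThreadGroup root = Thread.currentThread().getThreadGroup();
        while (root.getParent() != null) {
            root = root.getParent();
        }
        System.out.println("whole JVM:     " + root.activeCount());
    }
}
```

If the root-group count tracks the containerThreads setting while the plain Thread.activeCount() stays flat, that would explain a constant reading like 72.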



----------------------------------------
> Date: Mon, 28 Jan 2008 14:28:17 -0600
> From: [EMAIL PROTECTED]
> To: [EMAIL PROTECTED]
> CC: [email protected]
> Subject: Re: [gt-user] WS Core containerThreads not working
> 
> Hi Sam,
> See below:
> 
> Samuel LIU wrote:
>> Just got back.
>>
>> In my test, each client is a java process, not a thread. So each client needs 
>> to instantiate its own client stubs. 
> That is OK, but make sure you re-use the client stubs once they are created.
>> Based on your suggestion, I changed my test using java threads. then I saw 
>> consistent responses from server. 
> Can you define consistent? 
>> However, the number of threads at the server side did not change regardless 
>> of how many simultaneous threads I created. The number is 72.
>>   
> That sounds like a lot of threads, certainly more than I usually see 
> active in a container.  The number of threads in my case increases and 
> decreases from some 30 to 50 based on load when using GT4.0.4 Java 
> WS-Core. 
>> Whether to use processes or threads for this test depends on the realistic 
>> environment that the test is meant to simulate. 
> Right, but if you reuse the client stubs, you are only penalized 
> for the first message, after which things should run much more quickly.
>> I think process is the correct way to go. That is, we have to assume that 
>> each JVM is on a different machine. 
> That is OK, actually much better, as the client side is quite expensive, 
> and there would be little benefit beyond a few threads per CPU.
>> I still don't understand why using java processes caused so much delay at 
>> server side.
>>   
> Can you be more specific?  What is "much delay"?
>> BTW, I did not draw any conclusion that only 2 threads at the server were 
>> processing requests. I posted it as a question.
>>   
> Aha, ok. 
> 
> Can you do a little experiment that will for sure give you the number of 
> threads that are servicing the GT4 container?  Implement the simplest WS 
> that has a single function, say "sleep", which takes a long value as an 
> argument.  Now invoke "sleep 60000" from the client, which should make 
> the service side sleep for 60 seconds upon receiving the call.  
> Now, start 50 parallel clients (in separate processes or separate 
> threads, it doesn't matter), and send the sleep 60000 WS call as fast as 
> possible from all clients.  You should print out a statement right 
> before each send so you know that all 50 client requests were submitted, 
> and then you should print out a message saying that the WS call was 
> successfully sent.  Now, assuming that you have X threads servicing WS 
> calls on the GT4 container, you will see exactly X print statements that 
> the sleep 60000 was completed roughly 60 seconds after they are 
> submitted.  After 120 seconds, you will see another X print statements, 
> and so on... until you either reach some timeouts, or all 50 requests 
> are completed.  That value X is the number of threads that are servicing 
> the GT4 web service requests.  That number X should also correspond to 
> the number listed in your config file for the GT4 container. 
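The arithmetic behind this experiment can be sketched with a plain fixed-size thread pool standing in for the container (no GT4 APIs involved; pool size, client count, and sleep time are scaled down for illustration):

```java
import java.util.concurrent.*;

public class WaveExperiment {
    public static void main(String[] args) throws Exception {
        final int poolSize = 5;    // stands in for X, the container's service threads
        final int clients  = 20;   // stands in for the 50 parallel clients
        final long sleepMs = 200;  // stands in for the 60-second sleep call

        ExecutorService container = Executors.newFixedThreadPool(poolSize);
        CompletionService<Long> done = new ExecutorCompletionService<>(container);
        final long start = System.currentTimeMillis();

        for (int i = 0; i < clients; i++) {
            done.submit(() -> {
                Thread.sleep(sleepMs);  // the "sleep" WS operation
                return System.currentTimeMillis() - start;
            });
        }

        // Completions arrive in waves of poolSize, roughly sleepMs apart;
        // the size of each wave is the number of servicing threads.
        for (int i = 0; i < clients; i++) {
            long elapsed = done.take().get();
            System.out.printf("completed in wave ~%d%n",
                              Math.round((double) elapsed / sleepMs));
        }
        container.shutdown();
    }
}
```

With 5 pool threads and 20 clients, completions arrive in 4 waves of 5, roughly one sleep interval apart; reading off the wave size gives X.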
> 
> Other than doing this experiment, the only other thing I can suggest is 
> to run some tests showing the performance and load on the service, and 
> send us the logs or graphs so we can take a look at them. 
> 
> Good luck,
> Ioan
> 
>> Thanks,
>> Sam.
>>
>> ----------------------------------------
>>   
>>> Date: Wed, 23 Jan 2008 17:35:41 -0600
>>> From: [EMAIL PROTECTED]
>>> To: [EMAIL PROTECTED]
>>> CC: [email protected]
>>> Subject: Re: [gt-user] WS Core containerThreads not working
>>>
>>> I don't see how you came to the conclusion that only 2 service threads 
>>> are running in the container.  Also, the performance will depend on 
>>> whether or not you are reusing port stubs, as the first invocation is 
>>> extremely expensive as opposed to subsequent invocations.  Here are some 
>>> throughput numbers I obtained a while back using GT4.0.4 on a dual Xeon 
>>> 3GHz with HT and 32 client nodes and 20 threads at the container:
>>>
>>>     * no security: 500 WS calls per sec
>>>     * GSISecureConversation: 204 WS calls per sec
>>>     * GSITransport: 65 WS calls per sec
>>>
>>> There are also different settings for each security mechanism, such as 
>>> encryption and integrity checks, which can affect performance.  The 
>>> numbers above (for the 2 security cases) are with encryption.
>>>
>>> I heard that GT4.1.x and GT4.2.x use persistent sockets, whose absence 
>>> was what hurt the performance of GSITransport.  I expect GSITransport to 
>>> be significantly better in the latest GT version.  Another thing to 
>>> remember is that these numbers reflect the re-use of port stubs on the 
>>> client side.  Without re-using the port stubs, my intuition says that 
>>> the throughput would be one to two orders of magnitude lower, but I 
>>> haven't actually measured it to know for sure.  
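To make the stub-reuse advice concrete: the idea is to pay the stub-creation cost once per JVM and share the port afterwards. A minimal sketch, where CounterPortType and createExpensiveStub() are hypothetical stand-ins (in GT4 the real port would come from the WSDL-generated *AddressingLocator):

```java
// Hypothetical stand-in for a WSDL-generated port stub.
interface CounterPortType { int add(int delta); }

public class StubCache {
    // Created once per JVM and reused; stub creation (WSDL parsing,
    // first handshake) is the expensive part, so pay for it only once.
    private static volatile CounterPortType port;

    static CounterPortType getPort() {
        if (port == null) {
            synchronized (StubCache.class) {
                if (port == null) {
                    port = createExpensiveStub();   // first call only
                }
            }
        }
        return port;
    }

    private static CounterPortType createExpensiveStub() {
        // Placeholder for something like locator.getCounterPortTypePort(endpoint)
        return delta -> delta;  // trivial local stand-in
    }
}
```

Every subsequent caller gets the same cached port, so only the first invocation pays the setup cost.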
>>>
>>> To summarize: you think that you have only 2 threads servicing the WS 
>>> calls at the GT4 container, but I believe your tests are inconclusive 
>>> based on what you have described so far.  Can you be more specific about 
>>> the tests you have run and the results that lead you to believe there 
>>> are only 2 threads (when in fact you have set it higher)?
>>>
>>> Ioan
>>>  
>>>
>>> Samuel LIU wrote:
>>>     
>>>> 0.8sec is in -nosec scenario. Yes, in this scenario, I could not infer the 
>>>> 2 container thread limit. However, when I enabled signing and encryption 
>>>> on the communication channel (GSI Transport) or messages (GSI Secure 
>>>> Conversation), I could see obvious delays in request processing at the 
>>>> server side: 2 responses at a time, spaced about 10 seconds apart. 
>>>> Client-side overhead was ruled out by comparing starting timestamps.
>>>>
>>>> My purpose in conducting this test is to measure service invocation 
>>>> overhead given different security options: no security, channel level, or 
>>>> message level. The performance of my service code has been tested 
>>>> separately.
>>>>
>>>> Thanks,
>>>> Sam.
>>>>
>>>> ----------------------------------------
>>>>   
>>>>       
>>>>> Date: Wed, 23 Jan 2008 14:39:06 -0600
>>>>> From: [EMAIL PROTECTED]
>>>>> To: [EMAIL PROTECTED]
>>>>> CC: [email protected]
>>>>> Subject: Re: [gt-user] WS Core containerThreads not working
>>>>>
>>>>> If you have such short requests that they can be handled in 0.8 sec, 
>>>>> then how do you know that there are only 2 at a time being serviced? 
>>>>>
>>>>> Here is the test I would do to see how many threads are really running. 
>>>>>
>>>>> In the service code, add a sleep 60 sec call... and on the client, start 
>>>>> more threads than the max you set on the GT4 container, something on 
>>>>> the order of hundreds, and invoke the 
>>>>> service as fast as possible from all client threads.  If for example you 
>>>>> had a max of 50 threads set on the GT4 container, you would see after 60 
>>>>> seconds that 50 invocations were completed successfully, then after 
>>>>> another 60 sec, another 50 invocations, etc... if indeed you only have 2 
>>>>> threads, then you would see only 2 at a time every 60 sec. 
>>>>>
>>>>> Now, about seeing performance difference in your own code (without the 
>>>>> sleep 60) as you increase the number of threads, it all depends on your 
>>>>> specific code and what the client/service are doing.  Also, from my 
>>>>> experience, the client side is generally more heavyweight than the 
>>>>> service side... which means that you will need multiple concurrent 
>>>>> clients to generally saturate a given service. 
>>>>>
>>>>> Ioan
>>>>>
>>>>> Samuel LIU wrote:
>>>>>     
>>>>>         
>>>>>> I am trying to test the service response time of a Grid service I wrote 
>>>>>> under different request rates. By default, WS Core sets the initial 
>>>>>> thread number to 2. When I started the globus container with -nosec, I could not 
>>>>>> see much difference. All requests were served within 0.8 sec. However, 
>>>>>> when I started container with GSI Transport or GSI Secure Conversation 
>>>>>> settings, I could see clearly two requests were served at a time.
>>>>>>
>>>>>> The problem is: based on gt4 doc, when I changed thread settings to:
>>>>>>
>>>>>> [the XML thread settings were stripped from the archived message]
>>>>>> and restarted container, I saw no difference: still 2 requests were 
>>>>>> accepted at one time, then another 2, ...
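For reference, in GT 4.0 these settings normally live in $GLOBUS_LOCATION/etc/globus_wsrf_core/server-config.wsdd; a sketch of the parameters in question, with illustrative values (check your installed file for the exact parameter names it accepts):

```xml
<globalConfiguration>
  <!-- initial number of service threads; the GT4 default is 2 -->
  <parameter name="containerThreads" value="50"/>
  <!-- upper bound on service threads (value illustrative) -->
  <parameter name="containerThreadsMax" value="100"/>
</globalConfiguration>
```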
>>>>>>
>>>>>> Can anyone suggest what went wrong? I am using globus 4.0.5 on a CentOS 
>>>>>> machine. 
>>>>>>
>>>>>> Thanks,
>>>>>> Sam.
>>>>> -- 
>>>>> ==================================================
>>>>> Ioan Raicu
>>>>> Ph.D. Candidate
>>>>> ==================================================
>>>>> Distributed Systems Laboratory
>>>>> Computer Science Department
>>>>> University of Chicago
>>>>> 1100 E. 58th Street, Ryerson Hall
>>>>> Chicago, IL 60637
>>>>> ==================================================
>>>>> Email: [EMAIL PROTECTED]
>>>>> Web:   http://www.cs.uchicago.edu/~iraicu
>>>>> http://dev.globus.org/wiki/Incubator/Falkon
>>>>> http://www.ci.uchicago.edu/wiki/bin/view/VDS/DslCS
>>>>> ==================================================
>>>>> ==================================================
>>>>>
>>>>>
>>>>>     
>>>>>         
>>
> 
> 
> 

