Mark,

On 3/24/14, 5:37 AM, Mark Thomas wrote:
> On 24/03/2014 00:50, Christopher Schultz wrote:
>> Mark,
> 
>> On 3/23/14, 6:12 PM, Mark Thomas wrote:
>>> On 23/03/2014 22:07, Christopher Schultz wrote:
>>>> Mark,
>>>
>>>> On 2/27/14, 12:56 PM, Christopher Schultz wrote:
>>>>> Mark,
>>>>>
>>>>> On 2/25/14, 3:31 AM, Mark Thomas wrote:
>>>>>> On 25/02/2014 06:03, Christopher Schultz wrote:
>>>>>>> All,
>>>>>>
>>>>>>> I'm looking at the comparison table at the bottom of the 
>>>>>>> HTTP connectors page, and I have a few questions about
>>>>>>> it.
>>>>>>
>>>>>>> First, what does "Polling size" mean?
>>>>>>
>>>>>> Maximum number of connections in the poller. I'd simply
>>>>>> remove it from the table. It doesn't add anything.
>>>>>
>>>>> Okay, thanks.
>>>>>
>>>>>>> Second, under the NIO connector, both "Read HTTP Body"
>>>>>>> and "Write HTTP Response" say that they are
>>>>>>> "sim-Blocking"... does that mean that the API itself is
>>>>>>> stream-based (i.e. blocking) but that the actual
>>>>>>> under-the-covers behavior is to use non-blocking I/O?
>>>>>>
>>>>>> It means simulated blocking. The low level writes use a 
>>>>>> non-blocking API but blocking is simulated by not returning
>>>>>> to the caller until the write completes.
>>>>>
>>>>> That's what I was thinking. Thanks for confirming.
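
For anyone following along, here's roughly what I understand "simulated
blocking" to mean on the write side. This is only a sketch of the idea, not
the actual Tomcat code: it opens a private Selector where Tomcat hands the
socket to a shared BlockPoller, and the class/method names are made up.

    import java.io.IOException;
    import java.nio.ByteBuffer;
    import java.nio.channels.SelectionKey;
    import java.nio.channels.Selector;
    import java.nio.channels.SocketChannel;

    // Sketch only: simulated blocking write over a non-blocking channel.
    // The channel is assumed to already be in non-blocking mode.
    public final class SimBlockingWrite {

        // Does not return to the caller until every byte in buf has been
        // written, so the caller sees ordinary blocking-stream semantics.
        public static void writeFully(SocketChannel channel, ByteBuffer buf,
                long timeoutMs) throws IOException {
            try (Selector selector = Selector.open()) {
                SelectionKey key = channel.register(selector, SelectionKey.OP_WRITE);
                while (buf.hasRemaining()) {
                    int written = channel.write(buf);   // non-blocking write attempt
                    if (written == 0) {
                        // Kernel send buffer full: wait until the socket is writable again
                        if (selector.select(timeoutMs) == 0) {
                            throw new IOException("write timeout");
                        }
                        selector.selectedKeys().clear();
                    }
                }
                key.cancel();
            }
        }
    }
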
>>>
>>>> Another quick question: during the sim-blocking for reading the
>>>>  request-body, does the request go back into the poller queue,
>>>> or does it just sit waiting single-threaded-style? I would
>>>> assume the latter; otherwise we'd either violate the spec (one
>>>> thread serves the whole request) or spend a lot of resources
>>>> making sure we got the same thread back, etc.
>>>
>>> Both.
>>>
>>> The socket gets added to the BlockPoller and the thread waits on
>>> a latch until the BlockPoller signals that data can be read.
> 
>> Okay, but it's still one-thread-one-request... /The/ thread will
>> stay with that request until it's complete, right? The BlockPoller
>> will just wake up the same waiting thread... no funny business? ;)
> 
> Correct.
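
Good. For my own notes, here's the mental model of that latch hand-off for
reads. Again, this is not Tomcat's code; all the names are invented for
illustration and the poller's Selector machinery is reduced to a stub.

    import java.io.IOException;
    import java.nio.ByteBuffer;
    import java.nio.channels.SocketChannel;
    import java.util.concurrent.CountDownLatch;
    import java.util.concurrent.TimeUnit;

    // Sketch only: the worker thread never leaves the request; it parks on a
    // latch until the poller thread reports the socket readable, then the
    // same thread carries on with the same request.
    final class ReadWaiter {
        final SocketChannel channel;
        final CountDownLatch readable = new CountDownLatch(1);
        ReadWaiter(SocketChannel channel) { this.channel = channel; }
    }

    final class BlockPollerSketch {

        // Called by the (single) poller thread when its Selector reports data.
        void signalReadable(ReadWaiter w) {
            w.readable.countDown();
        }

        // Hypothetical stand-in for queueing the socket with the poller.
        void addEvent(ReadWaiter w) {
            // real code would register the socket with the poller's Selector
        }

        // Called on the worker (request) thread: simulated blocking read.
        int readBlocking(SocketChannel channel, ByteBuffer buf, long timeoutMs)
                throws IOException, InterruptedException {
            int n = channel.read(buf);              // non-blocking attempt
            while (n == 0) {
                ReadWaiter w = new ReadWaiter(channel);
                addEvent(w);                        // hand the socket to the poller
                if (!w.readable.await(timeoutMs, TimeUnit.MILLISECONDS)) {
                    throw new IOException("read timeout");
                }
                n = channel.read(buf);              // same thread, same request
            }
            return n;
        }
    }
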
> 
>> Okay, one more related question: for the BIO connector, does the 
>> request/connection go back into any kind of queue after the
>> initial (keep-alive) request has completed, or does the thread that
>> has already processed the first request on the connection keep
>> going until there are no more keep-alive requests? I can't see a
>> mechanism in the BIO connector to ensure any kind of fairness with
>> respect to request priority: once the client is in, it can make as
>> many requests as it wants (up to maxKeepAliveRequests) without
>> getting back in line.
> 
> Correct. Although keep in mind that for BIO it doesn't make sense to
> have connections > threads so it really comes down to how the threads
> are scheduled for processing.

Understood, but say there are 1000 connections waiting in the accept
queue and only 250 threads available: if my connection gets accept()ed,
then I get to make as many requests as I want without having to get back
in line. Yes, I have to compete for CPU time with the other 249 threads,
but I don't have to wait in the 1000-connection-long line.
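
To make sure we're describing the same pattern, here's a rough sketch of the
BIO worker loop as I understand it. The class and method names are mine, not
Tomcat's, and serviceRequest() is just a placeholder; only maxKeepAliveRequests
corresponds to a real connector attribute.

    import java.io.IOException;
    import java.net.Socket;

    // Sketch only: one worker thread stays with an accepted connection and
    // keeps serving keep-alive requests until the client stops, the limit is
    // reached, or the connection drops.
    final class BioWorkerSketch implements Runnable {

        private final Socket socket;
        private final int maxKeepAliveRequests;   // e.g. 100

        BioWorkerSketch(Socket socket, int maxKeepAliveRequests) {
            this.socket = socket;
            this.maxKeepAliveRequests = maxKeepAliveRequests;
        }

        @Override
        public void run() {
            int served = 0;
            boolean keepAlive = true;
            try (Socket s = socket) {
                while (keepAlive && served < maxKeepAliveRequests) {
                    keepAlive = serviceRequest(s);   // blocking read / parse / respond
                    served++;
                }
            } catch (IOException ignored) {
                // client went away
            }
            // Only here does the thread return to the pool for the next accept()ed socket.
        }

        // Placeholder: read one HTTP request, write a response, and report
        // whether the client asked to keep the connection open.
        private boolean serviceRequest(Socket s) throws IOException {
            return false;
        }
    }

The point being that between requests the connection never re-enters any
queue, so fairness across connections comes down entirely to how those 250
threads get scheduled.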

Thanks,
-chris
