Hi,

I have implemented a variant of Jetty's QoSFilter which provides a
reservation capability for different priority levels (or request types).
Please take a look: https://github.com/hgadre/servletrequest-scheduler
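The core reservation trick (hold back some permits that only high-priority requests may consume) can be sketched with plain java.util.concurrent. The class and method names below are illustrative, not the actual filter's API:

```java
import java.util.concurrent.Semaphore;

// Sketch of the reservation idea: out of `total` permits, a fixed share is
// held back for high-priority (internal) requests. Low-priority (external)
// requests may only consume the unreserved share, so internal requests can
// always make progress.
public class ReservingLimiter {
    private final Semaphore shared;    // permits anyone may use
    private final Semaphore reserved;  // permits only internal requests may use

    public ReservingLimiter(int total, int reservedForInternal) {
        this.shared = new Semaphore(total - reservedForInternal);
        this.reserved = new Semaphore(reservedForInternal);
    }

    /** External (low-priority) requests compete only for the shared permits. */
    public boolean tryAcquireExternal() {
        return shared.tryAcquire();
    }

    /** Internal requests fall back to the reserved pool when the shared one is full. */
    public boolean tryAcquireInternal() {
        return shared.tryAcquire() || reserved.tryAcquire();
    }

    public static void main(String[] args) {
        ReservingLimiter limiter = new ReservingLimiter(10, 2);
        for (int i = 0; i < 8; i++) limiter.tryAcquireExternal(); // drain shared permits
        System.out.println(limiter.tryAcquireExternal()); // false: shared pool exhausted
        System.out.println(limiter.tryAcquireInternal()); // true: reserved permit available
    }
}
```

In a real filter the acquired permit would be released in a finally block once the request completes, and a failed tryAcquire would suspend the request (e.g. via the async API) rather than reject it outright.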

Regards
Hrishikesh

On Thu, May 21, 2015 at 5:44 PM, Hrishikesh Gadre <[email protected]>
wrote:

> Hi Joakim,
>
> Thanks a lot for the feedback.
>
> >> In real-world practice though, this sort of threading tweaking is a
> >> form of premature optimization. Are you sure this can happen?
>
> Yes, we have seen such deadlocks in real life. The application in this
> context is the distributed version of the Apache Solr search engine.
> https://wiki.apache.org/solr/DistributedSearch
>
> >> If it has happened, were you using the recommended dynamic sizing
> >> thread pool configuration (like the default QueuedThreadPool does)?
> >> Fixed-size ThreadPool/Executors are not a good choice in your situation.
>
> I am not sure dynamic sizing can entirely solve this problem. It has the
> same weakness as the QoSFilter, in the sense that external requests could
> potentially occupy the entire thread pool, thereby blocking internal
> requests from completing. Thoughts?
>
> At this point I think there are a couple of alternatives:
>
> (a) Use separate Connector instances (each backed by a dedicated
> thread-pool) for internal & external requests.
> (b) Implement a Filter similar to QoSFilter - which would provide the
> reservation capability for different priority levels.
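For alternative (a), a configuration sketch, assuming Jetty 9 APIs (ports, pool sizes, and the class name are illustrative): each connector gets its own executor, so internal traffic arriving on its own port cannot be starved by external load.

```java
import java.util.concurrent.Executor;
import java.util.concurrent.Executors;

import org.eclipse.jetty.server.Connector;
import org.eclipse.jetty.server.HttpConnectionFactory;
import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.server.ServerConnector;

// Sketch: two connectors, each backed by a dedicated thread pool.
// Internal peers talk to port 8984; external clients use port 8983.
public class TwoConnectorSetup {
    public static void main(String[] args) throws Exception {
        Server server = new Server();

        Executor internalPool = Executors.newFixedThreadPool(16);
        Executor externalPool = Executors.newFixedThreadPool(64);

        // -1/-1 lets Jetty pick default acceptor/selector counts;
        // null scheduler and buffer pool fall back to Jetty's defaults.
        ServerConnector internal = new ServerConnector(
                server, internalPool, null, null, -1, -1, new HttpConnectionFactory());
        internal.setPort(8984);

        ServerConnector external = new ServerConnector(
                server, externalPool, null, null, -1, -1, new HttpConnectionFactory());
        external.setPort(8983);

        server.setConnectors(new Connector[] { internal, external });
        server.start();
        server.join();
    }
}
```

The obvious cost of this approach is that internal callers must be configured to use the dedicated port, and capacity is statically partitioned rather than reserved.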
>
> Thanks again :)
> -Hrishikesh
>
> --
> Joakim Erdfelt <joakim@xxxxxxxxxxx>
> webtide.com <http://www.webtide.com/> - eclipse.org/jetty - cometd.org
> Expert advice, services and support from the Jetty & CometD experts
>
> On Thu, May 21, 2015 at 4:44 PM, Hrishikesh Gadre <[email protected]>
> wrote:
>
>> Hi Joakim,
>>
>> Thanks for the feedback. I think QoSFilter is an interesting alternative,
>> but it does not quite fit the requirement. Let me elaborate on my use
>> case for better understanding.
>>
>> Every external request typically spawns multiple internal requests to
>> multiple servers (e.g. for scatter/gather queries). Hence it is important
>> to reserve a certain % of thread-pool capacity for internal requests.
>> Without this reservation, it is quite possible for the thread pool to be
>> consumed entirely by external requests, thereby creating a distributed
>> deadlock (since the external request depends upon the results of internal
>> requests, which cannot be processed due to the unavailability of threads).
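That deadlock can be reproduced in a few lines of plain java.util.concurrent (an illustrative model, not Solr code): every "external" task blocks on an "internal" subtask submitted to the same fixed pool.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.CyclicBarrier;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Model of the distributed deadlock: externals occupy pool threads and then
// block waiting for internals submitted to the SAME pool. If externals hold
// every thread, no internal can ever run.
public class PoolDeadlockDemo {
    /** Returns true if all external tasks finished within the timeout. */
    public static boolean run(int poolSize, int externals, long timeoutMs) {
        ExecutorService pool = Executors.newFixedThreadPool(poolSize);
        CyclicBarrier allRunning = new CyclicBarrier(externals); // externals start together
        CountDownLatch done = new CountDownLatch(externals);
        for (int i = 0; i < externals; i++) {
            pool.submit(() -> {
                try {
                    allRunning.await();           // every external now occupies a pool thread
                    pool.submit(() -> { }).get(); // ...and blocks on an internal subtask
                    done.countDown();
                } catch (Exception ignored) {
                }
            });
        }
        try {
            return done.await(timeoutMs, TimeUnit.MILLISECONDS);
        } catch (InterruptedException e) {
            return false;
        } finally {
            pool.shutdownNow();
        }
    }

    public static void main(String[] args) {
        System.out.println(run(4, 2, 5000)); // true: spare threads run the internals
        System.out.println(run(4, 4, 1000)); // false: externals occupy every thread
    }
}
```

With fewer externals than pool threads everything completes; the moment externals fill the pool, nothing ever finishes, which is exactly why a reserved share of the pool matters.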
>>
>> After reading the docs, I am not sure QoSFilter can provide this
>> reservation capability out of the box: it ensures that a higher-priority
>> request is picked from the backlog, but it doesn't prevent low-priority
>> requests from occupying the entire thread pool.
>>
>> Is my understanding correct? Also, can you please elaborate on the
>> threading model used with Jetty continuations?
>>
>> >> Or use the QoSFilter to set up a higher priority for internal vs.
>> >> external requests. :-)
>>
>> --
>> Joakim Erdfelt <joakim@xxxxxxxxxxx>
>> webtide.com <http://www.webtide.com/> - eclipse.org/jetty - cometd.org
>> Expert advice, services and support from the Jetty & CometD experts
>>
>> On Thu, May 21, 2015 at 3:10 PM, Hrishikesh Gadre <[email protected]>
>> wrote:
>>
>>> Hi,
>>>
>>> I am currently working on an application which comprises multiple
>>> servers. Each server is deployed using a dedicated Jetty instance. Each
>>> server accepts HTTP requests from external clients as well as from other
>>> servers in the system, i.e. the servers form a peer-to-peer system over
>>> HTTP.
>>>
>>> I am currently working on a feature to separate internal from external
>>> requests such that a % of the worker threads is reserved for internal
>>> requests and the remaining threads serve external ones.
>>>
>>> I could think of a couple of approaches to solve this problem. Can you
>>> please take a look and provide feedback?
>>>
>>>
>>> (1) Using Servlet 3 specification
>>>
>>> The idea would be to define two separate thread pools internally and
>>> submit the asynchronous request to one of them based on request type. My
>>> understanding is that this would require an additional thread switch as
>>> compared to synchronous request processing (jetty_acceptor ->
>>> jetty_selector -> jetty_worker -> app_thread). Is this accurate? If yes,
>>> is there a way to avoid it?
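For what it's worth, the hand-off in approach (1) can be modelled with plain JDK executors (the servlet API is omitted here; pool names and sizes are illustrative):

```java
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Model of approach (1): the container thread hands the request off to one
// of two application pools chosen by request type. In the servlet version
// this would be request.startAsync() followed by a submit() whose task
// eventually completes the AsyncContext.
public class AsyncDispatcher {
    private final ExecutorService internalPool =
            Executors.newFixedThreadPool(2, r -> new Thread(r, "internal-pool"));
    private final ExecutorService externalPool =
            Executors.newFixedThreadPool(4, r -> new Thread(r, "external-pool"));

    /** Simulates the async hand-off; returns the name of the thread that ran the work. */
    public String dispatch(boolean internalRequest) {
        ExecutorService pool = internalRequest ? internalPool : externalPool;
        try {
            // This submit() IS the extra thread switch relative to
            // synchronous processing on the Jetty worker thread.
            return pool.submit(() -> Thread.currentThread().getName()).get();
        } catch (InterruptedException | ExecutionException e) {
            throw new IllegalStateException(e);
        }
    }

    public void shutdown() {
        internalPool.shutdown();
        externalPool.shutdown();
    }

    public static void main(String[] args) {
        AsyncDispatcher d = new AsyncDispatcher();
        System.out.println(d.dispatch(true));  // internal-pool
        System.out.println(d.dispatch(false)); // external-pool
        d.shutdown();
    }
}
```

The extra switch seems inherent to this design: handing work to a different pool necessarily means a different thread runs it, which is the price paid for freeing the container thread.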
>>>
>>> (2) Somehow customizing the Jetty implementation such that we reserve a
>>> % of Jetty worker threads for internal requests and the rest for
>>> external requests. The flow would look like this:
>>>
>>>   jetty_acceptor -> jetty_selector (demux) --+--> jetty_worker_pool_for_internal
>>>                                              |
>>>                                              +--> jetty_worker_pool_for_external
>>>
>>> The demux here would look at the HTTP request to figure out its type and
>>> submit it to the appropriate thread pool. Is this possible? If yes, any
>>> pointers?
>>>
>>> Any other approach I may have missed?
>>>
>>> Thanks in advance,
>>> -Hrishikesh
>>>
>>
>>
>
_______________________________________________
jetty-users mailing list
[email protected]
To change your delivery options, retrieve your password, or unsubscribe from 
this list, visit
https://dev.eclipse.org/mailman/listinfo/jetty-users