Re: [Architecture] The new disruptor based Netty transport is not working well for MSF4J

2016-03-15 Thread Afkham Azeez
Our initial assumptions were wrong: we narrowed this issue down to an
incorrect configuration parameter name. Because of that, we were always
running with a single Disruptor event handler thread instead of a thread
pool. With some tuning of the Disruptor configuration parameters, we are
now seeing better perf values with the Disruptor than with the Netty
executor thread pool.
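
For context, the carbon-transport configuration exposes Disruptor settings as
name/value parameters; a sketch of the shape of such a config is below. Only
disruptor.buffer.size is an actual parameter seen later in this thread; the
event-handler pool-size parameter name shown here is a hypothetical
placeholder for the setting whose real name was originally misspelled:

```yaml
# Hedged sketch of a carbon-transport listener config. The parameter name
# "disruptor.eventhandler.count" is hypothetical, standing in for the
# thread-pool-size setting whose real name was misspelled.
-
  id: "jaxrs-http"
  host: "0.0.0.0"
  port: 8080
  bossThreadPoolSize: 1
  workerThreadPoolSize: 16
  parameters:
    -
      name: "disruptor.buffer.size"
      value: 1024
    -
      name: "disruptor.eventhandler.count"  # hypothetical name
      value: 8
```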

Azeez

Re: [Architecture] The new disruptor based Netty transport is not working well for MSF4J

2016-03-14 Thread Sagara Gunathunga
On Mon, Mar 14, 2016 at 1:53 PM, Srinath Perera  wrote:

> I talked to Ranawaka and Isuru in detail.
>
> The Disruptor helps a lot when tasks are CPU bound. In such cases, it can
> work with very few threads and reduce the overhead of communication
> between threads.
>
> However, when threads block for I/O, this advantage is largely lost. In
> those cases we need multiple Disruptor workers (batch processors). We are
> doing that.
>
> However, a test with a 500 ms sleep is not fair IMO. Sleep often waits
> longer than the given value, and with 200 threads it can only do 400 TPS
> with a 500 ms sleep. I think we should compare against a DB backend.
>
> Shall we test the Disruptor and Java worker pool models against a DB
> backend?
>

+1, I also think we should use a more realistic backend, such as a DB.

Thanks !

Re: [Architecture] The new disruptor based Netty transport is not working well for MSF4J

2016-03-14 Thread Srinath Perera
I talked to Ranawaka and Isuru in detail.

The Disruptor helps a lot when tasks are CPU bound. In such cases, it can
work with very few threads and reduce the overhead of communication
between threads.

However, when threads block for I/O, this advantage is largely lost. In
those cases we need multiple Disruptor workers (batch processors). We are
doing that.

However, a test with a 500 ms sleep is not fair IMO. Sleep often waits
longer than the given value, and with 200 threads it can only do 400 TPS
with a 500 ms sleep. I think we should compare against a DB backend.

Shall we test the Disruptor and Java worker pool models against a DB
backend?
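
The 400 TPS figure above is just arithmetic: if each of N worker threads
blocks for the full request latency, throughput is capped at N divided by the
latency in seconds. A minimal sketch of that bound:

```java
// Upper bound on throughput when every worker thread blocks for the whole
// request: maxTps = threads / latencyInSeconds (a Little's-law style bound).
public class ThroughputBound {
    static double maxTps(int threads, double latencyMs) {
        return threads / (latencyMs / 1000.0);
    }

    public static void main(String[] args) {
        // 200 threads, each sleeping 500 ms per request -> at most 400 TPS
        System.out.println(maxTps(200, 500.0)); // prints 400.0
    }
}
```

Any comparison dominated by sleep time therefore measures the thread count,
not the threading model, which is why a DB-backed test is more informative.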

--Srinath



Re: [Architecture] The new disruptor based Netty transport is not working well for MSF4J

2016-03-13 Thread Kasun Indrasiri
Hi,

Let me try to clarify a few things here.

- Initially we implemented the Netty HTTP transport with a conventional
thread model (workers), and at the same time we also tested the Disruptor
based model for the same Gateway/header-based routing use case. The
Disruptor based approach gave us a perf gain of around 20k in our perf
testing environments.
- MSF4J 1.0 GA didn't use the GW's HTTP transport code as-is. It reused
the basic transport runtime, but with a custom Netty handler used to
dispatch the requests. So MSF4J 1.0 GA didn't have the Disruptor, and the
performance/latency of MSF4J 1.0 GA has nothing to do with the Disruptor.

- Now we are trying to migrate MSF4J onto the exact same transport code as
the GW core (carbon-transport's HTTP transport), and that's where we came
across the above perf issue.
- So, unlike the GW scenario, for MSF4J, and even for any other
content-aware ESB scenario, the above approach does not seem to be
optimal. Hence we are now looking into how such scenarios are handled with
the Disruptor.

In that context, the original LMAX use case [1] is also quite similar to
what we are trying with content-aware scenarios. In their use case they
had heavy CPU-intensive components (such as the Business Logic component)
as well as IO-bound components (such as the Un-marshaller and the
Replicator), and they got better performance for that use case with the
Disruptor than with a conventional worker-thread model.


[image: Inline image 1]

So, we need to take a close look at how we can implement a dependent
consumer scenario [2] (one consumer is IO bound and the other is CPU
bound) and check whether we can get more of a perf gain compared to the
conventional worker-thread model.

Ranawaka is currently looking into this.

[1] http://martinfowler.com/articles/lmax.html
[2]
http://mechanitis.blogspot.com/2011/07/dissecting-disruptor-wiring-up.html
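
The dependent-consumer wiring from [2] can be pictured as a two-stage
pipeline: an IO-bound stage completes its blocking work and hands each event
to a downstream CPU-bound stage. The sketch below models the idea with plain
JDK executors and a queue; it is not the actual LMAX Disruptor API, just an
illustration of the dependency between consumers:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.*;

// Toy dependent-consumer pipeline (NOT the LMAX Disruptor API): an IO-bound
// stage with several threads feeds a single CPU-bound consumer, so slow IO
// never blocks the compute stage's thread.
public class DependentConsumers {
    static List<Integer> run(List<Integer> events) throws Exception {
        BlockingQueue<Integer> handoff = new LinkedBlockingQueue<>();
        ExecutorService ioStage = Executors.newFixedThreadPool(4); // blocking work

        for (int e : events) {
            ioStage.submit(() -> {
                try { Thread.sleep(10); } catch (InterruptedException ignored) { } // simulated IO
                handoff.add(e);
            });
        }

        List<Integer> results = new ArrayList<>();
        for (int i = 0; i < events.size(); i++) {
            results.add(handoff.take() * 2); // CPU-bound "business logic" stage
        }
        ioStage.shutdown();
        return results;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(run(List.of(1, 2, 3, 4))); // order depends on IO completion
    }
}
```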


Re: [Architecture] The new disruptor based Netty transport is not working well for MSF4J

2016-03-12 Thread Isuru Ranawaka
Hi Azeez,

In the GW, Disruptor threads are not used to make calls to backends.
Backends are called by the Netty worker pool, and those calls are
non-blocking. So if a backend responds after a delay, it won't be a
problem.


In MSF4J, IO operations or delayed operations cause problems, because
processing happens on the Disruptor threads and ends up occupying all of
the limited Disruptor threads. This should be solved by running the
Disruptor event handlers on a worker pool, and we are now looking into
why that does not provide the expected results.

thanks
IsuruR
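
The pattern described above, running the blocking part of the work on a
worker pool so the scarce Disruptor event-handler threads are never occupied,
can be sketched as below. This is a plain-JDK illustration; the handler shape
is made up for the example and is not the real Disruptor EventHandler
interface:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Sketch of offloading blocking work from an event-handler thread to a
// worker pool (illustrative; not the real Disruptor EventHandler API).
public class OffloadingHandler {
    private final ExecutorService workers;

    OffloadingHandler(int poolSize) {
        this.workers = Executors.newFixedThreadPool(poolSize);
    }

    // Called on the (single) event-loop/ring-buffer thread: submits the
    // blocking call and returns immediately, so that thread never sleeps.
    Future<String> onEvent(String request) {
        return workers.submit(() -> {
            Thread.sleep(50); // stands in for a blocking backend call
            return "response:" + request;
        });
    }

    void shutdown() { workers.shutdown(); }

    public static void main(String[] args) throws Exception {
        OffloadingHandler handler = new OffloadingHandler(8);
        Future<String> reply = handler.onEvent("req-1"); // non-blocking for the caller
        System.out.println(reply.get()); // prints response:req-1
        handler.shutdown();
    }
}
```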


-- 
Best Regards
Isuru Ranawaka
M: +94714629880
Blog : http://isurur.blogspot.com/
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] The new disruptor based Netty transport is not working well for MSF4J

2016-03-12 Thread Afkham Azeez
Kasun et al,
In MSF4J, the threads from the Disruptor pool itself are used for
processing the MSF4J operation. In the case of the GW passthrough & HBR
scenarios, are those Disruptor threads used to make the calls to the actual
backends? Is that a blocking call? What if the backend service were changed
so that it responds after a delay rather than instantaneously?




-- 
Afkham Azeez
Director of Architecture; WSO2, Inc.; http://wso2.com
Member; Apache Software Foundation; http://www.apache.org/
email: az...@wso2.com
cell: +94 77 3320919; blog: http://blog.afkham.org
twitter: http://twitter.com/afkham_azeez
linked-in: http://lk.linkedin.com/in/afkhamazeez

Lean . Enterprise . Middleware


Re: [Architecture] The new disruptor based Netty transport is not working well for MSF4J

2016-03-12 Thread Afkham Azeez
On Sat, Mar 12, 2016 at 1:40 PM, Sanjiva Weerawarana 
wrote:

> The Disruptor is a technique to make things more performant, not less
> performant. We have to figure out what's wrong and fix it, not throw the
> baby out with the bathwater.
>

Yes, we are in the process of figuring out why the Disruptor, as opposed
to the Netty executor thread pool, gives better performance for the
gateway (dispatching to a zero-delay backend), while for an MSF4J service
that has a sleep in it, it is the other way around.
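
That asymmetry is easy to reproduce in miniature: sleeping tasks serialize on
a single consumer thread but overlap on a pool, so a single-threaded setup
collapses in the sleep-based MSF4J test while being harmless for a zero-delay
backend. A small self-contained demonstration (timings are approximate):

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Shows why blocking tasks punish a single consumer thread: 8 tasks of
// 50 ms take ~400 ms serialized on one thread, but ~50 ms overlapped on
// a pool of 8 threads.
public class BlockingTaskDemo {
    static long elapsedMs(int poolSize, int tasks, long sleepMs) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(poolSize);
        CountDownLatch done = new CountDownLatch(tasks);
        long start = System.nanoTime();
        for (int i = 0; i < tasks; i++) {
            pool.submit(() -> {
                try { Thread.sleep(sleepMs); } catch (InterruptedException ignored) { }
                done.countDown();
            });
        }
        done.await();
        pool.shutdown();
        return (System.nanoTime() - start) / 1_000_000; // elapsed milliseconds
    }

    public static void main(String[] args) throws Exception {
        System.out.println("1 thread : ~" + elapsedMs(1, 8, 50) + " ms");
        System.out.println("8 threads: ~" + elapsedMs(8, 8, 50) + " ms");
    }
}
```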




-- 
Afkham Azeez
Director of Architecture; WSO2, Inc.; http://wso2.com
Member; Apache Software Foundation; http://www.apache.org/
email: az...@wso2.com
cell: +94 77 3320919; blog: http://blog.afkham.org
twitter: http://twitter.com/afkham_azeez
linked-in: http://lk.linkedin.com/in/afkhamazeez

Lean . Enterprise . Middleware


Re: [Architecture] The new disruptor based Netty transport is not working well for MSF4J

2016-03-12 Thread Sanjiva Weerawarana
On Thu, Mar 10, 2016 at 6:20 PM, Sagara Gunathunga  wrote:

> On Thu, Mar 10, 2016 at 10:26 AM, Afkham Azeez  wrote:
>
>> No; from day 1 we decided that GW & MSF4J will use the same Netty
>> transport component, so that the config file will be the same and so that
>> improvements made to that transport will be automatically available for
>> both products. So now, at least for MSF4J, we have issues in using the
>> Netty transport in its current state, and we have to fix those issues.
>>
>
> Reusing the same config files and components provides an advantage to us
> as the F/W developers/maintainers, but my question was: what benefits does
> the Carbon transport grant to end users of MSF4J?
>

We are writing MSF4J and the rest of the platform. Not someone else. As
such we have to keep them consistent.

For end users our target has to be to give the best performance possible.


>   I don't think we can compromise performance numbers for a reason that is
> more important to F/W maintainers than to end users. IMHO, if we continue
> to use the Carbon transport, it should at least perform at the same level
> as vanilla Netty.
>

There's no reason why that cannot be the case.

Can't we keep Disruptor while improving the performance of the Carbon transport?
>

Disruptor is a technique to make things more performant, not less. We have
to figure out what's wrong and fix it - not throw the baby out with the
bathwater.

Sanjiva.
-- 
Sanjiva Weerawarana, Ph.D.
Founder, CEO & Chief Architect; WSO2, Inc.;  http://wso2.com/
email: sanj...@wso2.com; office: (+1 650 745 4499 | +94  11 214 5345)
x5700; cell: +94 77 787 6880 | +1 408 466 5099; voip: +1 650 265 8311
blog: http://sanjiva.weerawarana.org/; twitter: @sanjiva
Lean . Enterprise . Middleware


Re: [Architecture] The new disruptor based Netty transport is not working well for MSF4J

2016-03-11 Thread Isuru Ranawaka
Hi,

We have made the Disruptor optional, and Samiyuru will continue with the
testing as well.

thanks

On Fri, Mar 11, 2016 at 12:08 PM, Isuru Ranawaka  wrote:

> Hi Azeez,
>
> I am currently working on making the disruptor optional and will finish it
> by EOD today. After that we will do the tests that Kasun has mentioned and
> figure out the best thread model for MSF4J.
>
> thanks
>
> On Fri, Mar 11, 2016 at 12:00 PM, Afkham Azeez  wrote:
>
>> Shall we aim to get to the bottom of this by EoD today?
>>
>> On Fri, Mar 11, 2016 at 11:44 AM, Samiyuru Senarathne 
>> wrote:
>>
>>> Transport config used for the above JFR:
>>>  -
>>>   id: "jaxrs-http"
>>>   host: "0.0.0.0"
>>>   port: 8080
>>>   bossThreadPoolSize: 1
>>>   workerThreadPoolSize: 16
>>>   parameters:
>>> #   -
>>> #name: "execThreadPoolSize"
>>> #value: 2048
>>>-
>>> name: "disruptor.buffer.size"
>>> value: 1024
>>>-
>>> name: "disruptor.count"
>>> value: 5
>>>-
>>> name: "disruptor.eventhandler.count"
>>> value: 256
>>> #   -
>>> #name: "disruptor.wait.strategy"
>>> #value: "SLEEP_WAITING"
>>>-
>>> name: "share.disruptor.with.outbound"
>>> value: false
>>>-
>>> name: "disruptor.consumer.external.worker.pool.size"
>>> value: 256
>>>
>>>
>>> On Fri, Mar 11, 2016 at 11:41 AM, Samiyuru Senarathne >> > wrote:
>>>
 Please find the attached JFR dump of MSF4J-carbon-message-disruptor
 test performed with following 'ab' config.
 An MSF4J service that 'consumes a 1KB and sleeps for 50ms before
 responding the same 1KB' is used as the service.

 ab -k -p 1kb_rand_data.txt -s 999 -c 400 -n 5000 -H "Accept:text/plain"
 http://204.13.85.2:8080/EchoService/echo

 AB Results Summary:
 Concurrency Level:  400
 Time taken for tests:   30.224 seconds
 Complete requests:  3000
 Failed requests:0
 Keep-Alive requests:3000
 Total transferred:  3345000 bytes
 Total body sent:3609000
 HTML transferred:   3072000 bytes
 Requests per second:99.26 [#/sec] (mean)
 Time per request:   4029.882 [ms] (mean)
 Time per request:   10.075 [ms] (mean, across all concurrent
 requests)
 Transfer rate:  108.08 [Kbytes/sec] received
 116.61 kb/s sent
 224.69 kb/s total



 On Fri, Mar 11, 2016 at 10:39 AM, Samiyuru Senarathne <
 samiy...@wso2.com> wrote:

> Hi,
>
> Please find results of the tests I have done so far.
>
> https://docs.google.com/a/wso2.com/spreadsheets/d/16TXeXU022b5ILqkRsY3zZdnu2OiA3yWEN42xVL8eXa4/edit?usp=sharing
>
> MSF4J 1.0.0 section of this tests gives a good insight into the netty
> thread behaviour. I focused a bit more on that because confirming the
> practical behaviour of netty thread model is one of the first things we
> should do.
>
> According to these results,
>
>- Increasing the netty boss pool does not make any difference.
>- For the netty worker pool, it's enough to have a number of
>threads equal to the number of cpus.
>- Increasing the number of threads of the pool of the netty
>handler that does the actual work increases the performance 
> significantly
>for high concurrency levels.
>
> Regarding the tests that include disruptor,
> I applied the optimal configurations according to [1]. But I could not
> get results even close to the results of GW and the results I got are bit
> strange (99 TPS regardless of concurrency level). I think we should try
> more variations of disruptor settings before coming to a conclusion.
>
> [1] -
> https://docs.google.com/a/wso2.com/spreadsheets/d/1ck2O7eMkswJSQCgW8ciOkIyZkXGiZhIxRWNN-R9vgMk/edit?usp=sharing
>
> Best Regards,
> Samiyuru
>
> On Fri, Mar 11, 2016 at 10:06 AM, Afkham Azeez  wrote:
>
>> I think Samiyuru tested with that new worker pool & still the
>> performance is unacceptable.
>> On Mar 11, 2016 9:45 AM, "Isuru Ranawaka"  wrote:
>>
>>> Hi ,
>>>
>>> We have already added  native worker pool for Disruptor and Samiyuru
>>> is doing testing on that. We will make disruptor optional as well.
>>>
>>> thanks
>>>
>>> On Thu, Mar 10, 2016 at 9:29 AM, Kasun Indrasiri 
>>> wrote:
>>>
 Yes, we can make the disruptor optional. Also, we should try using
 the native worker pool for Event Handler[1], so that the Disruptor 
 itself
 runs the event handler on a worker pool. We'll implement both 
 approaches
 and do a comparison.

 [1]
 https://lmax-exchange.github.io/disruptor/docs/com/lmax/disruptor/dsl/Disruptor.html#handleEventsWithWorkerPool(com.lmax.disruptor.WorkHandler...)

 On Thu, Mar 10, 2016 at 

Re: [Architecture] The new disruptor based Netty transport is not working well for MSF4J

2016-03-11 Thread Sagara Gunathunga
On Fri, Mar 11, 2016 at 12:08 PM, Isuru Ranawaka  wrote:

> Hi Azeez,
>
> I am currently working on making the disruptor optional and will finish it
> by EOD today. After that we will do the tests that Kasun has mentioned and
> figure out the best thread model for MSF4J.
>

Srinath mentioned that he can also help; please sync up with him as well.

Thanks !

>
> thanks
>
> On Fri, Mar 11, 2016 at 12:00 PM, Afkham Azeez  wrote:
>
>> Shall we aim to get to the bottom of this by EoD today?
>>
>> On Fri, Mar 11, 2016 at 11:44 AM, Samiyuru Senarathne 
>> wrote:
>>
>>> Transport config used for the above JFR:
>>>  -
>>>   id: "jaxrs-http"
>>>   host: "0.0.0.0"
>>>   port: 8080
>>>   bossThreadPoolSize: 1
>>>   workerThreadPoolSize: 16
>>>   parameters:
>>> #   -
>>> #name: "execThreadPoolSize"
>>> #value: 2048
>>>-
>>> name: "disruptor.buffer.size"
>>> value: 1024
>>>-
>>> name: "disruptor.count"
>>> value: 5
>>>-
>>> name: "disruptor.eventhandler.count"
>>> value: 256
>>> #   -
>>> #name: "disruptor.wait.strategy"
>>> #value: "SLEEP_WAITING"
>>>-
>>> name: "share.disruptor.with.outbound"
>>> value: false
>>>-
>>> name: "disruptor.consumer.external.worker.pool.size"
>>> value: 256
>>>
>>>
>>> On Fri, Mar 11, 2016 at 11:41 AM, Samiyuru Senarathne >> > wrote:
>>>
 Please find the attached JFR dump of MSF4J-carbon-message-disruptor
 test performed with following 'ab' config.
 An MSF4J service that 'consumes a 1KB and sleeps for 50ms before
 responding the same 1KB' is used as the service.

 ab -k -p 1kb_rand_data.txt -s 999 -c 400 -n 5000 -H "Accept:text/plain"
 http://204.13.85.2:8080/EchoService/echo

 AB Results Summary:
 Concurrency Level:  400
 Time taken for tests:   30.224 seconds
 Complete requests:  3000
 Failed requests:0
 Keep-Alive requests:3000
 Total transferred:  3345000 bytes
 Total body sent:3609000
 HTML transferred:   3072000 bytes
 Requests per second:99.26 [#/sec] (mean)
 Time per request:   4029.882 [ms] (mean)
 Time per request:   10.075 [ms] (mean, across all concurrent
 requests)
 Transfer rate:  108.08 [Kbytes/sec] received
 116.61 kb/s sent
 224.69 kb/s total



 On Fri, Mar 11, 2016 at 10:39 AM, Samiyuru Senarathne <
 samiy...@wso2.com> wrote:

> Hi,
>
> Please find results of the tests I have done so far.
>
> https://docs.google.com/a/wso2.com/spreadsheets/d/16TXeXU022b5ILqkRsY3zZdnu2OiA3yWEN42xVL8eXa4/edit?usp=sharing
>
> MSF4J 1.0.0 section of this tests gives a good insight into the netty
> thread behaviour. I focused a bit more on that because confirming the
> practical behaviour of netty thread model is one of the first things we
> should do.
>
> According to these results,
>
>- Increasing the netty boss pool does not make any difference.
>- For the netty worker pool, it's enough to have a number of
>threads equal to the number of cpus.
>- Increasing the number of threads of the pool of the netty
>handler that does the actual work increases the performance 
> significantly
>for high concurrency levels.
>
> Regarding the tests that include disruptor,
> I applied the optimal configurations according to [1]. But I could not
> get results even close to the results of GW and the results I got are bit
> strange (99 TPS regardless of concurrency level). I think we should try
> more variations of disruptor settings before coming to a conclusion.
>
> [1] -
> https://docs.google.com/a/wso2.com/spreadsheets/d/1ck2O7eMkswJSQCgW8ciOkIyZkXGiZhIxRWNN-R9vgMk/edit?usp=sharing
>
> Best Regards,
> Samiyuru
>
> On Fri, Mar 11, 2016 at 10:06 AM, Afkham Azeez  wrote:
>
>> I think Samiyuru tested with that new worker pool & still the
>> performance is unacceptable.
>> On Mar 11, 2016 9:45 AM, "Isuru Ranawaka"  wrote:
>>
>>> Hi ,
>>>
>>> We have already added  native worker pool for Disruptor and Samiyuru
>>> is doing testing on that. We will make disruptor optional as well.
>>>
>>> thanks
>>>
>>> On Thu, Mar 10, 2016 at 9:29 AM, Kasun Indrasiri 
>>> wrote:
>>>
 Yes, we can make the disruptor optional. Also, we should try using
 the native worker pool for Event Handler[1], so that the Disruptor 
 itself
 runs the event handler on a worker pool. We'll implement both 
 approaches
 and do a comparison.

 [1]
 https://lmax-exchange.github.io/disruptor/docs/com/lmax/disruptor/dsl/Disruptor.html#handleEventsWithWorkerPool(com.lmax.disruptor.WorkHandler...)

 On Thu, Mar 10, 2016 at 8:48 AM, Afkh

Re: [Architecture] The new disruptor based Netty transport is not working well for MSF4J

2016-03-10 Thread Isuru Ranawaka
Hi Azeez,

I am currently working on making the disruptor optional and will finish it
by EOD today. After that we will do the tests that Kasun has mentioned and
figure out the best thread model for MSF4J.

thanks

On Fri, Mar 11, 2016 at 12:00 PM, Afkham Azeez  wrote:

> Shall we aim to get to the bottom of this by EoD today?
>
> On Fri, Mar 11, 2016 at 11:44 AM, Samiyuru Senarathne 
> wrote:
>
>> Transport config used for the above JFR:
>>  -
>>   id: "jaxrs-http"
>>   host: "0.0.0.0"
>>   port: 8080
>>   bossThreadPoolSize: 1
>>   workerThreadPoolSize: 16
>>   parameters:
>> #   -
>> #name: "execThreadPoolSize"
>> #value: 2048
>>-
>> name: "disruptor.buffer.size"
>> value: 1024
>>-
>> name: "disruptor.count"
>> value: 5
>>-
>> name: "disruptor.eventhandler.count"
>> value: 256
>> #   -
>> #name: "disruptor.wait.strategy"
>> #value: "SLEEP_WAITING"
>>-
>> name: "share.disruptor.with.outbound"
>> value: false
>>-
>> name: "disruptor.consumer.external.worker.pool.size"
>> value: 256
>>
>>
>> On Fri, Mar 11, 2016 at 11:41 AM, Samiyuru Senarathne 
>> wrote:
>>
>>> Please find the attached JFR dump of MSF4J-carbon-message-disruptor test
>>> performed with following 'ab' config.
>>> An MSF4J service that 'consumes a 1KB and sleeps for 50ms before
>>> responding the same 1KB' is used as the service.
>>>
>>> ab -k -p 1kb_rand_data.txt -s 999 -c 400 -n 5000 -H "Accept:text/plain"
>>> http://204.13.85.2:8080/EchoService/echo
>>>
>>> AB Results Summary:
>>> Concurrency Level:  400
>>> Time taken for tests:   30.224 seconds
>>> Complete requests:  3000
>>> Failed requests:0
>>> Keep-Alive requests:3000
>>> Total transferred:  3345000 bytes
>>> Total body sent:3609000
>>> HTML transferred:   3072000 bytes
>>> Requests per second:99.26 [#/sec] (mean)
>>> Time per request:   4029.882 [ms] (mean)
>>> Time per request:   10.075 [ms] (mean, across all concurrent
>>> requests)
>>> Transfer rate:  108.08 [Kbytes/sec] received
>>> 116.61 kb/s sent
>>> 224.69 kb/s total
>>>
>>>
>>>
>>> On Fri, Mar 11, 2016 at 10:39 AM, Samiyuru Senarathne >> > wrote:
>>>
 Hi,

 Please find results of the tests I have done so far.

 https://docs.google.com/a/wso2.com/spreadsheets/d/16TXeXU022b5ILqkRsY3zZdnu2OiA3yWEN42xVL8eXa4/edit?usp=sharing

 MSF4J 1.0.0 section of this tests gives a good insight into the netty
 thread behaviour. I focused a bit more on that because confirming the
 practical behaviour of netty thread model is one of the first things we
 should do.

 According to these results,

- Increasing the netty boss pool does not make any difference.
- For the netty worker pool, it's enough to have a number of
threads equal to the number of cpus.
- Increasing the number of threads of the pool of the netty handler
that does the actual work increases the performance significantly for 
 high
concurrency levels.

 Regarding the tests that include disruptor,
 I applied the optimal configurations according to [1]. But I could not
 get results even close to the results of GW and the results I got are bit
 strange (99 TPS regardless of concurrency level). I think we should try
 more variations of disruptor settings before coming to a conclusion.

 [1] -
 https://docs.google.com/a/wso2.com/spreadsheets/d/1ck2O7eMkswJSQCgW8ciOkIyZkXGiZhIxRWNN-R9vgMk/edit?usp=sharing

 Best Regards,
 Samiyuru

 On Fri, Mar 11, 2016 at 10:06 AM, Afkham Azeez  wrote:

> I think Samiyuru tested with that new worker pool & still the
> performance is unacceptable.
> On Mar 11, 2016 9:45 AM, "Isuru Ranawaka"  wrote:
>
>> Hi ,
>>
>> We have already added  native worker pool for Disruptor and Samiyuru
>> is doing testing on that. We will make disruptor optional as well.
>>
>> thanks
>>
>> On Thu, Mar 10, 2016 at 9:29 AM, Kasun Indrasiri 
>> wrote:
>>
>>> Yes, we can make the disruptor optional. Also, we should try using
>>> the native worker pool for Event Handler[1], so that the Disruptor 
>>> itself
>>> runs the event handler on a worker pool. We'll implement both approaches
>>> and do a comparison.
>>>
>>> [1]
>>> https://lmax-exchange.github.io/disruptor/docs/com/lmax/disruptor/dsl/Disruptor.html#handleEventsWithWorkerPool(com.lmax.disruptor.WorkHandler...)
>>>
>>> On Thu, Mar 10, 2016 at 8:48 AM, Afkham Azeez 
>>> wrote:
>>>
 After upgrading to the new transport, we are seeing a significant
 drop in performance for any service that takes some time to execute. We
 have
 tried with the configuration used for the gateway which gave the best
 figures on the same hardware. 

Re: [Architecture] The new disruptor based Netty transport is not working well for MSF4J

2016-03-10 Thread Afkham Azeez
Shall we aim to get to the bottom of this by EoD today?

On Fri, Mar 11, 2016 at 11:44 AM, Samiyuru Senarathne 
wrote:

> Transport config used for the above JFR:
>  -
>   id: "jaxrs-http"
>   host: "0.0.0.0"
>   port: 8080
>   bossThreadPoolSize: 1
>   workerThreadPoolSize: 16
>   parameters:
> #   -
> #name: "execThreadPoolSize"
> #value: 2048
>-
> name: "disruptor.buffer.size"
> value: 1024
>-
> name: "disruptor.count"
> value: 5
>-
> name: "disruptor.eventhandler.count"
> value: 256
> #   -
> #name: "disruptor.wait.strategy"
> #value: "SLEEP_WAITING"
>-
> name: "share.disruptor.with.outbound"
> value: false
>-
> name: "disruptor.consumer.external.worker.pool.size"
> value: 256
>
>
> On Fri, Mar 11, 2016 at 11:41 AM, Samiyuru Senarathne 
> wrote:
>
>> Please find the attached JFR dump of MSF4J-carbon-message-disruptor test
>> performed with following 'ab' config.
>> An MSF4J service that 'consumes a 1KB and sleeps for 50ms before
>> responding the same 1KB' is used as the service.
>>
>> ab -k -p 1kb_rand_data.txt -s 999 -c 400 -n 5000 -H "Accept:text/plain"
>> http://204.13.85.2:8080/EchoService/echo
>>
>> AB Results Summary:
>> Concurrency Level:  400
>> Time taken for tests:   30.224 seconds
>> Complete requests:  3000
>> Failed requests:0
>> Keep-Alive requests:3000
>> Total transferred:  3345000 bytes
>> Total body sent:3609000
>> HTML transferred:   3072000 bytes
>> Requests per second:99.26 [#/sec] (mean)
>> Time per request:   4029.882 [ms] (mean)
>> Time per request:   10.075 [ms] (mean, across all concurrent requests)
>> Transfer rate:  108.08 [Kbytes/sec] received
>> 116.61 kb/s sent
>> 224.69 kb/s total
>>
>>
>>
>> On Fri, Mar 11, 2016 at 10:39 AM, Samiyuru Senarathne 
>> wrote:
>>
>>> Hi,
>>>
>>> Please find results of the tests I have done so far.
>>>
>>> https://docs.google.com/a/wso2.com/spreadsheets/d/16TXeXU022b5ILqkRsY3zZdnu2OiA3yWEN42xVL8eXa4/edit?usp=sharing
>>>
>>> MSF4J 1.0.0 section of this tests gives a good insight into the netty
>>> thread behaviour. I focused a bit more on that because confirming the
>>> practical behaviour of netty thread model is one of the first things we
>>> should do.
>>>
>>> According to these results,
>>>
>>>- Increasing the netty boss pool does not make any difference.
>>>- For the netty worker pool, it's enough to have a number of threads
>>>equal to the number of cpus.
>>>- Increasing the number of threads of the pool of the netty handler
>>>that does the actual work increases the performance significantly for 
>>> high
>>>concurrency levels.
>>>
>>> Regarding the tests that include disruptor,
>>> I applied the optimal configurations according to [1]. But I could not
>>> get results even close to the results of GW and the results I got are bit
>>> strange (99 TPS regardless of concurrency level). I think we should try
>>> more variations of disruptor settings before coming to a conclusion.
>>>
>>> [1] -
>>> https://docs.google.com/a/wso2.com/spreadsheets/d/1ck2O7eMkswJSQCgW8ciOkIyZkXGiZhIxRWNN-R9vgMk/edit?usp=sharing
>>>
>>> Best Regards,
>>> Samiyuru
>>>
>>> On Fri, Mar 11, 2016 at 10:06 AM, Afkham Azeez  wrote:
>>>
 I think Samiyuru tested with that new worker pool & still the
 performance is unacceptable.
 On Mar 11, 2016 9:45 AM, "Isuru Ranawaka"  wrote:

> Hi ,
>
> We have already added  native worker pool for Disruptor and Samiyuru
> is doing testing on that. We will make disruptor optional as well.
>
> thanks
>
> On Thu, Mar 10, 2016 at 9:29 AM, Kasun Indrasiri 
> wrote:
>
>> Yes, we can make the disruptor optional. Also, we should try using
>> the native worker pool for Event Handler[1], so that the Disruptor itself
>> runs the event handler on a worker pool. We'll implement both approaches
>> and do a comparison.
>>
>> [1]
>> https://lmax-exchange.github.io/disruptor/docs/com/lmax/disruptor/dsl/Disruptor.html#handleEventsWithWorkerPool(com.lmax.disruptor.WorkHandler...)
>>
>> On Thu, Mar 10, 2016 at 8:48 AM, Afkham Azeez  wrote:
>>
>>> After upgrading to the new transport, we are seeing a significant
>>> drop in performance for any service that takes some time to execute. We
>>> have
>>> tried with the configuration used for the gateway which gave the best
>>> figures on the same hardware. We have also noted that using a separate
>>> dedicated executor thread pool, which is supported by Netty, gave much
>>> better performance than the disruptor based implementation. Even in 
>>> theory,
>>> disruptor cannot give better performance when used with a real service 
>>> that
>>> does some real work, rather than doing passthrough, for example. Can we
>>> improve the Netty transport to make going th

Re: [Architecture] The new disruptor based Netty transport is not working well for MSF4J

2016-03-10 Thread Samiyuru Senarathne
Transport config used for the above JFR:
 -
  id: "jaxrs-http"
  host: "0.0.0.0"
  port: 8080
  bossThreadPoolSize: 1
  workerThreadPoolSize: 16
  parameters:
#   -
#    name: "execThreadPoolSize"
#    value: 2048
   -
    name: "disruptor.buffer.size"
    value: 1024
   -
    name: "disruptor.count"
    value: 5
   -
    name: "disruptor.eventhandler.count"
    value: 256
#   -
#    name: "disruptor.wait.strategy"
#    value: "SLEEP_WAITING"
   -
    name: "share.disruptor.with.outbound"
    value: false
   -
    name: "disruptor.consumer.external.worker.pool.size"
    value: 256
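The resolution posted at the top of this thread turned out to be a wrong configuration parameter name that silently fell back to a single event-handler thread. A minimal sketch of how unknown parameter keys could fail fast instead — names and the API here are mine and purely illustrative, not the actual carbon-transport code:

```java
import java.util.Map;
import java.util.Set;

// Illustrative sketch (NOT the actual carbon-transport code): reject unknown
// transport parameter names so a typo fails fast at startup instead of
// silently falling back to a default such as a single event-handler thread.
public class ParamValidator {
    private static final Set<String> KNOWN = Set.of(
            "disruptor.buffer.size",
            "disruptor.count",
            "disruptor.eventhandler.count",
            "disruptor.wait.strategy",
            "share.disruptor.with.outbound",
            "disruptor.consumer.external.worker.pool.size");

    // Throws if any configured key is not a recognized parameter name.
    static void validate(Map<String, String> params) {
        for (String key : params.keySet()) {
            if (!KNOWN.contains(key)) {
                throw new IllegalArgumentException("Unknown transport parameter: " + key);
            }
        }
    }

    // Reads an integer parameter, falling back to a default when absent.
    static int intParam(Map<String, String> params, String key, int dflt) {
        String v = params.get(key);
        return v == null ? dflt : Integer.parseInt(v);
    }

    public static void main(String[] args) {
        Map<String, String> params = Map.of("disruptor.eventhandler.count", "256");
        validate(params); // would throw on a misspelled key
        System.out.println(intParam(params, "disruptor.eventhandler.count", 1)); // 256
    }
}
```

With validation like this, a misspelled key surfaces at startup rather than as a mysterious performance cliff.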


On Fri, Mar 11, 2016 at 11:41 AM, Samiyuru Senarathne 
wrote:

> Please find the attached JFR dump of MSF4J-carbon-message-disruptor test
> performed with following 'ab' config.
> An MSF4J service that 'consumes a 1KB and sleeps for 50ms before
> responding the same 1KB' is used as the service.
>
> ab -k -p 1kb_rand_data.txt -s 999 -c 400 -n 5000 -H "Accept:text/plain"
> http://204.13.85.2:8080/EchoService/echo
>
> AB Results Summary:
> Concurrency Level:  400
> Time taken for tests:   30.224 seconds
> Complete requests:  3000
> Failed requests:0
> Keep-Alive requests:3000
> Total transferred:  3345000 bytes
> Total body sent:3609000
> HTML transferred:   3072000 bytes
> Requests per second:99.26 [#/sec] (mean)
> Time per request:   4029.882 [ms] (mean)
> Time per request:   10.075 [ms] (mean, across all concurrent requests)
> Transfer rate:  108.08 [Kbytes/sec] received
> 116.61 kb/s sent
> 224.69 kb/s total
>
>
>
> On Fri, Mar 11, 2016 at 10:39 AM, Samiyuru Senarathne 
> wrote:
>
>> Hi,
>>
>> Please find results of the tests I have done so far.
>>
>> https://docs.google.com/a/wso2.com/spreadsheets/d/16TXeXU022b5ILqkRsY3zZdnu2OiA3yWEN42xVL8eXa4/edit?usp=sharing
>>
>> MSF4J 1.0.0 section of this tests gives a good insight into the netty
>> thread behaviour. I focused a bit more on that because confirming the
>> practical behaviour of netty thread model is one of the first things we
>> should do.
>>
>> According to these results,
>>
>>- Increasing the netty boss pool does not make any difference.
>>- For the netty worker pool, it's enough to have a number of threads
>>equal to the number of cpus.
>>- Increasing the number of threads of the pool of the netty handler
>>that does the actual work increases the performance significantly for high
>>concurrency levels.
>>
>> Regarding the tests that include disruptor,
>> I applied the optimal configurations according to [1]. But I could not
>> get results even close to the results of GW and the results I got are bit
>> strange (99 TPS regardless of concurrency level). I think we should try
>> more variations of disruptor settings before coming to a conclusion.
>>
>> [1] -
>> https://docs.google.com/a/wso2.com/spreadsheets/d/1ck2O7eMkswJSQCgW8ciOkIyZkXGiZhIxRWNN-R9vgMk/edit?usp=sharing
>>
>> Best Regards,
>> Samiyuru
>>
>> On Fri, Mar 11, 2016 at 10:06 AM, Afkham Azeez  wrote:
>>
>>> I think Samiyuru tested with that new worker pool & still the
>>> performance is unacceptable.
>>> On Mar 11, 2016 9:45 AM, "Isuru Ranawaka"  wrote:
>>>
 Hi ,

 We have already added  native worker pool for Disruptor and Samiyuru is
 doing testing on that. We will make disruptor optional as well.

 thanks

 On Thu, Mar 10, 2016 at 9:29 AM, Kasun Indrasiri 
 wrote:

> Yes, we can make the disruptor optional. Also, we should try using the
> native worker pool for Event Handler[1], so that the Disruptor itself runs
> the event handler on a worker pool. We'll implement both approaches and do
> a comparison.
>
> [1]
> https://lmax-exchange.github.io/disruptor/docs/com/lmax/disruptor/dsl/Disruptor.html#handleEventsWithWorkerPool(com.lmax.disruptor.WorkHandler...)
>
> On Thu, Mar 10, 2016 at 8:48 AM, Afkham Azeez  wrote:
>
>> After upgrading to the new transport, we are seeing a significant
>> drop in performance for any service that takes some time to execute. We
>> have
>> tried with the configuration used for the gateway which gave the best
>> figures on the same hardware. We have also noted that using a separate
>> dedicated executor thread pool, which is supported by Netty, gave much
>> better performance than the disruptor based implementation. Even in 
>> theory,
>> disruptor cannot give better performance when used with a real service 
>> that
>> does some real work, rather than doing passthrough, for example. Can we
>> improve the Netty transport to make going through disruptor optional?
>>
>> --
>> *Afkham Azeez*
>> Director of Architecture; WSO2, Inc.; http://wso2.com
>> Member; Apache Software Foundation; http://www.apache.org/
>> * *
>> *email: **az...@wso2.com* 
>> * cell: +94 77 3320919

Re: [Architecture] The new disruptor based Netty transport is not working well for MSF4J

2016-03-10 Thread Samiyuru Senarathne
Hi,

Please find results of the tests I have done so far.
https://docs.google.com/a/wso2.com/spreadsheets/d/16TXeXU022b5ILqkRsY3zZdnu2OiA3yWEN42xVL8eXa4/edit?usp=sharing

The MSF4J 1.0.0 section of these tests gives a good insight into the Netty
thread behaviour. I focused a bit more on that because confirming the
practical behaviour of the Netty thread model is one of the first things we
should do.

According to these results,

   - Increasing the Netty boss pool does not make any difference.
   - For the Netty worker pool, it's enough to have a number of threads
   equal to the number of CPUs.
   - Increasing the number of threads in the pool of the Netty handler that
   does the actual work significantly increases performance at high
   concurrency levels.

Regarding the tests that include the disruptor: I applied the optimal
configurations according to [1], but I could not get results even close to
the GW results, and the results I got are a bit strange (99 TPS regardless
of concurrency level). I think we should try more variations of disruptor
settings before coming to a conclusion.

[1] -
https://docs.google.com/a/wso2.com/spreadsheets/d/1ck2O7eMkswJSQCgW8ciOkIyZkXGiZhIxRWNN-R9vgMk/edit?usp=sharing
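The flat ~99 TPS is consistent with a simple blocking-handler cap: max TPS is roughly the number of effective handler threads divided by the per-request service time. A back-of-the-envelope check — my own arithmetic, assuming the five disruptors each effectively ran a single event-handler thread, as the resolution at the top of this thread later reported:

```java
// Back-of-the-envelope throughput cap for blocking handlers (my own
// arithmetic, not from the JFR data): with fully blocking work,
// max TPS ~= effective handler threads / per-request service time.
public class TpsCap {
    static double maxTps(int effectiveThreads, double serviceTimeSeconds) {
        return effectiveThreads / serviceTimeSeconds;
    }

    public static void main(String[] args) {
        // 5 disruptors, each effectively running ONE event-handler thread
        // (the misconfiguration reported at the top of this thread),
        // against the 50 ms sleep in the echo service:
        System.out.println(maxTps(5 * 1, 0.050)); // ~100, close to the observed 99.26 TPS
        // Srinath's earlier example: 200 threads with a 500 ms sleep:
        System.out.println(maxTps(200, 0.500));   // 400 TPS
    }
}
```

The cap being independent of client concurrency also explains why raising the `ab` concurrency level did not move the number.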

Best Regards,
Samiyuru

On Fri, Mar 11, 2016 at 10:06 AM, Afkham Azeez  wrote:

> I think Samiyuru tested with that new worker pool & still the performance
> is unacceptable.
> On Mar 11, 2016 9:45 AM, "Isuru Ranawaka"  wrote:
>
>> Hi ,
>>
>> We have already added  native worker pool for Disruptor and Samiyuru is
>> doing testing on that. We will make disruptor optional as well.
>>
>> thanks
>>
>> On Thu, Mar 10, 2016 at 9:29 AM, Kasun Indrasiri  wrote:
>>
>>> Yes, we can make the disruptor optional. Also, we should try using the
>>> native worker pool for Event Handler[1], so that the Disruptor itself runs
>>> the event handler on a worker pool. We'll implement both approaches and do
>>> a comparison.
>>>
>>> [1]
>>> https://lmax-exchange.github.io/disruptor/docs/com/lmax/disruptor/dsl/Disruptor.html#handleEventsWithWorkerPool(com.lmax.disruptor.WorkHandler...)
>>>
>>> On Thu, Mar 10, 2016 at 8:48 AM, Afkham Azeez  wrote:
>>>
 After upgrading to the new transport, we are seeing a significant drop
 in performance for any service that takes some time to execute. We have
 tried with the configuration used for the gateway which gave the best
 figures on the same hardware. We have also noted that using a separate
 dedicated executor thread pool, which is supported by Netty, gave much
 better performance than the disruptor based implementation. Even in theory,
 disruptor cannot give better performance when used with a real service that
 does some real work, rather than doing passthrough, for example. Can we
 improve the Netty transport to make going through disruptor optional?

 --
 *Afkham Azeez*
 Director of Architecture; WSO2, Inc.; http://wso2.com
 Member; Apache Software Foundation; http://www.apache.org/
 * *
 *email: **az...@wso2.com* 
 * cell: +94 77 3320919 <%2B94%2077%203320919>blog: *
 *http://blog.afkham.org* 
 *twitter: **http://twitter.com/afkham_azeez*
 
 *linked-in: **http://lk.linkedin.com/in/afkhamazeez
 *

 *Lean . Enterprise . Middleware*

>>>
>>>
>>>
>>> --
>>> Kasun Indrasiri
>>> Software Architect
>>> WSO2, Inc.; http://wso2.com
>>> lean.enterprise.middleware
>>>
>>> cell: +94 77 556 5206
>>> Blog : http://kasunpanorama.blogspot.com/
>>>
>>
>>
>>
>> --
>> Best Regards
>> Isuru Ranawaka
>> M: +94714629880
>> Blog : http://isurur.blogspot.com/
>>
>


-- 
Samiyuru Senarathne
*Software Engineer*
Mobile : +94 (0) 71 134 6087
samiy...@wso2.com


Re: [Architecture] The new disruptor based Netty transport is not working well for MSF4J

2016-03-10 Thread Kasun Indrasiri
Yes, Azeez.
So we are currently testing the following approaches, with some processing
on the engine side (rather than just pass-through):

- Transport with a Netty worker pool (I/O) -> a message-processing worker
pool that does the dispatching/processing of the messages (no Disruptor /
making the disruptor optional).
- Transport with Disruptor + a single event handler (the GW's default
thread model): this is the model that gave us the best performance for the
GW scenario (header-based routing).
- Transport with Disruptor + a native worker pool for the event handler.

Based on the outcome of this, we can decide which thread model suits an
MSF4J-like scenario best, and we can make it configurable at the transport
side.
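The difference between the single-event-handler and worker-pool variants can be sketched with plain JDK concurrency. This is an illustrative stand-in only, not the LMAX Disruptor API (the real worker-pool variant is the handleEventsWithWorkerPool DSL call cited elsewhere in this thread): a bounded queue stands in for the ring buffer, drained by either one handler thread or a pool of workers.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.CountDownLatch;

// Illustrative JDK-only sketch (NOT the LMAX Disruptor API): a bounded queue
// stands in for the ring buffer. With one consumer, blocking work serializes;
// with a worker pool, the same blocking work overlaps across threads.
public class HandlerModels {
    static long runMillis(int consumerThreads, int events) throws InterruptedException {
        BlockingQueue<Integer> ring = new ArrayBlockingQueue<>(1024);
        CountDownLatch done = new CountDownLatch(events);
        for (int i = 0; i < consumerThreads; i++) {
            Thread t = new Thread(() -> {
                try {
                    while (true) {
                        ring.take();     // claim the next event
                        Thread.sleep(5); // blocking "service" work
                        done.countDown();
                    }
                } catch (InterruptedException ignored) { /* shut down */ }
            });
            t.setDaemon(true);
            t.start();
        }
        long start = System.nanoTime();
        for (int i = 0; i < events; i++) {
            ring.put(i); // publisher side
        }
        done.await();
        return (System.nanoTime() - start) / 1_000_000;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("1 handler: " + runMillis(1, 40) + " ms"); // ~40 * 5 ms, serialized
        System.out.println("8 workers: " + runMillis(8, 40) + " ms"); // blocking work overlaps
    }
}
```

For a purely CPU-bound pipeline the single-handler variant avoids hand-off cost, which is presumably why it won for the GW's header-based routing; once handlers block, the worker-pool variant should dominate, matching the numbers reported in this thread.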


On Fri, Mar 11, 2016 at 10:06 AM, Afkham Azeez  wrote:

> I think Samiyuru tested with that new worker pool & still the performance
> is unacceptable.
> On Mar 11, 2016 9:45 AM, "Isuru Ranawaka"  wrote:
>
>> Hi ,
>>
>> We have already added  native worker pool for Disruptor and Samiyuru is
>> doing testing on that. We will make disruptor optional as well.
>>
>> thanks
>>
>> On Thu, Mar 10, 2016 at 9:29 AM, Kasun Indrasiri  wrote:
>>
>>> Yes, we can make the disruptor optional. Also, we should try using the
>>> native worker pool for Event Handler[1], so that the Disruptor itself runs
>>> the event handler on a worker pool. We'll implement both approaches and do
>>> a comparison.
>>>
>>> [1]
>>> https://lmax-exchange.github.io/disruptor/docs/com/lmax/disruptor/dsl/Disruptor.html#handleEventsWithWorkerPool(com.lmax.disruptor.WorkHandler...)
>>>
>>> On Thu, Mar 10, 2016 at 8:48 AM, Afkham Azeez  wrote:
>>>
 After upgrading to the new transport, we are seeing a significant drop
 in performance for any service that takes some time to execute. We have
 tried with the configuration used for the gateway which gave the best
 figures on the same hardware. We have also noted that using a separate
 dedicated executor thread pool, which is supported by Netty, gave much
 better performance than the disruptor based implementation. Even in theory,
 disruptor cannot give better performance when used with a real service that
 does some real work, rather than doing passthrough, for example. Can we
 improve the Netty transport to make going through disruptor optional?

 --
 *Afkham Azeez*
 Director of Architecture; WSO2, Inc.; http://wso2.com
 Member; Apache Software Foundation; http://www.apache.org/
 * *
 *email: **az...@wso2.com* 
 * cell: +94 77 3320919 <%2B94%2077%203320919>blog: *
 *http://blog.afkham.org* 
 *twitter: **http://twitter.com/afkham_azeez*
 
 *linked-in: **http://lk.linkedin.com/in/afkhamazeez
 *

 *Lean . Enterprise . Middleware*

>>>
>>>
>>>
>>> --
>>> Kasun Indrasiri
>>> Software Architect
>>> WSO2, Inc.; http://wso2.com
>>> lean.enterprise.middleware
>>>
>>> cell: +94 77 556 5206
>>> Blog : http://kasunpanorama.blogspot.com/
>>>
>>
>>
>>
>> --
>> Best Regards
>> Isuru Ranawaka
>> M: +94714629880
>> Blog : http://isurur.blogspot.com/
>>
>


-- 
Kasun Indrasiri
Software Architect
WSO2, Inc.; http://wso2.com
lean.enterprise.middleware

cell: +94 77 556 5206
Blog : http://kasunpanorama.blogspot.com/


Re: [Architecture] The new disruptor based Netty transport is not working well for MSF4J

2016-03-10 Thread Afkham Azeez
I think Samiyuru tested with that new worker pool & still the performance
is unacceptable.
On Mar 11, 2016 9:45 AM, "Isuru Ranawaka"  wrote:

> Hi ,
>
> We have already added  native worker pool for Disruptor and Samiyuru is
> doing testing on that. We will make disruptor optional as well.
>
> thanks
>
> On Thu, Mar 10, 2016 at 9:29 AM, Kasun Indrasiri  wrote:
>
>> Yes, we can make the disruptor optional. Also, we should try using the
>> native worker pool for Event Handler[1], so that the Disruptor itself runs
>> the event handler on a worker pool. We'll implement both approaches and do
>> a comparison.
>>
>> [1]
>> https://lmax-exchange.github.io/disruptor/docs/com/lmax/disruptor/dsl/Disruptor.html#handleEventsWithWorkerPool(com.lmax.disruptor.WorkHandler..
>> .)
>>
>> On Thu, Mar 10, 2016 at 8:48 AM, Afkham Azeez  wrote:
>>
>>> After upgrading to the new transport, we are seeing a significant drop
>>> in performance for any service that take some time to execute. We have
>>> tried with the configuration used for the gateway which gave the best
>>> figures on the same hardware. We have also noted that using a separate
>>> dedicated executor thread pool, which is supported by Netty, gave much
>>> better performance than the disruptor based implementation. Even in theory,
>>> disruptor cannot give better performance when used with a real service that
>>> does some real work, rather than doing passthrough, for example. Can we
>>> improve the Netty transport to make going through disruptor optional?
>>>
>>> --
>>> *Afkham Azeez*
>>> Director of Architecture; WSO2, Inc.; http://wso2.com
>>> Member; Apache Software Foundation; http://www.apache.org/
>>> * *
>>> *email: **az...@wso2.com* 
>>> * cell: +94 77 3320919 <%2B94%2077%203320919>blog: *
>>> *http://blog.afkham.org* 
>>> *twitter: **http://twitter.com/afkham_azeez*
>>> 
>>> *linked-in: **http://lk.linkedin.com/in/afkhamazeez
>>> *
>>>
>>> *Lean . Enterprise . Middleware*
>>>
>>
>>
>>
>> --
>> Kasun Indrasiri
>> Software Architect
>> WSO2, Inc.; http://wso2.com
>> lean.enterprise.middleware
>>
>> cell: +94 77 556 5206
>> Blog : http://kasunpanorama.blogspot.com/
>>
>
>
>
> --
> Best Regards
> Isuru Ranawaka
> M: +94714629880
> Blog : http://isurur.blogspot.com/
>


Re: [Architecture] The new disruptor based Netty transport is not working well for MSF4J

2016-03-10 Thread Isuru Ranawaka
Hi,

We have already added a native worker pool for the Disruptor, and Samiyuru is
testing it. We will make the disruptor optional as well.

thanks

On Thu, Mar 10, 2016 at 9:29 AM, Kasun Indrasiri  wrote:

> Yes, we can make the disruptor optional. Also, we should try using the
> native worker pool for Event Handler[1], so that the Disruptor itself runs
> the event handler on a worker pool. We'll implement both approaches and do
> a comparison.
>
> [1]
> https://lmax-exchange.github.io/disruptor/docs/com/lmax/disruptor/dsl/Disruptor.html#handleEventsWithWorkerPool(com.lmax.disruptor.WorkHandler..
> .)
>
> On Thu, Mar 10, 2016 at 8:48 AM, Afkham Azeez  wrote:
>
>> After upgrading to the new transport, we are seeing a significant drop in
>> performance for any service that take some time to execute. We have tried
>> with the configuration used for the gateway which gave the best figures on
>> the same hardware. We have also noted that using a separate dedicated
>> executor thread pool, which is supported by Netty, gave much better
>> performance than the disruptor based implementation. Even in theory,
>> disruptor cannot give better performance when used with a real service that
>> does some real work, rather than doing passthrough, for example. Can we
>> improve the Netty transport to make going through disruptor optional?
>>
>> --
>> *Afkham Azeez*
>> Director of Architecture; WSO2, Inc.; http://wso2.com
>> Member; Apache Software Foundation; http://www.apache.org/
>> * *
>> *email: **az...@wso2.com* 
>> * cell: +94 77 3320919 <%2B94%2077%203320919>blog: *
>> *http://blog.afkham.org* 
>> *twitter: **http://twitter.com/afkham_azeez*
>> 
>> *linked-in: **http://lk.linkedin.com/in/afkhamazeez
>> *
>>
>> *Lean . Enterprise . Middleware*
>>
>
>
>
> --
> Kasun Indrasiri
> Software Architect
> WSO2, Inc.; http://wso2.com
> lean.enterprise.middleware
>
> cell: +94 77 556 5206
> Blog : http://kasunpanorama.blogspot.com/
>



-- 
Best Regards
Isuru Ranawaka
M: +94714629880
Blog : http://isurur.blogspot.com/


Re: [Architecture] The new disruptor based Netty transport is not working well for MSF4J

2016-03-10 Thread Sagara Gunathunga
On Thu, Mar 10, 2016 at 10:26 AM, Afkham Azeez  wrote:

> No from day 1, we have decided that GW & MSF4J will use the same Netty
> transport component so that the config file will be the same as well as
> improvements made to that transport will be automatically available for
> both products. So now at least for MSF4J, we have issues in using the Netty
> transport in its current state, so we have to fix those issues.
>

Reuse of the same config files and components is an advantage for us as the
framework developers/maintainers, but my question was: what benefits does the
Carbon transport grant to end users of MSF4J? I don't think we can compromise
performance for a reason that matters more to framework maintainers than to
end users. IMHO, if we continue to use the Carbon transport, it should at
least perform at the same level as vanilla Netty.

Can't we keep the Disruptor while improving the performance of the Carbon
transport?

Thanks !

>
> On Thu, Mar 10, 2016 at 10:22 AM, Sagara Gunathunga 
> wrote:
>
>>
>> When we discuss last week about Carbon transports for MSF4J the main
>> rational we identified was moving to Carbon transport will decouple
>> transport threads from worker threads through Disruptor and provide lot of
>> flexibility and manageability. If disruptor can't give better performance
>> we should skip it no argument about that, but once we skip the disruptor
>> the rational we made last week about thread pool decoupling become invalid
>> if so why MSF4J should depend on Carbon transport ? If Carbon transport
>> can't provide real advantages shouldn't MSF4J depend on vanilla Netty
>> transports and make thing more lightweight ?
>>
>> Thanks !
>>
>> On Thu, Mar 10, 2016 at 9:50 AM, Afkham Azeez  wrote:
>>
>>> We need to do this fast because this task has taken close to 3 weeks now.
>>>
>>> On Thu, Mar 10, 2016 at 9:29 AM, Kasun Indrasiri  wrote:
>>>
 Yes, we can make the disruptor optional. Also, we should try using the
 native worker pool for Event Handler[1], so that the Disruptor itself runs
 the event handler on a worker pool. We'll implement both approaches and do
 a comparison.

 [1]
 https://lmax-exchange.github.io/disruptor/docs/com/lmax/disruptor/dsl/Disruptor.html#handleEventsWithWorkerPool(com.lmax.disruptor.WorkHandler..
 .)

 On Thu, Mar 10, 2016 at 8:48 AM, Afkham Azeez  wrote:

> After upgrading to the new transport, we are seeing a significant drop
> in performance for any service that take some time to execute. We have
> tried with the configuration used for the gateway which gave the best
> figures on the same hardware. We have also noted that using a separate
> dedicated executor thread pool, which is supported by Netty, gave much
> better performance than the disruptor based implementation. Even in 
> theory,
> disruptor cannot give better performance when used with a real service 
> that
> does some real work, rather than doing passthrough, for example. Can we
> improve the Netty transport to make going through disruptor optional?
>
> --
> *Afkham Azeez*
> Director of Architecture; WSO2, Inc.; http://wso2.com
> Member; Apache Software Foundation; http://www.apache.org/
> * *
> *email: **az...@wso2.com* 
> * cell: +94 77 3320919 <%2B94%2077%203320919>blog: *
> *http://blog.afkham.org* 
> *twitter: **http://twitter.com/afkham_azeez*
> 
> *linked-in: **http://lk.linkedin.com/in/afkhamazeez
> *
>
> *Lean . Enterprise . Middleware*
>



 --
 Kasun Indrasiri
 Software Architect
 WSO2, Inc.; http://wso2.com
 lean.enterprise.middleware

 cell: +94 77 556 5206
 Blog : http://kasunpanorama.blogspot.com/

>>>
>>>
>>>
>>> --
>>> *Afkham Azeez*
>>> Director of Architecture; WSO2, Inc.; http://wso2.com
>>> Member; Apache Software Foundation; http://www.apache.org/
>>> * *
>>> *email: **az...@wso2.com* 
>>> * cell: +94 77 3320919 <%2B94%2077%203320919>blog: *
>>> *http://blog.afkham.org* 
>>> *twitter: **http://twitter.com/afkham_azeez*
>>> 
>>> *linked-in: **http://lk.linkedin.com/in/afkhamazeez
>>> *
>>>
>>> *Lean . Enterprise . Middleware*
>>>
>>> ___
>>> Architecture mailing list
>>> Architecture@wso2.org
>>> https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture
>>>
>>>
>>
>>
>> --
>> Sagara Gunathunga
>>
>> Architect; WSO2, Inc.;  http://wso2.com
>> V.P Apache Web Services;http://ws.apache.org/
>> Linkedin; http://www.linkedin.com/in/ssagara
>> Blog ;  http://ssagara.blogspot.com
>>
>>
>> ___
>> Architecture mailing list
>> Architecture@wso2.org
>> https://mail.wso2.org

Re: [Architecture] The new disruptor based Netty transport is not working well for MSF4J

2016-03-09 Thread Isuru Udana
If we have issues with the disruptor, those issues equally affect the GW and
GW-based servers in most real use cases. So fixing them properly and
continuing to use the same architecture for both the GW and MSF4J looks ideal
to me.

On Thu, Mar 10, 2016 at 10:26 AM, Afkham Azeez  wrote:

> No from day 1, we have decided that GW & MSF4J will use the same Netty
> transport component so that the config file will be the same as well as
> improvements made to that transport will be automatically available for
> both products. So now at least for MSF4J, we have issues in using the Netty
> transport in its current state, so we have to fix those issues.
>
> On Thu, Mar 10, 2016 at 10:22 AM, Sagara Gunathunga 
> wrote:
>
>>
>> When we discuss last week about Carbon transports for MSF4J the main
>> rational we identified was moving to Carbon transport will decouple
>> transport threads from worker threads through Disruptor and provide lot of
>> flexibility and manageability. If disruptor can't give better performance
>> we should skip it no argument about that, but once we skip the disruptor
>> the rational we made last week about thread pool decoupling become invalid
>> if so why MSF4J should depend on Carbon transport ? If Carbon transport
>> can't provide real advantages shouldn't MSF4J depend on vanilla Netty
>> transports and make thing more lightweight ?
>>
>> Thanks !
>>
>> On Thu, Mar 10, 2016 at 9:50 AM, Afkham Azeez  wrote:
>>
>>> We need to do this fast because this task has taken close to 3 weeks now.
>>>
>>> On Thu, Mar 10, 2016 at 9:29 AM, Kasun Indrasiri  wrote:
>>>
 Yes, we can make the disruptor optional. Also, we should try using the
 native worker pool for Event Handler[1], so that the Disruptor itself runs
 the event handler on a worker pool. We'll implement both approaches and do
 a comparison.

 [1]
 https://lmax-exchange.github.io/disruptor/docs/com/lmax/disruptor/dsl/Disruptor.html#handleEventsWithWorkerPool(com.lmax.disruptor.WorkHandler..
 .)

 On Thu, Mar 10, 2016 at 8:48 AM, Afkham Azeez  wrote:

> After upgrading to the new transport, we are seeing a significant drop
> in performance for any service that take some time to execute. We have
> tried with the configuration used for the gateway which gave the best
> figures on the same hardware. We have also noted that using a separate
> dedicated executor thread pool, which is supported by Netty, gave much
> better performance than the disruptor based implementation. Even in 
> theory,
> disruptor cannot give better performance when used with a real service 
> that
> does some real work, rather than doing passthrough, for example. Can we
> improve the Netty transport to make going through disruptor optional?
>
> --
> *Afkham Azeez*
> Director of Architecture; WSO2, Inc.; http://wso2.com
> Member; Apache Software Foundation; http://www.apache.org/
> * *
> *email: **az...@wso2.com* 
> * cell: +94 77 3320919 <%2B94%2077%203320919>blog: *
> *http://blog.afkham.org* 
> *twitter: **http://twitter.com/afkham_azeez*
> 
> *linked-in: **http://lk.linkedin.com/in/afkhamazeez
> *
>
> *Lean . Enterprise . Middleware*
>



 --
 Kasun Indrasiri
 Software Architect
 WSO2, Inc.; http://wso2.com
 lean.enterprise.middleware

 cell: +94 77 556 5206
 Blog : http://kasunpanorama.blogspot.com/

>>>
>>>
>>>
>>> --
>>> *Afkham Azeez*
>>> Director of Architecture; WSO2, Inc.; http://wso2.com
>>> Member; Apache Software Foundation; http://www.apache.org/
>>> * *
>>> *email: **az...@wso2.com* 
>>> * cell: +94 77 3320919 <%2B94%2077%203320919>blog: *
>>> *http://blog.afkham.org* 
>>> *twitter: **http://twitter.com/afkham_azeez*
>>> 
>>> *linked-in: **http://lk.linkedin.com/in/afkhamazeez
>>> *
>>>
>>> *Lean . Enterprise . Middleware*
>>>
>>> ___
>>> Architecture mailing list
>>> Architecture@wso2.org
>>> https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture
>>>
>>>
>>
>>
>> --
>> Sagara Gunathunga
>>
>> Architect; WSO2, Inc.;  http://wso2.com
>> V.P Apache Web Services;http://ws.apache.org/
>> Linkedin; http://www.linkedin.com/in/ssagara
>> Blog ;  http://ssagara.blogspot.com
>>
>>
>> ___
>> Architecture mailing list
>> Architecture@wso2.org
>> https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture
>>
>>
>
>
> --
> *Afkham Azeez*
> Director of Architecture; WSO2, Inc.; http://wso2.com
> Member; Apache Software Foundation; http://www.apache.org/
> * *
> *email: **az...@wso2.com* 
> * cell: +94 77 3320919

Re: [Architecture] The new disruptor based Netty transport is not working well for MSF4J

2016-03-09 Thread Afkham Azeez
No, from day 1 we decided that the GW & MSF4J will use the same Netty
transport component, so that the config file is the same and improvements
made to that transport are automatically available to both products. Now, at
least for MSF4J, we have issues using the Netty transport in its current
state, so we have to fix those issues.

On Thu, Mar 10, 2016 at 10:22 AM, Sagara Gunathunga  wrote:

>
> When we discuss last week about Carbon transports for MSF4J the main
> rational we identified was moving to Carbon transport will decouple
> transport threads from worker threads through Disruptor and provide lot of
> flexibility and manageability. If disruptor can't give better performance
> we should skip it no argument about that, but once we skip the disruptor
> the rational we made last week about thread pool decoupling become invalid
> if so why MSF4J should depend on Carbon transport ? If Carbon transport
> can't provide real advantages shouldn't MSF4J depend on vanilla Netty
> transports and make thing more lightweight ?
>
> Thanks !
>
> On Thu, Mar 10, 2016 at 9:50 AM, Afkham Azeez  wrote:
>
>> We need to do this fast because this task has taken close to 3 weeks now.
>>
>> On Thu, Mar 10, 2016 at 9:29 AM, Kasun Indrasiri  wrote:
>>
>>> Yes, we can make the disruptor optional. Also, we should try using the
>>> native worker pool for Event Handler[1], so that the Disruptor itself runs
>>> the event handler on a worker pool. We'll implement both approaches and do
>>> a comparison.
>>>
>>> [1]
>>> https://lmax-exchange.github.io/disruptor/docs/com/lmax/disruptor/dsl/Disruptor.html#handleEventsWithWorkerPool(com.lmax.disruptor.WorkHandler..
>>> .)
>>>
>>> On Thu, Mar 10, 2016 at 8:48 AM, Afkham Azeez  wrote:
>>>
 After upgrading to the new transport, we are seeing a significant drop
 in performance for any service that take some time to execute. We have
 tried with the configuration used for the gateway which gave the best
 figures on the same hardware. We have also noted that using a separate
 dedicated executor thread pool, which is supported by Netty, gave much
 better performance than the disruptor based implementation. Even in theory,
 disruptor cannot give better performance when used with a real service that
 does some real work, rather than doing passthrough, for example. Can we
 improve the Netty transport to make going through disruptor optional?

 --
 *Afkham Azeez*
 Director of Architecture; WSO2, Inc.; http://wso2.com
 Member; Apache Software Foundation; http://www.apache.org/
 * *
 *email: **az...@wso2.com* 
 * cell: +94 77 3320919 <%2B94%2077%203320919>blog: *
 *http://blog.afkham.org* 
 *twitter: **http://twitter.com/afkham_azeez*
 
 *linked-in: **http://lk.linkedin.com/in/afkhamazeez
 *

 *Lean . Enterprise . Middleware*

>>>
>>>
>>>
>>> --
>>> Kasun Indrasiri
>>> Software Architect
>>> WSO2, Inc.; http://wso2.com
>>> lean.enterprise.middleware
>>>
>>> cell: +94 77 556 5206
>>> Blog : http://kasunpanorama.blogspot.com/
>>>
>>
>>
>>
>> --
>> *Afkham Azeez*
>> Director of Architecture; WSO2, Inc.; http://wso2.com
>> Member; Apache Software Foundation; http://www.apache.org/
>> * *
>> *email: **az...@wso2.com* 
>> * cell: +94 77 3320919 <%2B94%2077%203320919>blog: *
>> *http://blog.afkham.org* 
>> *twitter: **http://twitter.com/afkham_azeez*
>> 
>> *linked-in: **http://lk.linkedin.com/in/afkhamazeez
>> *
>>
>> *Lean . Enterprise . Middleware*
>>
>> ___
>> Architecture mailing list
>> Architecture@wso2.org
>> https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture
>>
>>
>
>
> --
> Sagara Gunathunga
>
> Architect; WSO2, Inc.;  http://wso2.com
> V.P Apache Web Services;http://ws.apache.org/
> Linkedin; http://www.linkedin.com/in/ssagara
> Blog ;  http://ssagara.blogspot.com
>
>
> ___
> Architecture mailing list
> Architecture@wso2.org
> https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture
>
>


-- 
*Afkham Azeez*
Director of Architecture; WSO2, Inc.; http://wso2.com
Member; Apache Software Foundation; http://www.apache.org/
* *
*email: **az...@wso2.com* 
* cell: +94 77 3320919blog: **http://blog.afkham.org*

*twitter: **http://twitter.com/afkham_azeez*

*linked-in: **http://lk.linkedin.com/in/afkhamazeez
*

*Lean . Enterprise . Middleware*


Re: [Architecture] The new disruptor based Netty transport is not working well for MSF4J

2016-03-09 Thread Sagara Gunathunga
When we discussed Carbon transports for MSF4J last week, the main rationale we
identified was that moving to the Carbon transport would decouple transport
threads from worker threads through the Disruptor, providing a lot of
flexibility and manageability. If the disruptor can't give better performance,
we should skip it, no argument about that. But once we skip the disruptor, the
thread-pool-decoupling rationale we made last week becomes invalid, so why
should MSF4J depend on the Carbon transport at all? If the Carbon transport
can't provide real advantages, shouldn't MSF4J depend on the vanilla Netty
transport and make things more lightweight?

Thanks !
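
A back-of-envelope check of the thread-pool question: once every request
blocks a worker thread (sleep, DB call), the choice of transport threading
model cannot lift throughput above workers / service-time. A minimal
illustrative sketch (the numbers below are examples, not measured results):

```java
// Little's law bound: with W worker threads each blocked for S seconds per
// request, sustainable throughput is at most W / S requests per second,
// regardless of whether a disruptor or a plain pool feeds the workers.
public class ThroughputBound {
    static double maxTps(int workers, double serviceTimeSeconds) {
        return workers / serviceTimeSeconds;
    }

    public static void main(String[] args) {
        // e.g. 200 worker threads, each blocking 500 ms per request
        System.out.println(maxTps(200, 0.5)); // 400.0 requests/sec
    }
}
```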

On Thu, Mar 10, 2016 at 9:50 AM, Afkham Azeez  wrote:

> We need to do this fast because this task has taken close to 3 weeks now.
>
> On Thu, Mar 10, 2016 at 9:29 AM, Kasun Indrasiri  wrote:
>
>> Yes, we can make the disruptor optional. Also, we should try using the
>> native worker pool for Event Handler[1], so that the Disruptor itself runs
>> the event handler on a worker pool. We'll implement both approaches and do
>> a comparison.
>>
>> [1]
>> https://lmax-exchange.github.io/disruptor/docs/com/lmax/disruptor/dsl/Disruptor.html#handleEventsWithWorkerPool(com.lmax.disruptor.WorkHandler..
>> .)
>>
>> On Thu, Mar 10, 2016 at 8:48 AM, Afkham Azeez  wrote:
>>
>>> After upgrading to the new transport, we are seeing a significant drop
>>> in performance for any service that take some time to execute. We have
>>> tried with the configuration used for the gateway which gave the best
>>> figures on the same hardware. We have also noted that using a separate
>>> dedicated executor thread pool, which is supported by Netty, gave much
>>> better performance than the disruptor based implementation. Even in theory,
>>> disruptor cannot give better performance when used with a real service that
>>> does some real work, rather than doing passthrough, for example. Can we
>>> improve the Netty transport to make going through disruptor optional?
>>>
>>> --
>>> *Afkham Azeez*
>>> Director of Architecture; WSO2, Inc.; http://wso2.com
>>> Member; Apache Software Foundation; http://www.apache.org/
>>> * *
>>> *email: **az...@wso2.com* 
>>> * cell: +94 77 3320919 <%2B94%2077%203320919>blog: *
>>> *http://blog.afkham.org* 
>>> *twitter: **http://twitter.com/afkham_azeez*
>>> 
>>> *linked-in: **http://lk.linkedin.com/in/afkhamazeez
>>> *
>>>
>>> *Lean . Enterprise . Middleware*
>>>
>>
>>
>>
>> --
>> Kasun Indrasiri
>> Software Architect
>> WSO2, Inc.; http://wso2.com
>> lean.enterprise.middleware
>>
>> cell: +94 77 556 5206
>> Blog : http://kasunpanorama.blogspot.com/
>>
>
>
>
> --
> *Afkham Azeez*
> Director of Architecture; WSO2, Inc.; http://wso2.com
> Member; Apache Software Foundation; http://www.apache.org/
> * *
> *email: **az...@wso2.com* 
> * cell: +94 77 3320919 <%2B94%2077%203320919>blog: *
> *http://blog.afkham.org* 
> *twitter: **http://twitter.com/afkham_azeez*
> 
> *linked-in: **http://lk.linkedin.com/in/afkhamazeez
> *
>
> *Lean . Enterprise . Middleware*
>
> ___
> Architecture mailing list
> Architecture@wso2.org
> https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture
>
>


-- 
Sagara Gunathunga

Architect; WSO2, Inc.;  http://wso2.com
V.P Apache Web Services;http://ws.apache.org/
Linkedin; http://www.linkedin.com/in/ssagara
Blog ;  http://ssagara.blogspot.com


Re: [Architecture] The new disruptor based Netty transport is not working well for MSF4J

2016-03-09 Thread Afkham Azeez
We need to do this fast because this task has taken close to 3 weeks now.

On Thu, Mar 10, 2016 at 9:29 AM, Kasun Indrasiri  wrote:

> Yes, we can make the disruptor optional. Also, we should try using the
> native worker pool for Event Handler[1], so that the Disruptor itself runs
> the event handler on a worker pool. We'll implement both approaches and do
> a comparison.
>
> [1]
> https://lmax-exchange.github.io/disruptor/docs/com/lmax/disruptor/dsl/Disruptor.html#handleEventsWithWorkerPool(com.lmax.disruptor.WorkHandler..
> .)
>
> On Thu, Mar 10, 2016 at 8:48 AM, Afkham Azeez  wrote:
>
>> After upgrading to the new transport, we are seeing a significant drop in
>> performance for any service that take some time to execute. We have tried
>> with the configuration used for the gateway which gave the best figures on
>> the same hardware. We have also noted that using a separate dedicated
>> executor thread pool, which is supported by Netty, gave much better
>> performance than the disruptor based implementation. Even in theory,
>> disruptor cannot give better performance when used with a real service that
>> does some real work, rather than doing passthrough, for example. Can we
>> improve the Netty transport to make going through disruptor optional?
>>
>> --
>> *Afkham Azeez*
>> Director of Architecture; WSO2, Inc.; http://wso2.com
>> Member; Apache Software Foundation; http://www.apache.org/
>> * *
>> *email: **az...@wso2.com* 
>> * cell: +94 77 3320919 <%2B94%2077%203320919>blog: *
>> *http://blog.afkham.org* 
>> *twitter: **http://twitter.com/afkham_azeez*
>> 
>> *linked-in: **http://lk.linkedin.com/in/afkhamazeez
>> *
>>
>> *Lean . Enterprise . Middleware*
>>
>
>
>
> --
> Kasun Indrasiri
> Software Architect
> WSO2, Inc.; http://wso2.com
> lean.enterprise.middleware
>
> cell: +94 77 556 5206
> Blog : http://kasunpanorama.blogspot.com/
>



-- 
*Afkham Azeez*
Director of Architecture; WSO2, Inc.; http://wso2.com
Member; Apache Software Foundation; http://www.apache.org/
* *
*email: **az...@wso2.com* 
* cell: +94 77 3320919blog: **http://blog.afkham.org*

*twitter: **http://twitter.com/afkham_azeez*

*linked-in: **http://lk.linkedin.com/in/afkhamazeez
*

*Lean . Enterprise . Middleware*


Re: [Architecture] The new disruptor based Netty transport is not working well for MSF4J

2016-03-09 Thread Kasun Indrasiri
Yes, we can make the disruptor optional. Also, we should try using the native
worker pool for the event handler [1], so that the Disruptor itself runs the
event handlers on a worker pool. We'll implement both approaches and do a
comparison.

[1]
https://lmax-exchange.github.io/disruptor/docs/com/lmax/disruptor/dsl/Disruptor.html#handleEventsWithWorkerPool(com.lmax.disruptor.WorkHandler...)
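
The worker-pool semantics being proposed (each event claimed by exactly one
handler, rather than one handler thread seeing every event) can be sketched
with java.util.concurrent. This is an analogue for illustration, not the
Disruptor API; the real call is
disruptor.handleEventsWithWorkerPool(WorkHandler...):

```java
import java.util.concurrent.*;

// WorkHandler-style semantics: each published event is consumed by exactly
// ONE of the pooled workers, unlike a single event-handler thread that must
// process every event in sequence.
public class WorkerPoolAnalogue {
    public static int processAll(int events, int workers) throws Exception {
        BlockingQueue<Integer> ring = new ArrayBlockingQueue<>(1024);
        ExecutorService pool = Executors.newFixedThreadPool(workers);
        CountDownLatch done = new CountDownLatch(events);
        for (int w = 0; w < workers; w++) {
            pool.execute(() -> {
                try {
                    while (true) {
                        Integer e = ring.take();            // claim one event
                        if (e < 0) { ring.put(e); break; }  // poison pill: pass on, exit
                        done.countDown();                   // "handle" the event
                    }
                } catch (InterruptedException ignored) { }
            });
        }
        for (int i = 0; i < events; i++) ring.put(i); // publish events
        done.await();
        ring.put(-1); // shut the workers down
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
        return events;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(processAll(1000, 8)); // 1000
    }
}
```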

On Thu, Mar 10, 2016 at 8:48 AM, Afkham Azeez  wrote:

> After upgrading to the new transport, we are seeing a significant drop in
> performance for any service that take some time to execute. We have tried
> with the configuration used for the gateway which gave the best figures on
> the same hardware. We have also noted that using a separate dedicated
> executor thread pool, which is supported by Netty, gave much better
> performance than the disruptor based implementation. Even in theory,
> disruptor cannot give better performance when used with a real service that
> does some real work, rather than doing passthrough, for example. Can we
> improve the Netty transport to make going through disruptor optional?
>
> --
> *Afkham Azeez*
> Director of Architecture; WSO2, Inc.; http://wso2.com
> Member; Apache Software Foundation; http://www.apache.org/
> * *
> *email: **az...@wso2.com* 
> * cell: +94 77 3320919 <%2B94%2077%203320919>blog: *
> *http://blog.afkham.org* 
> *twitter: **http://twitter.com/afkham_azeez*
> 
> *linked-in: **http://lk.linkedin.com/in/afkhamazeez
> *
>
> *Lean . Enterprise . Middleware*
>



-- 
Kasun Indrasiri
Software Architect
WSO2, Inc.; http://wso2.com
lean.enterprise.middleware

cell: +94 77 556 5206
Blog : http://kasunpanorama.blogspot.com/


[Architecture] The new disruptor based Netty transport is not working well for MSF4J

2016-03-09 Thread Afkham Azeez
After upgrading to the new transport, we are seeing a significant drop in
performance for any service that takes some time to execute. We have tried
the configuration used for the gateway, which gave the best figures on the
same hardware. We have also noted that using a separate dedicated executor
thread pool, which is supported by Netty, gave much better performance than
the disruptor-based implementation. Even in theory, the disruptor cannot give
better performance with a real service that does real work rather than, say,
simple passthrough. Can we improve the Netty transport to make going through
the disruptor optional?

-- 
*Afkham Azeez*
Director of Architecture; WSO2, Inc.; http://wso2.com
Member; Apache Software Foundation; http://www.apache.org/
* *
*email: **az...@wso2.com* 
* cell: +94 77 3320919blog: **http://blog.afkham.org*

*twitter: **http://twitter.com/afkham_azeez*

*linked-in: **http://lk.linkedin.com/in/afkhamazeez
*

*Lean . Enterprise . Middleware*