Cool. When are you guys planning to release the generalized component?
On Fri, Dec 2, 2016 at 10:57 AM, Anjana Fernando wrote:
> Hi guys,
>
> So the generalized coordination component was done by SameeraR, and the
> discussions for that can be seen at [1] and [2]. We've
Currently we are in the review process of the code, and we are making slight
adjustments to the algorithm as well. Things will probably be finalized early
next week, and then we can work on putting it in a common repo.
On Fri, Dec 2, 2016 at 11:15 AM, Asanka Abeyweera wrote:
> Cool. When
+1
On Mon, Nov 7, 2016 at 12:40 PM, Anjana Fernando wrote:
> Hi Ramith,
>
> Sure. Actually, I was talking with SameeraR to take over this and create a
> common component which has the required coordination functionality. The
> idea is to create a component, where the providers
Hi Ramith,
Sure. Actually, I was talking with SameeraR to take over this and create a
common component which has the required coordination functionality. The
idea is to create a component where the providers can be plugged in, such
as the RDBMS-based one, ZK, or any other container-specific one. This might
require some work. Shall we have a chat?
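[Editorial note: a pluggable design like the one described above could be sketched roughly as follows. This is an illustrative Python sketch only; the actual component would be Java, and every name here (`CoordinationProvider`, `InMemoryProvider`) is invented, not the real API.]

```python
from abc import ABC, abstractmethod


class CoordinationProvider(ABC):
    """Hypothetical provider SPI: an RDBMS-based, ZooKeeper-based, or
    container-specific backend would each implement this interface."""

    @abstractmethod
    def become_coordinator(self, node_id):
        """Try to claim coordinatorship; return True on success."""

    @abstractmethod
    def get_coordinator(self):
        """Return the current coordinator's node id, or None."""


class InMemoryProvider(CoordinationProvider):
    """Toy single-process provider, for illustration only."""

    def __init__(self):
        self._coordinator = None

    def become_coordinator(self, node_id):
        # First claimant wins; later claims by the same node are idempotent.
        if self._coordinator is None:
            self._coordinator = node_id
        return self._coordinator == node_id

    def get_coordinator(self):
        return self._coordinator


# Products would depend only on the SPI, never on a concrete backend.
provider = InMemoryProvider()
assert provider.become_coordinator("node-1")       # first claim wins
assert not provider.become_coordinator("node-2")   # second claim fails
```

The point of the SPI split is that MB, BPS, or analytics can all code against `CoordinationProvider` while deployments choose the backend.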
On Thu, Nov 3, 2016 at 3:52 PM, Anjana Fernando wrote:
> Ping! ..
>
> On Wed, Nov 2, 2016 at 5:03 PM, Anjana Fernando wrote:
>
>> Hi,
>>
>> On Wed, Nov 2, 2016 at 3:14 PM, Asanka Abeyweera
Ping!
On Wed, Nov 2, 2016 at 5:03 PM, Anjana Fernando wrote:
> Hi,
>
> On Wed, Nov 2, 2016 at 3:14 PM, Asanka Abeyweera
> wrote:
>
>> Hi Anjana,
>>
>> Currently, the implementation is part of the MB code (not a common
>> component).
>>
>
> Okay, can we
Hi,
On Wed, Nov 2, 2016 at 3:14 PM, Asanka Abeyweera wrote:
> Hi Anjana,
>
> Currently, the implementation is part of the MB code (not a common
> component).
>
Okay, can we please get it as a common component?
Cheers,
Anjana.
>
> On Wed, Nov 2, 2016 at 3:00 PM, Anjana
Hi Anjana,
Currently, the implementation is part of the MB code (not a common
component).
On Wed, Nov 2, 2016 at 3:00 PM, Anjana Fernando wrote:
> Hi Asanka/Ramith,
>
> So for C5 based Streaming Analytics solution, we need coordination
> functionality there as well. Is the
Hi Asanka/Ramith,
So for the C5-based Streaming Analytics solution, we need coordination
functionality there as well. Is the functionality mentioned here created as
a common component, or baked into the MB code? If the latter, can we please
get it implemented as a generic component, so other products
Hi Maninda,
Locking the database is supported by some databases, but there would be
a huge performance impact, so we cannot use that approach. If this
approach cannot be adopted, the only thing we can do is queue-wise load
balancing through the slot coordinator. But in that case we cannot
Hi Sajini,
Yes, that is what I meant. As the number of slots is proportional to the
number of messages passing through the cluster, slot delivery should not be
handled by the coordinator; having only one coordinator in the
cluster is a bottleneck for scaling the messages passing through
Hi Maninda,
On Fri, Aug 5, 2016 at 2:28 PM, Maninda Edirisooriya
wrote:
> @Sajini,
>
> But the number of slots are proportional to the number of messages pass
> through the MB which needs to be handled by the coordinator. That is what I
> meant by "information related to meta
@Sajini,
But the number of slots is proportional to the number of messages passing
through the MB, which needs to be handled by the coordinator. That is what I
meant by "information related to meta data of messages pass through a
single coordinator". Ideally, after the senders and receivers are
@Imesh,
We can prove that doing leader election using a library (where we maintain
cluster state in another place, i.e., the DB) will not solve our original
problem (this also relates to our past experience with both ZooKeeper
and Hazelcast).
We can make this implementation a common component if
Hi Maninda,
We are not using one coordinator to send and receive messages. All the
nodes in the cluster can receive and send messages to MB, and messages will
be written to the database by multiple nodes. Messages will also be read from
the database by multiple nodes. In MB we have a concept called
On Fri, Aug 5, 2016 at 12:00 PM, Hasitha Hiranya wrote:
> Hi,
>
>
> On Fri, Aug 5, 2016 at 11:31 AM, Akila Ravihansa Perera <
> raviha...@wso2.com> wrote:
>
>> Hi,
>>
>> I think the original problem here is that MB needs to absolutely
>> guarantee the integrity of the data
Hi Imesh,
On Fri, Aug 5, 2016 at 7:33 AM, Imesh Gunaratne wrote:
>
>
> On Fri, Aug 5, 2016 at 7:31 AM, Imesh Gunaratne wrote:
>>
>>
>> You can see here [3] how K8S has implemented leader election feature for
>> the products deployed on top of that to utilize.
>>
On Fri, Aug 5, 2016 at 7:31 AM, Imesh Gunaratne wrote:
>
>
> You can see here [3] how K8S has implemented leader election feature for
> the products deployed on top of that to utilize.
>
Correction: Please refer to [4].
>
>
>> On Thu, Aug 4, 2016 at 7:27 PM, Asanka Abeyweera
Leader election is currently based on Hazelcast, and things get complicated
when a network partition happens. If a node loses access to the database and
to the others in the cluster, that is comparatively safe (when nodes are not
under moderate load).
Now the problem really is in situations where
Hi Imesh,
We are not implementing this to overcome a limitation in the coordination
algorithm available in Hazelcast. We are implementing this because we
need an RDBMS-based coordination algorithm (not a network-based algorithm).
The reason is that a network-based election algorithm will always
Hi Asanka,
Do we really need to implement a leader election algorithm on our own?
AFAIU this is a complex problem which has already been solved by several
algorithms [1]. IMO it would be better to go ahead with an existing,
well-established implementation on etcd [1] or Consul [2].
Those provide
Hi Maninda,
Since we are using the RDBMS to poll the node status, the cluster will not end
up in situations 1, 2, or 3. With this approach we consider a node unreachable
when it cannot access the database. Therefore an unreachable node can never
be the leader.
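[Editorial note: the liveness rule above (a node that cannot write its heartbeat to the shared database is treated as unreachable, so it can never be leader) can be sketched as follows. The table name, timeouts, and node ids are invented for illustration.]

```python
import time

# Illustrative timings, not the real configuration values.
LIVENESS_TIMEOUT = 3.0  # seconds without a heartbeat before a node is "dead"

# Stand-in for a cluster_heartbeat table: node_id -> last heartbeat time.
heartbeats = {}


def write_heartbeat(node_id, now):
    # In the real design this would be an UPDATE against the shared RDBMS.
    # If the UPDATE fails, the node must assume *itself* unreachable and
    # step down from any coordinator role it holds.
    heartbeats[node_id] = now


def live_nodes(now):
    # Every node evaluates liveness from the same table, so all nodes that
    # can reach the DB agree on which peers are candidates for leadership.
    return {n for n, t in heartbeats.items() if now - t <= LIVENESS_TIMEOUT}


now = time.time()
write_heartbeat("node-1", now)
write_heartbeat("node-2", now - 10)  # node-2 stopped heartbeating
assert live_nodes(now) == {"node-1"}
```

This is why DB reachability, not network reachability between peers, becomes the liveness criterion in the RDBMS approach.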
As you have mentioned, we are currently
Hi Akila,
Let me explain the issue in a different way. Let's assume the MB nodes are
using two different network interfaces for Hazelcast communication and
database communication. With such a configuration, there can be failures
only in the network interface used for Hazelcast communication in
Hi,
What's the advantage of using RDBMS (even as an alternative) to implement
leader/coordinator election? If the network connection to the DB fails, then
this will be a single point of failure. I don't think we can scale RDBMS
instances and expect the election algorithm to work. That would be
+1 to make it a common component. We have the clustering implementation
for the BPEL component based on Hazelcast. If coordination is available at the
RDBMS level, we can remove the Hazelcast dependency.
Regards
Nandika
On Thu, Jul 28, 2016 at 1:28 PM, Hasitha Aravinda wrote:
> Can
Can we make it a common component that is not tightly coupled with MB? BPS
has the same requirement.
Thanks,
Hasitha.
On Thu, Jul 28, 2016 at 9:47 AM, Asanka Abeyweera wrote:
> Hi All,
>
> In MB, we have used a coordinator based approach to manage distributed
> messaging
Hi All,
In MB, we have used a coordinator-based approach to manage the distributed
messaging algorithm in the cluster. Currently, Hazelcast is used to elect
the coordinator. But one issue we faced with Hazelcast is that, during a
network segmentation (split brain), Hazelcast can elect two or more
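[Editorial note: the RDBMS-based alternative discussed elsewhere in this thread typically avoids dual coordinators by claiming the role with a single atomic conditional update, which the database serializes. Below is a generic sketch of that pattern, with all schema and names invented; it is not the MB implementation.]

```python
import threading

# A real implementation would run something like:
#   UPDATE coordinator SET node_id = :me, lease_expiry = :expiry
#   WHERE node_id = :me OR lease_expiry < :now
# and check the affected-row count. Because the database serializes the
# updates, at most one node can ever hold an unexpired lease, even during
# a network partition between the nodes themselves.


class CoordinatorRow:
    """Toy stand-in for the single coordinator row in a shared RDBMS."""

    def __init__(self):
        self._lock = threading.Lock()  # models the DB's row-level atomicity
        self.node_id = None
        self.lease_expiry = 0.0

    def try_acquire(self, me, now, lease=5.0):
        with self._lock:
            if self.node_id in (None, me) or self.lease_expiry < now:
                self.node_id = me
                self.lease_expiry = now + lease
                return True
            return False


row = CoordinatorRow()
assert row.try_acquire("node-1", now=0.0)      # claims the lease
assert not row.try_acquire("node-2", now=1.0)  # lease still held by node-1
assert row.try_acquire("node-2", now=10.0)     # expired lease is taken over
```

Contrast this with split brain under a membership-based election: here the two sides of a partition still race through one arbiter (the database row), so they cannot both win.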