As a further preparation for the upcoming 'replicast' functionality,
we add some necessary structs and functions for looking up and returning
a list of all nodes that host destinations for a given multicast message.
Reviewed-by: Parthasarathy Bhuvaragan
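A minimal sketch of the kind of destination-node list such a lookup could
return (the struct, field, and function names below are illustrative
assumptions, not necessarily what the patch introduces):

    struct tipc_nlist {
            struct list_head list;  /* one entry per destination node     */
            u32 self;               /* own node address                   */
            u16 remote;             /* number of remote destination nodes */
            bool local;             /* destination sockets on this node?  */
    };

    /* hypothetical lookup: walk the name table and record every node
     * that hosts a publication matching <type, lower, upper>
     */
    void tipc_nametbl_lookup_dst_nodes(struct net *net, u32 type, u32 lower,
                                       u32 upper, struct tipc_nlist *nodes);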
TIPC multicast messages are currently distributed via L2 broadcast
or IP multicast to all nodes in the cluster, irrespective of the
number of real destinations of the message.
In this series we introduce an option to transport messages via
replication ("replicast") across a selected number of
As a preparation for the 'replicast' functionality we are going to
introduce in the next commits, we need the broadcast base structure to
store whether bearer broadcast is available at all from the currently
used bearer or bearers.
We do this by adding a new function tipc_bearer_bcast_support()
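A rough sketch of how such a helper could look (the RCU-protected
bearer_list lookup and the broadcast flag are assumptions based on the
description above):

    bool tipc_bearer_bcast_support(struct net *net, u32 bearer_id)
    {
            struct tipc_bearer *b;
            bool supp = false;

            rcu_read_lock();
            b = rcu_dereference(tipc_net(net)->bearer_list[bearer_id]);
            if (b)
                    supp = !!b->bcast_addr.broadcast;  /* assumed flag */
            rcu_read_unlock();
            return supp;
    }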
If the bearer carrying multicast messages supports broadcast, those
messages will be sent to all cluster nodes, irrespective of whether
these nodes host any actual destination sockets or not. This is clearly
wasteful if the cluster is large and there are only a few real
destinations for the
Until now, the subscribers keep track of the subscriptions using a
reference count at the subscriber level. At subscription cancel or
subscriber delete, we delete the subscription by checking for a
pending timer using del_timer(). del_timer() is not SMP safe: if
on CPU0 the check for a pending timer returns
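The hazard can be sketched as follows (simplified, not the actual TIPC
code):

    /* Deciding ownership of the final kfree() from del_timer()'s
     * return value is fragile: del_timer() only removes a *pending*
     * timer and does not wait for a handler that has already started
     * on another CPU.
     */
    if (del_timer(&sub->timer))     /* CPU0: timer still pending here  */
            kfree(sub);             /* but CPU1 may re-arm it or still */
                                    /* be inside the handler           */

    /* del_timer_sync() does wait for a running handler, but deadlocks
     * if called under a lock that the handler itself takes.
     */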
In tipc_server_stop(), we iterate over the connections with the
server's idr_in_use as the limiting factor, ignoring the fact that
this variable is decremented in tipc_close_conn(). This leads to a
premature exit. In this commit, we iterate until we have no
connections left.
Acked-by: Ying Xue
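The loop described above, condensed into a sketch (the names s->idr_lock,
s->idr_in_use, s->conn_idr and tipc_close_conn() follow the description;
the exact upstream code may differ):

    int id;
    struct tipc_conn *con;

    spin_lock_bh(&s->idr_lock);
    for (id = 0; s->idr_in_use; id++) {
            con = idr_find(&s->conn_idr, id);
            if (con) {
                    spin_unlock_bh(&s->idr_lock);
                    tipc_close_conn(con);   /* drops idr_in_use by one */
                    spin_lock_bh(&s->idr_lock);
            }
    }
    spin_unlock_bh(&s->idr_lock);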
Commit 333f796235a527 ("tipc: fix a race condition leading to
subscriber refcnt bug") reveals a soft lockup while acquiring
nametbl_lock.
Before commit 333f796235a527, we call tipc_conn_shutdown() from
tipc_close_conn() in the context of tipc_topsrv_stop(). In that
context, we are allowed to grab
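That calling context, shown schematically:

    tipc_topsrv_stop()
      tipc_close_conn()
        tipc_conn_shutdown()
          /* grabbing nametbl_lock is safe in this context */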
In tipc_conn_sendmsg(), we queue the request to the outqueue first
and check the connection state only afterwards. If the connection is
not connected, we should not queue the message at all.
In this commit, we reject messages if the connection state is not
CF_CONNECTED.
Acked-by: Ying Xue
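A sketch of the reordered logic (the queueing details are illustrative;
CF_CONNECTED follows the description above):

    /* check the state first, then queue -- not the other way around */
    if (!test_bit(CF_CONNECTED, &con->flags))
            return -EPIPE;          /* reject instead of queueing */

    spin_lock_bh(&con->outqueue_lock);
    list_add_tail(&e->list, &con->outqueue);
    spin_unlock_bh(&con->outqueue_lock);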
We trigger a soft lockup as we grab nametbl_lock twice if the node
has a pending node up/down or link up/down event while:
- we process an incoming named message in tipc_named_rcv() and
perform a tipc_update_nametbl() (see the call chain sketched below).
- we have pending backlog items in the name distributor queue
during a
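The first of these paths, shown schematically (condensed from the
description; the truncated second bullet follows the same pattern):

    tipc_named_rcv()
      spin_lock_bh(&tn->nametbl_lock)          /* first acquisition  */
        tipc_update_nametbl()
          ... pending node/link up/down event ...
            spin_lock_bh(&tn->nametbl_lock)    /* second acquisition */
                                               /* -> soft lockup     */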
In this series, we revert the commit 333f796235a527 ("tipc: fix a
race condition leading to subscriber refcnt bug") and provide an
alternate solution to fix the race conditions in commits 2-4.
We have to do this as the above commit introduced a nametbl soft
lockup at module exit as described by
Hi John,
Thank you for the testing.
I think your suggestion is reasonable, but we need to identify the exact
scenario. Regarding the following message: after one thread has decreased an
object's refcnt to zero, another thread tries to increment that refcnt, which
means that we have a race
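One common way to close such a window is to take a reference only while
the count is still non-zero, e.g. with a kref (a sketch; the actual fix
discussed in this thread may differ):

    if (!kref_get_unless_zero(&sub->kref))
            return NULL;    /* refcnt already hit zero: object is dying */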