On 12/09/2016 09:25 PM, Butler, Peter wrote:
> We can certainly do that for future upgrades of our customers. However we
> may need to just patch in the interim.
>
>
> Is the patch small enough (self-contained enough) that it would be easy
> enough for me to port it into our 4.4.0 kernel? Or do
Thanks - just testing it out now and so far all looks good.
From: Parthasarathy Bhuvaragan
Sent: Monday, December 12, 2016 3:20:51 AM
To: tipc-discussion@lists.sourceforge.net
Subject: Re: [tipc-discussion] reproducible link failure scenario
On 12/09/2016 09:25 PM, Butler, Peter wrote:
On 12/12/2016 03:53 PM, XIANG Haiming wrote:
> Hi Ying,
>
> I tried to convert your email ([PATCH 3.10 04/17] tipc: don't use memcpy to copy
> from user space) into the patch file 3.patch.
>
> When I run the command "patch -p0 < /root/3.patch", I get the following error:
>
> patching file net/tipc/msg.c
> Hunk #
Hi Parth,
Sorry for the late response.
As I could not find your v3 version, I am giving comments based on the
version I have.
On 11/22/2016 12:27 AM, Parthasarathy Bhuvaragan wrote:
> Commit 333f796235a527 ("tipc: fix a race condition leading to
> subscriber refcnt bug") reveals a soft lockup while acquiring nametbl_lock.
Hi Ying,
I tested with the 3 patches applied:
1/3: tipc: fix nametbl_lock soft lockup at node/link events
2/3: tipc: fix nametbl_lock soft lockup at module exit
3/3: tipc: move connection cleanup to a workqueue
In my case the soft lockup I was seeing was resolved by patch 3 ("tipc:
move connection cleanup to a workqueue").
During multicast reception we currently use a simple linked list with
push/pop semantics to store port numbers.
We now see a need for a more generic list for storing values of type
u32. We therefore make some modifications to this list, while replacing
the prefix 'tipc_plist_' with 'u32_'. We also
The functions tipc_wait_for_sndpkt() and tipc_wait_for_sndmsg() are very
similar. The latter is also called from two locations, and there will be
more in the coming commits, all of which will need to test different
conditions.
Instead of making yet another duplicate of the function, we n
The socket code currently handles link congestion either by blocking
and trying to send again when the congestion has abated, or by
returning -EAGAIN to the user and letting him retry later.
This mechanism is prone to starvation, because the wakeup algorithm is
non-atomic. During the time the
We fix a very real starvation problem that may occur when a link
encounters send buffer congestion. At the same time we make the
interaction between the socket and link layer simpler and more
consistent.
v2: - Simplified link congestion check to only check against own
importance limit. Thi