Re: [tipc-discussion] TIPC link statistic

2016-11-28 Thread Holger Brunck
Hi Jon,

On 24/11/16 17:08, Jon Maloy wrote:
>> On my embedded PPC board (kernel 4.4, Server):
>>> tipc link stat show
>> Link statistics:
>>
>> Link 
>>   Window:50 packets
>>   RX packets:1 fragments:0/0 bundles:0/0
>>   TX packets:1 fragments:0/0 bundles:0/0
>>   RX naks:0 defs:0 dups:0
>>   TX naks:0 acks:0 dups:0
>>   Congestion link:0  Send queue max:0 avg:0
>>
>> Link <1.1.11:eth1-1.1.3:p4p1>
>>   STANDBY  MTU:1500  Priority:10  Tolerance:1500 ms  Window:50 packets
>>   RX packets:0 fragments:0/0 bundles:0/0
>>   TX packets:2 fragments:0/0 bundles:0/0
>>   TX profile sample:2 packets  average:60 octets
>>   0-64:100% -256:0% -1024:0% -4096:0% -16384:0% -32768:0% -66000:0%
>>   RX states:378 probes:188 naks:0 defs:0 dups:0
>>   TX states:384 probes:189 naks:0 acks:7 dups:0
>>   Congestion link:0  Send queue max:0 avg:0
>>
>> On the server side the packets don't appear at all in the link statistics,
>> but they were definitely received by the server.
>>
>> Does anyone have an idea what's wrong here?
> 
> Yes, I noticed this a while ago, but I haven't had time to look into it. It
> is clearly a bug, probably introduced by me during my last update of the
> node/link layer, and I am sure it is easy to fix. As a matter of fact, I have
> realized this is becoming urgent even for my own company, as we are planning
> to use these statistics to measure media quality.
> 
> I'll try to find time for it during the coming week. (Unless somebody is
> volunteering, of course).
> 

I saw your patch "tipc: fix link statistics counter errors". I assume it should
tackle this issue? I gave it a try with kernel 4.9.0-rc7 on my kmeter1 board,
which is a 32-bit PowerPC board. Unfortunately the counters are still wrong in
the link statistics. Received packets don't appear at all, and transmitted
packets to a remote node are accounted on the broadcast link.
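
For reference, the "server" side in a setup like this is essentially just a
connectionless TIPC socket bound to a service name. The sketch below is purely
illustrative (made-up service type/instance values, not the actual test
program), but it shows the kind of receiver involved:

#include <linux/tipc.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

#define SRV_TYPE 18888                          /* made-up service type */
#define SRV_INST 17                             /* made-up service instance */

int main(void)
{
        char buf[66000];                        /* room for a reassembled multi-fragment message */
        struct sockaddr_tipc addr;
        int sd = socket(AF_TIPC, SOCK_RDM, 0);  /* connectionless, reliable datagram */

        if (sd < 0) {
                perror("socket");
                return 1;
        }
        memset(&addr, 0, sizeof(addr));
        addr.family = AF_TIPC;
        addr.addrtype = TIPC_ADDR_NAMESEQ;      /* bind a service name (range) */
        addr.addr.nameseq.type = SRV_TYPE;
        addr.addr.nameseq.lower = SRV_INST;
        addr.addr.nameseq.upper = SRV_INST;
        addr.scope = TIPC_ZONE_SCOPE;           /* visible beyond the own node */

        if (bind(sd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
                perror("bind");
                return 1;
        }
        for (;;) {                              /* count what actually arrives */
                int n = recvfrom(sd, buf, sizeof(buf), 0, NULL, NULL);
                if (n <= 0)
                        break;
                printf("received %d bytes\n", n);
        }
        close(sd);
        return 0;
}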

Best regards
Holger



Re: [tipc-discussion] TIPC link statistic

2016-11-28 Thread Jon Maloy


> -Original Message-
> From: Holger Brunck [mailto:holger.bru...@keymile.com]
> Sent: Monday, 28 November, 2016 07:41
> To: Jon Maloy ; tipc-discussion@lists.sourceforge.net
> Subject: Re: [tipc-discussion] TIPC link statistic
> 
> Hi Jon,
> 
> On 24/11/16 17:08, Jon Maloy wrote:
> >> On my embedded PPC board (kernel 4.4, Server):
> >>> tipc link stat show
> >> Link statistics:
> >>

...

> >
> > I'll try to find time for it during the coming week. (Unless somebody is
> > volunteering of course).
> >
> 
> I saw your patch "tipc: fix link statistics counter errors". I assume it 
> should
> tackle this issue? I gave it a try with kernel 4.9.0-rc7 on my kmeter1 board
> which is a 32 bit powerpc board. Unfortunately the counters are still wrong in
> the link statistic. Received packets don't appear at all and transmitted
> packages to a remote node are accounted on the broadcast link.

I believe you are talking only about the broadcast link here? The figures for
broadcast reception are currently missing by design, i.e., they have always
been missing. We would need to scan across all broadcast reception links (by
contrast, there is only one broadcast transmission link, which makes that side
easy), accumulate all the values, and also present the figures for the
individual links. It is not a particularly big or difficult task, but it is
certainly more than the small bug corrections I just delivered. I cannot
prioritize this myself right now.
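
To illustrate the accumulation step, here is a small stand-alone user-space
sketch. It is not TIPC kernel code; the struct layout, field names and example
values are made up. It only demonstrates the idea that broadcast reception
counters live on one receive link per peer, so a single broadcast RX line would
have to be the sum over all peers:

#include <stdio.h>

struct link_stats {                      /* hypothetical per-link counters */
        unsigned int rx_pkts;
        unsigned int rx_fragments;
        unsigned int rx_bundles;
};

int main(void)
{
        /* one broadcast reception link per known peer (example values) */
        struct link_stats bc_rcv[] = {
                { .rx_pkts = 150, .rx_fragments = 1050, .rx_bundles = 0 },
                { .rx_pkts =  42, .rx_fragments =    0, .rx_bundles = 3 },
        };
        struct link_stats total = { 0, 0, 0 };
        unsigned int i, n = sizeof(bc_rcv) / sizeof(bc_rcv[0]);

        for (i = 0; i < n; i++) {        /* accumulate across all peers */
                total.rx_pkts      += bc_rcv[i].rx_pkts;
                total.rx_fragments += bc_rcv[i].rx_fragments;
                total.rx_bundles   += bc_rcv[i].rx_bundles;
        }
        printf("broadcast RX packets:%u fragments:%u bundles:%u\n",
               total.rx_pkts, total.rx_fragments, total.rx_bundles);
        return 0;
}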

BR
///jon

> 
> Best regards
> Holger



Re: [tipc-discussion] TIPC link statistic

2016-11-28 Thread Holger Brunck
Hi Jon,

On 28/11/16 14:53, Jon Maloy wrote:
>> I saw your patch "tipc: fix link statistics counter errors". I assume it should
>> tackle this issue? I gave it a try with kernel 4.9.0-rc7 on my kmeter1 board
>> which is a 32 bit powerpc board. Unfortunately the counters are still wrong in
>> the link statistic. Received packets don't appear at all and transmitted
>> packages to a remote node are accounted on the broadcast link.
> I believe you are talking only about the broadcast link here? The figures for
> broadcast reception are currently missing by design, i.e., they have always
> been missing. We would need to scan across all broadcast reception links (on
> the contrary, there is only one broadcast transmission link, which makes that
> task easy) and accumulate all values, as well as presenting the figures for
> the individual links. It is not a particularly big or difficult task, but it
> is certainly more than the small bug corrections I just delivered. I cannot
> prioritize this myself right now.

No, I am not talking about the broadcast link in particular; that was only
another thing I noticed.

I have a TIPC link between two Ethernet ports, and I send connectionless
packets from a client to a server running on the other side of the link. What I
still see is that the RX and TX counters are not increasing in the link
statistics. After sending 300 packets with a size of 10 kB I see:

Link <1.1.9:eth2-1.1.211:eth1>
  ACTIVE  MTU:1500  Priority:10  Tolerance:1500 ms  Window:50 packets
  RX packets:6 fragments:0/0 bundles:0/0
  TX packets:4 fragments:0/0 bundles:0/0
  TX profile sample:2 packets  average:60 octets
  0-64:100% -256:0% -1024:0% -4096:0% -16384:0% -32768:0% -66000:0%
  RX states:17978 probes:368 naks:0 defs:2 dups:2
  TX states:17772 probes:17386 naks:2 acks:16 dups:0
  Congestion link:0  Send queue max:0 avg:0

I just wanted to know whether this is a known bug or if there is something
wrong in my setup.
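
For completeness, the client side of such a test can be as simple as the
sketch below. This is only an illustration of the kind of connectionless
(SOCK_RDM) sender described above, not the actual test program; the service
type/instance values are made up and must match whatever the server binds to.

#include <linux/tipc.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

#define SRV_TYPE 18888                          /* made-up service type, must match the server */
#define SRV_INST 17                             /* made-up service instance */

int main(void)
{
        char buf[10000];                        /* ~10 kB payload, fragmented on a 1500-byte MTU link */
        struct sockaddr_tipc srv;
        int sd = socket(AF_TIPC, SOCK_RDM, 0);  /* connectionless, reliable datagram */
        int i;

        if (sd < 0) {
                perror("socket");
                return 1;
        }
        memset(buf, 'x', sizeof(buf));
        memset(&srv, 0, sizeof(srv));
        srv.family = AF_TIPC;
        srv.addrtype = TIPC_ADDR_NAME;          /* address the server by service name */
        srv.addr.name.name.type = SRV_TYPE;
        srv.addr.name.name.instance = SRV_INST;
        srv.addr.name.domain = 0;               /* look up the name cluster-wide */

        for (i = 0; i < 300; i++)               /* 300 datagrams, as in the test */
                if (sendto(sd, buf, sizeof(buf), 0,
                           (struct sockaddr *)&srv, sizeof(srv)) < 0)
                        perror("sendto");

        close(sd);
        return 0;
}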

Best regards
Holger



Re: [tipc-discussion] TIPC link statistic

2016-11-28 Thread Jon Maloy


> -Original Message-
> From: Holger Brunck [mailto:holger.bru...@keymile.com]
> Sent: Monday, 28 November, 2016 09:22
> To: Jon Maloy ; tipc-discussion@lists.sourceforge.net
> Subject: Re: [tipc-discussion] TIPC link statistic
> 
> Hi Jon,
> 
> On 28/11/16 14:53, Jon Maloy wrote:
> >> I saw your patch "tipc: fix link statistics counter errors". I assume it should
> >> tackle this issue? I gave it a try with kernel 4.9.0-rc7 on my kmeter1 board
> >> which is a 32 bit powerpc board. Unfortunately the counters are still wrong in
> >> the link statistic. Received packets don't appear at all and transmitted
> >> packages to a remote node are accounted on the broadcast link.
> > I believe you are talking only about the broadcast link here? The figures 
> > for
> > broadcast reception are currently missing by design, i.e., they have always
> > been missing. We would need to scan across all broadcast reception links (on
> > the contrary, there is only one broadcast transmission link, which makes 
> > that
> > task easy) and accumulate all values, as well as presenting the figures for
> > the individual links. It is not a particularly big or difficult task, but it
> > is certainly more than the small bug corrections I just delivered. I cannot
> > prioritize this myself right now.
> 
> no I am not talking about the broadcast link in particular, it was only 
> another
> thing I noticed.
> 
> I have a TIPC link between two ethernet ports and I send packets 
> connectionless
> from a client to a server running on the other side of the link.  And what I
> still see is that the RX and TX counter are not increasing in the link
> statistic. After sending 300 packets with a size of 10kB I see:
> 
> Link <1.1.9:eth2-1.1.211:eth1>
>   ACTIVE  MTU:1500  Priority:10  Tolerance:1500 ms  Window:50 packets
>   RX packets:6 fragments:0/0 bundles:0/0
>   TX packets:4 fragments:0/0 bundles:0/0
>   TX profile sample:2 packets  average:60 octets
>   0-64:100% -256:0% -1024:0% -4096:0% -16384:0% -32768:0% -66000:0%
>   RX states:17978 probes:368 naks:0 defs:2 dups:2
>   TX states:17772 probes:17386 naks:2 acks:16 dups:0
>   Congestion link:0  Send queue max:0 avg:0
> 
> I just wanted to know that this is a known bug or if there is something wrong 
> in
> my setup.
> 
> Best regards
> Holger

The explanation is simple: the patch is not applied on net-next yet, only on
net. It normally takes a few days before David re-applies fixes from net back
to net-next. Since you have checked out net-next anyway, you could try to apply
the patch yourself.

///jon




Re: [tipc-discussion] TIPC link statistic

2016-11-28 Thread Holger Brunck
On 28/11/16 15:40, Jon Maloy wrote:
> 
> 
>> -Original Message-
>> From: Holger Brunck [mailto:holger.bru...@keymile.com]
>> Sent: Monday, 28 November, 2016 09:22
>> To: Jon Maloy ; tipc-discussion@lists.sourceforge.net
>> Subject: Re: [tipc-discussion] TIPC link statistic
>>
>> Hi Jon,
>>
>> On 28/11/16 14:53, Jon Maloy wrote:
>>>> I saw your patch "tipc: fix link statistics counter errors". I assume it should
>>>> tackle this issue? I gave it a try with kernel 4.9.0-rc7 on my kmeter1 board
>>>> which is a 32 bit powerpc board. Unfortunately the counters are still wrong in
>>>> the link statistic. Received packets don't appear at all and transmitted
>>>> packages to a remote node are accounted on the broadcast link.
>>> I believe you are talking only about the broadcast link here? The figures 
>>> for
>>> broadcast reception are currently missing by design, i.e., they have always
>>> been missing. We would need to scan across all broadcast reception links (on
>>> the contrary, there is only one broadcast transmission link, which makes 
>>> that
>>> task easy) and accumulate all values, as well as presenting the figures for
>>> the individual links. It is not a particularly big or difficult task, but it
>>> is certainly more than the small bug corrections I just delivered. I cannot
>>> prioritize this myself right now.
>>
>> no I am not talking about the broadcast link in particular, it was only 
>> another
>> thing I noticed.
>>
>> I have a TIPC link between two ethernet ports and I send packets 
>> connectionless
>> from a client to a server running on the other side of the link.  And what I
>> still see is that the RX and TX counter are not increasing in the link
>> statistic. After sending 300 packets with a size of 10kB I see:
>>
>> Link <1.1.9:eth2-1.1.211:eth1>
>>   ACTIVE  MTU:1500  Priority:10  Tolerance:1500 ms  Window:50 packets
>>   RX packets:6 fragments:0/0 bundles:0/0
>>   TX packets:4 fragments:0/0 bundles:0/0
>>   TX profile sample:2 packets  average:60 octets
>>   0-64:100% -256:0% -1024:0% -4096:0% -16384:0% -32768:0% -66000:0%
>>   RX states:17978 probes:368 naks:0 defs:2 dups:2
>>   TX states:17772 probes:17386 naks:2 acks:16 dups:0
>>   Congestion link:0  Send queue max:0 avg:0
>>
>> I just wanted to know that this is a known bug or if there is something 
>> wrong in
>> my setup.
>>
>> Best regards
>> Holger
> 
> The explanation is simple: the patch is not applied on net-next yet, only on 
> net. It normally takes a few days before David re-applies fixes to net back 
> to net-next. Since you anyway checked out net-next, you could try to apply 
> the patch yourself.
> 

OK, maybe my first e-mail was not clear enough. I applied your patch on top of
4.9.0-rc7 and it does not make a difference; that's what I am trying to say. It
is still broken on my side.

Best regards
Holger




Re: [tipc-discussion] TIPC link statistic

2016-11-28 Thread Jon Maloy


> -Original Message-
> From: Holger Brunck [mailto:holger.bru...@keymile.com]
> Sent: Monday, 28 November, 2016 09:49
> To: Jon Maloy ; tipc-discussion@lists.sourceforge.net
> Subject: Re: [tipc-discussion] TIPC link statistic
> 
> On 28/11/16 15:40, Jon Maloy wrote:
> >
> >
> >> -Original Message-
> >> From: Holger Brunck [mailto:holger.bru...@keymile.com]
> >> Sent: Monday, 28 November, 2016 09:22
> >> To: Jon Maloy ; tipc-
> discuss...@lists.sourceforge.net
> >> Subject: Re: [tipc-discussion] TIPC link statistic
> >>
> >> Hi Jon,
> >>
> >> On 28/11/16 14:53, Jon Maloy wrote:
> >>>> I saw your patch "tipc: fix link statistics counter errors". I assume it should
> >>>> tackle this issue? I gave it a try with kernel 4.9.0-rc7 on my kmeter1 board
> >>>> which is a 32 bit powerpc board. Unfortunately the counters are still wrong in
> >>>> the link statistic. Received packets don't appear at all and transmitted
> >>>> packages to a remote node are accounted on the broadcast link.
> >>> I believe you are talking only about the broadcast link here? The figures 
> >>> for
> >>> broadcast reception are currently missing by design, i.e., they have 
> >>> always
> >>> been missing. We would need to scan across all broadcast reception links 
> >>> (on
> >>> the contrary, there is only one broadcast transmission link, which makes 
> >>> that
> >>> task easy) and accumulate all values, as well as presenting the figures 
> >>> for
> >>> the individual links. It is not a particularly big or difficult task, but 
> >>> it
> >>> is certainly more than the small bug corrections I just delivered. I 
> >>> cannot
> >>> prioritize this myself right now.
> >>
> >> no I am not talking about the broadcast link in particular, it was only 
> >> another
> >> thing I noticed.
> >>
> >> I have a TIPC link between two ethernet ports and I send packets
> connectionless
> >> from a client to a server running on the other side of the link.  And what 
> >> I
> >> still see is that the RX and TX counter are not increasing in the link
> >> statistic. After sending 300 packets with a size of 10kB I see:
> >>
> >> Link <1.1.9:eth2-1.1.211:eth1>
> >>   ACTIVE  MTU:1500  Priority:10  Tolerance:1500 ms  Window:50 packets
> >>   RX packets:6 fragments:0/0 bundles:0/0
> >>   TX packets:4 fragments:0/0 bundles:0/0
> >>   TX profile sample:2 packets  average:60 octets
> >>   0-64:100% -256:0% -1024:0% -4096:0% -16384:0% -32768:0% -66000:0%
> >>   RX states:17978 probes:368 naks:0 defs:2 dups:2
> >>   TX states:17772 probes:17386 naks:2 acks:16 dups:0
> >>   Congestion link:0  Send queue max:0 avg:0
> >>
> >> I just wanted to know that this is a known bug or if there is something 
> >> wrong
> in
> >> my setup.
> >>
> >> Best regards
> >> Holger
> >
> > The explanation is simple: the patch is not applied on net-next yet, only on net.
> > It normally takes a few days before David re-applies fixes to net back to net-next.
> > Since you anyway checked out net-next, you could try to apply the patch
> > yourself.
> >
> 
> ok maybe my first e-mail was not clear enough. I applied your patch on top of
> 4.9.0-rc7 and it does not make a difference, thats what I am trying to say. It
> is still broken on my side.
> 
> Best regards
> Holger

Then I have no more theories. The patch works fine in my x64 environment, and I 
see no reason it shouldn't work on PowerPC as well, since there are no 
endianness operations involved. Is the output *exactly* the same before and 
after having applied the patch?

///jon




Re: [tipc-discussion] TIPC link statistic

2016-11-28 Thread Holger Brunck
On 28/11/16 15:55, Jon Maloy wrote:
>>>
>>> The explanation is simple: the patch is not applied on net-next yet, only on net.
>>> It normally takes a few days before David re-applies fixes to net back to net-next.
>>> Since you anyway checked out net-next, you could try to apply the patch
>>> yourself.
>>>
>>
>> ok maybe my first e-mail was not clear enough. I applied your patch on top of
>> 4.9.0-rc7 and it does not make a difference, thats what I am trying to say. 
>> It
>> is still broken on my side.
>>
>> Best regards
>> Holger
> 
> Then I have no more theories. The patch works fine in my x64 environment, and 
> I see no reason it shouldn't work on PowerPC as well, since there are no 
> endianness operations involved. Is the output *exactly* the same before and 
> after having applied the patch?
> 

Hm, weird. I currently have no setup where I can double-check this in an x86
environment.

The output differs depending on whether the patch is applied or not.

Sending 150 packets with a size of 1000 bytes to the server leads to the
following link statistics on the client side:

With your patch:

Link 
  Window:50 packets
  RX packets:0 fragments:0/0 bundles:0/0
  TX packets:1050 fragments:1050/150 bundles:0/0
  RX naks:0 defs:0 dups:0
  TX naks:0 acks:0 dups:0
  Congestion link:46  Send queue max:0 avg:0

Link <1.1.4:eth2-1.1.211:eth1>
  ACTIVE  MTU:1500  Priority:10  Tolerance:1500 ms  Window:50 packets
  RX packets:5 fragments:0/0 bundles:0/0
  TX packets:4 fragments:0/0 bundles:0/0
  TX profile sample:2 packets  average:60 octets
  0-64:100% -256:0% -1024:0% -4096:0% -16384:0% -32768:0% -66000:0%
  RX states:544 probes:10 naks:0 defs:2 dups:2
  TX states:548 probes:470 naks:2 acks:66 dups:0
  Congestion link:0  Send queue max:0 avg:0


Without your patch:

Link 
  Window:50 packets
  RX packets:0 fragments:0/0 bundles:0/0
  TX packets:0 fragments:0/0 bundles:0/0
  RX naks:0 defs:0 dups:0
  TX naks:0 acks:0 dups:0
  Congestion link:49  Send queue max:0 avg:0

Link <1.1.4:eth2-1.1.211:eth1>
  ACTIVE  MTU:1500  Priority:10  Tolerance:1500 ms  Window:50 packets
  RX packets:0 fragments:0/0 bundles:0/0
  TX packets:4 fragments:0/0 bundles:0/0
  TX profile sample:2 packets  average:60 octets
  0-64:100% -256:0% -1024:0% -4096:0% -16384:0% -32768:0% -66000:0%
  RX states:397 probes:7 naks:0 defs:2 dups:2
  TX states:400 probes:325 naks:2 acks:66 dups:0
  Congestion link:0  Send queue max:0 avg:0

So in both cases no TX packets are accounted on the specific link, but with your
patch something is counted against the broadcast link.

Best regards
Holger



Re: [tipc-discussion] TIPC link statistic

2016-11-28 Thread Jon Maloy


On 11/28/2016 10:32 AM, Holger Brunck wrote:
> On 28/11/16 15:55, Jon Maloy wrote:
>>>> The explanation is simple: the patch is not applied on net-next yet, only on net.
>>>> It normally takes a few days before David re-applies fixes to net back to net-next.
>>>> Since you anyway checked out net-next, you could try to apply the patch
>>>> yourself.
>>> ok maybe my first e-mail was not clear enough. I applied your patch on top 
>>> of
>>> 4.9.0-rc7 and it does not make a difference, thats what I am trying to say. 
>>> It
>>> is still broken on my side.
>>>
>>> Best regards
>>> Holger
>> Then I have no more theories. The patch works fine in my x64 environment, 
>> and I see no reason it shouldn't work on PowerPC as well, since there are no 
>> endianness operations involved. Is the output *exactly* the same before and 
>> after having applied the patch?
>>
> hm weird, I have currently no setup were I can doublecheck this on a x86
> environment.
>
> The output differs, depending the patch is applied or not.
>
> Sending 150 packets with a size of 1000 bytes to the server leads to the
> following link statistic on the client side:
>
> With your patch:
>
> Link 
>Window:50 packets
>RX packets:0 fragments:0/0 bundles:0/0
>TX packets:1050 fragments:1050/150 bundles:0/0
>RX naks:0 defs:0 dups:0
>TX naks:0 acks:0 dups:0
>Congestion link:46  Send queue max:0 avg:0
>
> Link <1.1.4:eth2-1.1.211:eth1>
>ACTIVE  MTU:1500  Priority:10  Tolerance:1500 ms  Window:50 packets
>RX packets:5 fragments:0/0 bundles:0/0
>TX packets:4 fragments:0/0 bundles:0/0
>TX profile sample:2 packets  average:60 octets
>0-64:100% -256:0% -1024:0% -4096:0% -16384:0% -32768:0% -66000:0%
>RX states:544 probes:10 naks:0 defs:2 dups:2
>TX states:548 probes:470 naks:2 acks:66 dups:0
>Congestion link:0  Send queue max:0 avg:0
>
>
> Without your patch:
>
> Link 
>Window:50 packets
>RX packets:0 fragments:0/0 bundles:0/0
>TX packets:0 fragments:0/0 bundles:0/0
>RX naks:0 defs:0 dups:0
>TX naks:0 acks:0 dups:0
>Congestion link:49  Send queue max:0 avg:0
>
> Link <1.1.4:eth2-1.1.211:eth1>
>ACTIVE  MTU:1500  Priority:10  Tolerance:1500 ms  Window:50 packets
>RX packets:0 fragments:0/0 bundles:0/0
>TX packets:4 fragments:0/0 bundles:0/0
>TX profile sample:2 packets  average:60 octets
>0-64:100% -256:0% -1024:0% -4096:0% -16384:0% -32768:0% -66000:0%
>RX states:397 probes:7 naks:0 defs:2 dups:2
>TX states:400 probes:325 naks:2 acks:66 dups:0
>Congestion link:0  Send queue max:0 avg:0
>
> so in both cases no TX packets are accounted on the specific link, but with 
> your
> patch something is counted on the account of the broadcast link.
>
> Best regards
> Holger
>

Yes, this is really strange. It seems like the TX packets have been counted as
fragments in the broadcast link. My best theory is that there must be some
mismatch between the tool and the kernel. Do both "tipc" and "tipc-config" give
this result? Have you tried to rebuild the tool(s)?

///jon




Re: [tipc-discussion] v4.7: soft lockup when releasing a socket

2016-11-28 Thread John Thompson
Hi Partha,

I tested with the latest 3 patches last night and observed no soft lockups.

Thanks,
John


On Fri, Nov 25, 2016 at 11:50 AM, John Thompson wrote:

> Hi Partha,
>
> I rebuilt afresh and retried the test with the same lockup kernel dumps.
> Yes I have multiple tipc clients subscribed to the topology server, at
> least 10 clients.
> They all use a subscription timeout of TIPC_WAIT_FOREVER
>
> I will try the kernel command line parameter next week.
> JT
>
>
> On Fri, Nov 25, 2016 at 3:07 AM, Parthasarathy Bhuvaragan <
> parthasarathy.bhuvara...@ericsson.com> wrote:
>
>> Hi John,
>>
>> Do you have several tipc clients subscribed to topology server?
>> What subscription timeout do they use?
>>
>> Please enable kernel command line parameter:
>> softlockup_all_cpu_backtrace=1
>>
>> /Partha
>>
>> On 11/23/2016 11:04 PM, John Thompson wrote:
>>
>>> Hi Partha,
>>>
>>> I tested overnight with the 2 patches you provided yesterday.
>>> Testing is still showing problems, here is one of the soft lockups, the
>>> other is the same as I sent the other day.
>>> I am going to redo my build as I expected some change in behaviour with
>>> your patches.
>>>
>>> It is possible that I am doing some dumping of nodes or links as I am
>>> not certain of all the code or paths.
>>> I have found that we do a tipc-config -nt and tipc-config -ls in some
>>> situations but it shouldn't be initiated in this
>>> reboot case.
>>>
>>> <0>NMI watchdog: BUG: soft lockup - CPU#3 stuck for 22s! [pimd:1220]
>>> <0>NMI watchdog: BUG: soft lockup - CPU#0 stuck for 22s! [rpc.13:2419]
>>> <6>Modules linked in:
>>> <0>NMI watchdog: BUG: soft lockup - CPU#2 stuck for 22s! [AIS
>>> listener:1600]
>>> <6> tipc
>>> <6>Modules linked in:
>>> <6> jitterentropy_rng
>>> <6> tipc
>>> <6> echainiv
>>> <6> jitterentropy_rng
>>> <6> drbg
>>> <6> echainiv
>>> <6> platform_driver(O)
>>> <6> drbg
>>> <6> platform_driver(O)
>>> <6>
>>> <6>CPU: 0 PID: 2419 Comm: rpc.13 Tainted: P   O
>>> <6>CPU: 2 PID: 1600 Comm: AIS listener Tainted: P   O
>>> <6>task: aed76d20 ti: ae70c000 task.ti: ae70c000
>>> <6>task: aee3ced0 ti: ae686000 task.ti: ae686000
>>> <6>NIP: 8069257c LR: c13ebc4c CTR: 80692540
>>> <6>NIP: 80692578 LR: c13ebf50 CTR: 80692540
>>> <6>REGS: ae70dc20 TRAP: 0901   Tainted: P   O
>>> <6>REGS: ae687ad0 TRAP: 0901   Tainted: P   O
>>> <6>MSR: 00029002
>>> <6>MSR: 00029002
>>> <6><
>>> <6><
>>> <6>CE
>>> <6>CE
>>> <6>,EE
>>> <6>,EE
>>> <6>,ME
>>> <6>,ME
>>> <6>>
>>> <6>>
>>> <6>  CR: 42002484  XER: 2000
>>> <6>  CR: 48002444  XER: 
>>> <6>
>>> <6>GPR00:
>>> <6>
>>> <6>GPR00:
>>> <6>c13f3c34
>>> <6>c13ea408
>>> <6>ae70dcd0
>>> <6>ae687b80
>>> <6>aed76d20
>>> <6>aee3ced0
>>> <6>ae55c8ec
>>> <6>ae55c8ec
>>> <6>2711
>>> <6>
>>> <6>0005
>>> <6>a30e7264
>>> <6>8666592a
>>> <6>ae5e070c
>>> <6>8666592b
>>> <6>fffd
>>> <6>
>>> <6>GPR08:
>>> <6>
>>> <6>GPR08:
>>> <6>ae9dad20
>>> <6>ae72fbc8
>>> <6>0001
>>> <6>0001
>>> <6>0001
>>> <6>0001
>>> <6>
>>> <6>0004
>>> <6>80692540
>>> <6>80692540
>>> <6>
>>> <6>
>>> <6>NIP [8069257c] _raw_spin_lock_bh+0x3c/0x70
>>> <6>NIP [80692578] _raw_spin_lock_bh+0x38/0x70
>>> <6>LR [c13ebc4c] tipc_nametbl_withdraw+0x4c/0x140 [tipc]
>>> <6>LR [c13ebf50] tipc_nametbl_unsubscribe+0x50/0x120 [tipc]
>>> <6>Call Trace:
>>> <6>Call Trace:
>>> <6>[ae70dcd0] [a85d99a0] 0xa85d99a0
>>> <6>[ae687b80] [800fa258] check_object+0xc8/0x270
>>> <6> (unreliable)
>>> <6> (unreliable)
>>> <6>
>>> <6>
>>> <6>[ae70dd00] [c13f3c34] tipc_nl_node_dump_link+0x1904/0x45d0 [tipc]
>>> <6>[ae687ba0] [c13ea408] tipc_named_reinit+0xf8/0x820 [tipc]
>>> <6>
>>> <6>
>>> <6>[ae70dd30] [c13f4848] tipc_nl_node_dump_link+0x2518/0x45d0 [tipc]
>>> <6>[ae687bb0] [c13ea6c0] tipc_named_reinit+0x3b0/0x820 [tipc]
>>> <6>
>>> <6>
>>> <6>[ae70dd70] [804f29e0] sock_release+0x30/0xf0
>>> <6>[ae687bd0] [c13f7bbc] tipc_nl_publ_dump+0x50c/0xed0 [tipc]
>>> <6>
>>> <6>
>>> <6>[ae70dd80] [804f2ab4] sock_close+0x14/0x30
>>> <6>[ae687c00] [c13f865c] tipc_conn_sendmsg+0xdc/0x170 [tipc]
>>> <6>
>>> <6>
>>> <6>[ae70dd90] [80105844] __fput+0x94/0x200
>>> <6>[ae687c30] [c13eacbc] tipc_subscrp_report_overlap+0xbc/0xd0 [tipc]
>>> <6>
>>> <6>
>>> <6>[ae70ddb0] [8003dca4] task_work_run+0xd4/0x100
>>> <6>[ae687c70] [c13eb27c] tipc_topsrv_stop+0x45c/0x4f0 [tipc]
>>> <6>
>>> <6>
>>> <6>[ae70ddd0] [80023620] do_exit+0x280/0x980
>>> <6>[ae687ca0] [c13eb7a8] tipc_nametbl_remove_publ+0x58/0x110 [tipc]
>>> <6>
>>> <6>
>>> <6>[ae70de10] [80024c48] do_group_exit+0x48/0xb0
>>> <6>[ae687cd0] [c13ebc68] tipc_nametbl_withdraw+0x68/0x140 [tipc]
>>> <6>
>>> <6>
>>> <6>[ae70de30] [80030344] get_signal+0x244/0x4f0
>>> <6>[ae687d00] [c13f3c34] tipc_nl_node_dump_link+0x1904/0x45d0 [tipc]
>>> <6>
>>> <6>
>>> <6>[ae70de80] [80007734] do_signal+0x34/0x1c0
>>> <6>[ae687d30] [c13f4848] tipc_nl_node_dump_link+0x2518/0x45d0 [tipc]
>>> <6>
>>> <6>
>>> <6>[ae70df30] [800079a8] do_notify_resume+0x68/0x80
>>> <6>[ae687d70] [804f29e0] so