I trace when we yield and whether we send a message to an unconnected
connection in tipc_receive_from_sock().
The traces are specified in http://pastebin.com/9YWgvrKF.
The test program I used to trigger this fault is http://pastebin.com/LNFJsnM9.
I ran it on a guest with 4 CPUs, and it crashes with this trace.
Until now, the requests sent to the topology server have been queued
to a workqueue by the generic server framework.
These messages are processed by worker threads and trigger the
registered callbacks.
To reduce latency on uniprocessor systems, explicit rescheduling
is performed using cond_resched() after MAX_RECV_MSG_COUNT messages.
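
For illustration, a minimal userspace sketch of that batching pattern;
the message source, the MAX_RECV_MSG_COUNT value and sched_yield()
(standing in for the kernel's cond_resched()) are assumptions, not the
actual TIPC server code:

#include <sched.h>
#include <stdio.h>

#define MAX_RECV_MSG_COUNT 25	/* assumed batch size */

/* Hypothetical stand-in for the framework's receive path. */
static int recv_next_msg(void)
{
	static int pending = 100;
	return pending-- > 0;
}

static void process_msg(void)
{
	/* a registered callback would run here */
}

int main(void)
{
	int count = 0;

	while (recv_next_msg()) {
		process_msg();
		/* Give up the CPU at batch boundaries so other tasks
		 * can run; sched_yield() models cond_resched() here. */
		if (++count % MAX_RECV_MSG_COUNT == 0)
			sched_yield();
	}
	printf("processed %d messages\n", count);
	return 0;
}

The point is only that the worker yields periodically instead of
monopolizing a uniprocessor while the queue drains.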
I should be able to do more testing.
I do not know for sure whether the mappings were missing before the reboot.
If I had restarted the applications, the mappings would have been there
before the reboot.
I do know that they are definitely missing after the reboot. That is how
I first discovered it, namely by
When a link is down, it will continuously try to re-establish contact
with the peer by sending out a RESET or an ACTIVATE message at each
timeout interval. The default value for this interval is currently
375 ms. This is wasteful, and may become a problem in very large
clusters with dozens or hundreds of nodes.
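
One common remedy, sketched here under assumptions (the doubling and
the cap are invented, and this is not necessarily the change TIPC
made), is to back off the probe interval while the peer stays silent:

#include <stdio.h>

#define RESET_TIMEOUT_MS 375	/* base interval from the text above */
#define MAX_BACKOFF	 32	/* assumed cap on the slow-down factor */

int main(void)
{
	unsigned int factor = 1;
	int attempt;

	/* While the peer stays silent, stretch the interval between
	 * RESET/ACTIVATE probes; a reply would restore factor = 1
	 * (not shown). */
	for (attempt = 1; attempt <= 8; attempt++) {
		printf("attempt %d: probe sent, next in %u ms\n",
		       attempt, RESET_TIMEOUT_MS * factor);
		if (factor < MAX_BACKOFF)
			factor *= 2;
	}
	return 0;
}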
When running TIPC in large clusters we experience behavior that
may potentially become problematic in the future. This series
picks some low-hanging fruit in this regard, and also fixes a
couple of other minor issues.
Jon Maloy (3):
tipc: eliminate buffer leak in bearer layer
tipc: stricter filtering of packets in bearer layer
When enabling a bearer we create a 'neighbor discoverer' instance by
calling the function tipc_disc_create() before the bearer is actually
registered in the list of enabled bearers. Because of this, the very
first discovery broadcast message, created by the mentioned function,
is lost, since it cannot be sent out by a bearer that is not yet registered.
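
A toy model of the ordering problem (all names are hypothetical, this
is not the kernel code): sending discovery before the bearer is
registered drops the message, while registering first lets it through:

#include <stdio.h>

struct bearer {
	int registered;
};

static void register_bearer(struct bearer *b)
{
	b->registered = 1;
}

static void send_discovery(const struct bearer *b)
{
	if (!b->registered) {
		/* this is the lost first broadcast */
		printf("discovery message dropped: bearer not registered\n");
		return;
	}
	printf("discovery message sent\n");
}

int main(void)
{
	struct bearer b = { 0 };

	send_discovery(&b);	/* buggy order: message is lost */
	register_bearer(&b);	/* fixed order: register first ... */
	send_discovery(&b);	/* ... then discovery succeeds */
	return 0;
}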
Resetting a bearer/interface, with the consequence of resetting all its
pertaining links, is not an atomic action. This becomes particularly
evident in very large clusters, where a lot of traffic may happen on the
remaining links while we are busy shutting them down. In extreme cases,
we may even see links being re-established by arriving traffic before
the reset is complete.
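
The stricter-filtering idea can be sketched as a flag checked on every
receive; struct bearer and bearer_rcv() below are hypothetical
simplifications, not the kernel objects:

#include <stdbool.h>
#include <stdio.h>

struct bearer {
	bool up;
};

static void bearer_rcv(const struct bearer *b, int pkt)
{
	/* Drop all traffic once shutdown has begun, so that late
	 * packets cannot resurrect links that are being reset. */
	if (!b->up) {
		printf("pkt %d dropped: bearer going down\n", pkt);
		return;
	}
	printf("pkt %d delivered\n", pkt);
}

int main(void)
{
	struct bearer b = { .up = true };

	bearer_rcv(&b, 1);	/* delivered normally */
	b.up = false;		/* reset begins: mark bearer down first */
	bearer_rcv(&b, 2);	/* straggler is now filtered out */
	return 0;
}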
> -----Original Message-----
> From: Rune Torgersen [mailto:ru...@innovsys.com]
> Sent: Tuesday, 05 April, 2016 12:12
> To: Jon Maloy; 'Jon Maloy'; tipc-discussion@lists.sourceforge.net
> Cc: erik.hu...@gmail.com; Richard Alpe; Parthasarathy Bhuvaragan; Xue Ying
> (ying.x...@gmail.com); Ying Xue
As a complement to the previous commit, we must also disallow received
regular traffic messages from changing the state of the link endpoint.
Such a state change can now only happen upon reception of ACTIVATE
or STATE messages, in the unlikely worst case after several attempts.
This change is fully backwards compatible.
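
A compact model of that rule, with a deliberately simplified
three-state FSM (the real link FSM has more states and events): only
ACTIVATE/STATE messages may establish the link, while regular traffic
leaves the state untouched:

#include <stdio.h>

enum link_state { LINK_RESET, LINK_ESTABLISHING, LINK_ESTABLISHED };
enum msg_type	{ MSG_TRAFFIC, MSG_ACTIVATE, MSG_STATE };

/* Only protocol messages may drive the link FSM forward. */
static enum link_state link_fsm(enum link_state st, enum msg_type mt)
{
	if (st != LINK_ESTABLISHED &&
	    (mt == MSG_ACTIVATE || mt == MSG_STATE))
		return LINK_ESTABLISHED;
	return st;	/* MSG_TRAFFIC never changes the state */
}

int main(void)
{
	enum link_state st = LINK_ESTABLISHING;

	st = link_fsm(st, MSG_TRAFFIC);		/* no effect */
	printf("after traffic:  %d\n", st);
	st = link_fsm(st, MSG_ACTIVATE);	/* establishes the link */
	printf("after activate: %d\n", st);
	return 0;
}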
Since each link endpoint sends out a RESET message when it is
going down, it is possible for it to come back up after reboot/interface
cycling by just receiving an ACTIVATE message and continuing by sending
regular traffic. However, after such an event, the local link endpoint
may have been reset.
In some link establishment scenarios we see that packet #2 may be sent
out before packet #1, forcing the receiver to demand retransmission of
the missing packet. This is harmless, but may cause confusion among
people tracing the packet flow.
Since this is extremely easy to fix, we do so by ensuring that the
first packets on the link are sent out in order.
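
Seen from the receiver, the effect is just a sequence-number gap; a
minimal sketch (simplified, with no deferred-delivery queue):

#include <stdio.h>

int main(void)
{
	unsigned int expected = 1;
	unsigned int rx[] = { 2, 1 };	/* packet #2 arrives before #1 */
	size_t i;

	for (i = 0; i < sizeof(rx) / sizeof(rx[0]); i++) {
		if (rx[i] == expected) {
			printf("seq %u: in order\n", rx[i]);
			expected++;
		} else {
			printf("seq %u: gap, demanding retransmit of %u\n",
			       rx[i], expected);
		}
	}
	return 0;
}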
We fix a couple of minor issues regarding link establishing.
Jon Maloy (3):
tipc: ensure that first packets on link are sent in order
tipc: guarantee peer bearer id exchange when links go up
tipc: don't allow regular traffic messages to establish link
net/tipc/link.c | 12 ++--
net
When running TIPC in large clusters we experience behavior that
may potentially become problematic in the future. This series
picks some low-hanging fruit in this regard, and also fixes a
couple of other minor issues.
v2: Corrected typos in commit #3, as per feedback from S. Shtylyov
Jon Maloy (3):