On 04/22/2016 05:25 PM, Jon Maloy wrote:
>
>> -----Original Message-----
>> From: Parthasarathy Bhuvaragan
>> Sent: Friday, 22 April, 2016 09:01
>> To: Jon Maloy; tipc-discussion@lists.sourceforge.net; Ying Xue; Richard Alpe
>> Cc: ma...@donjonn.com
>> Subject: Re: [PATCH net-next v2 1/1] tipc: add neighbor monitoring framework
>>
>>
>>
>> On 04/20/2016 03:24 PM, Jon Maloy wrote:
>>>> -----Original Message-----
>>>> From: Parthasarathy Bhuvaragan
>>>> Sent: Tuesday, 19 April, 2016 09:53
>>>> To: Jon Maloy; tipc-discussion@lists.sourceforge.net; Ying Xue; Richard 
>>>> Alpe
>>>> Cc: ma...@donjonn.com
>>>> Subject: Re: [PATCH net-next v2 1/1] tipc: add neighbor monitoring framework
>>>>
>>>>
>>>> On 04/18/2016 04:20 PM, Jon Maloy wrote:
>>>>>> -----Original Message-----
>>>>>> From: Parthasarathy Bhuvaragan
>>>>>> Sent: Monday, 18 April, 2016 09:30
>>>>>> To: Jon Maloy; tipc-discussion@lists.sourceforge.net; Ying Xue; Richard 
>>>>>> Alpe
>>>>>> Cc: ma...@donjonn.com
>>>>>> Subject: Re: [PATCH net-next v2 1/1] tipc: add neighbor monitoring framework
>>>>>> Hi Jon,
>>>>>>
>>>>> [...]
>>>>>
>>>>>>> +       u16 dom_gen;
>>>>>>> +       bool disabled;
>>>>>> mon_breakpoint is defined in struct tipc_net, whereas the configuration
>>>>>> option disabled is defined in struct tipc_monitor and tied to
>>>>>> a bearer.
>>>>>> I think this should belong to struct tipc_net, as all the bearers have
>>>>>> the same supervision algorithm.
>>>>> No.  Each monitor list belongs to a bearer, and is disabled (and re-enabled)
>>>>> along with that bearer.
>>>>> It is fully normal to disable one bearer and monitor list, while leaving the
>>>>> other one active.
>>>> [...]
>>>> I understand that the monitor list belongs to the bearer and is in sync 
>>>> with
>>>> the state of the bearer.
>>>> Should this setting be user configurable?
>>>> Is it allowed for the monitor state to be out of sync w.r.t bearer state?
>>> No, it isn't, and I don't think that can happen with the current 
>>> implementation.
>>> That's why I added the ability to disable the monitor for a given bearer.
>> Can you clarify the following for me:
>> The new model used in the monitor is:
>> tipc_net->monitor[bearers]->peers/nodes.
>>
>> whereas for links it is:
>> tipc_net->node_list->links[bearers].
>>
>> Why did you choose to use the former model?
> I assume you mean I could have something like 
> monitor->node->peers[bearer_id], and have only one monitor instance?
> There are several reasons.  
> First, I found it much simpler to implement by keeping the monitor instances 
> completely separate, since I don't need to consider the case that the two 
> planes see different nodes in the algorithms.
> If a node is visible on one plane, and not on the other, there would be a 
> NULL pointer in that entry (or a "disabled" boolean or whatever). This hole 
> in the sequence would be difficult to handle; at least I would not be able to 
> use the very memory-efficient and algorithmically simple bitmaps I have now 
> (the "up_map" and the "head_map"), and I don't even want to think about what 
> the algorithm would look like to handle such a structure.
> Second, it is more memory efficient, which counts when we are talking about 
> hundreds of nodes. We need only one pointer per monitor instance, and we 
> allocate an instance and its peer list only when it is really needed. Most 
> users still use only one bearer, which often makes more sense in a cloud 
> environment.
> Third, this list is circular, not linear as in the node list case. Unlike 
> the node list, the peer list has no "root" struct that is not itself a 
> node/peer, only a "self" pointer to be used as a starting point, and 
> sometimes as a traversal point, for iterations. It feels natural to have 
> this pointer (of unknown type outside monitor.c) in struct monitor rather 
> than in struct tipc_net.
> I can even think of more good reasons, but I think this is sufficient to make 
> my point.
Thanks for the detailed explanation.
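For my own notes, the chosen layout is then roughly this (simplified sketch;
field names only as far as I can tell from the patch):

   struct tipc_net {
           ...
           struct tipc_monitor *monitors[MAX_BEARERS]; /* one instance per bearer/plane */
   };

   struct tipc_monitor {
           struct hlist_head peers[NODE_HTABLE_SIZE];  /* addr -> peer hash lookup */
           struct tipc_peer *self;                     /* entry into the circular,
                                                        * address-sorted peer list */
           ...
   };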
>
>> I understand that since monitor is always configured after we iterate the
>> node_list and derive the bearers, this structure is efficient and avoids
>> extra lookups. But are there any additional advantages?
>>
>> I wanted to get a consistent configuration flow, and have now decided to let
>> node terminate the netlink requests and call helpers provided by monitor to
>> respond. This flow is similar to how link configurations are terminated at
>> node, which uses helpers from link to respond.
>>
>> After discussing with Richard, we concluded to keep monitor under link and
>> treat it as a link feature rather than as a separate sub-command.
>> From the user's perspective the ring monitoring is an operation performed for
>> link supervision. So we place it under link as:
>> tipc link monitor set <min_activation_threshold>
>> tipc link monitor list
> I was also hesitant about having it as a separate sub-command, but was rather 
> thinking of adding it under "bearer".  But this is also ok.
> I assume the latter command would be:
> tipc link monitor [bearer_id] list ?
The command will be:
tipc link monitor list [media MEDIA device DEVICE]
This is consistent with the existing commands, which identify a bearer by media and device, e.g.:
tipc bearer get tolerance media eth device data0
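
So a typical session on a node with an Ethernet bearer on data0 would look
something like this (option names and output format are of course still open;
32 here just stands in for <min_activation_threshold>, i.e. the cluster size
below which the extra domain records are not sent, if I map it correctly to
mon_breakpoint in the patch):

  # tipc link monitor set 32
  # tipc link monitor list media eth device data0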

regards
Partha
>
> ///jon
>
>> regards
>> Partha
>>>
>>>> I know that from a protocol perspective this implementation is fully
>>>> backward compatible, but is this a valid use-case?
>>>>
>>>> For the monitor list, which attributes are of interest?
>>> All of them.  This is what my debug printout on node 5 in a 33-node system
>>> would look like.
>>> Apr 20 09:15:12 xenial1 kernel: [77635.916755] ============================================
>>> Apr 20 09:15:12 xenial1 kernel: [77635.920025] Monitor list for 1001005/0: 33 peers, ngen/dgen 64/7
>>> Apr 20 09:15:12 xenial1 kernel: [77635.920859]  5/5: up 1, hd 1, lc 0, mbrs 2/2, cmf 0, hdm 0
>>> Apr 20 09:15:12 xenial1 kernel: [77635.924025]        DOMAIN: mbrs: 2, dgen 7 [1001006,1] [1001007,1]
>>> Apr 20 09:15:12 xenial1 kernel: [77635.925275]     6/6: up 1, hd 0, lc 1, mbrs 2/2, cmf 0, hdm 1
>>> Apr 20 09:15:12 xenial1 kernel: [77635.928027]        DOMAIN: mbrs: 2, dgen 8 [1001007,1] [1001008,1]
>>> Apr 20 09:15:12 xenial1 kernel: [77635.929129]     7/7: up 1, hd 0, lc 1, mbrs 3/3, cmf 0, hdm 3
>>> Apr 20 09:15:12 xenial1 kernel: [77635.932025]        DOMAIN: mbrs: 3, dgen 10 [1001008,1] [1001009,1] [100100a,1]
>>> Apr 20 09:15:12 xenial1 kernel: [77635.933435]   8/8: up 1, hd 1, lc 0, mbrs 3/3, cmf 0, hdm 3
>>> Apr 20 09:15:12 xenial1 kernel: [77635.936030]        DOMAIN: mbrs: 3, dgen 11 [1001009,1] [100100a,1] [100100b,1]
>>> Apr 20 09:15:12 xenial1 kernel: [77635.937793]     9/9: up 1, hd 0, lc 0, mbrs 3/3, cmf 0, hdm 3
>>> Apr 20 09:15:12 xenial1 kernel: [77635.940020]        DOMAIN: mbrs: 3, dgen 12 [100100a,1] [100100b,1] [100100c,1]
>>> Apr 20 09:15:12 xenial1 kernel: [77635.944019]     10/a: up 1, hd 0, lc 0, mbrs 3/3, cmf 0, hdm 7
>>> Apr 20 09:15:12 xenial1 kernel: [77635.944420]        DOMAIN: mbrs: 3, dgen 13 [100100b,1] [100100c,1] [100100d,1]
>>> Apr 20 09:15:12 xenial1 kernel: [77635.948033]     11/b: up 1, hd 0, lc 0, mbrs 3/3, cmf 0, hdm 7
>>> Apr 20 09:15:12 xenial1 kernel: [77635.948512]        DOMAIN: mbrs: 3, dgen 14 [100100c,1] [100100d,1] [100100e,1]
>>> Apr 20 09:15:12 xenial1 kernel: [77635.952031]   12/c: up 1, hd 1, lc 0, mbrs 3/3, cmf 0, hdm 7
>>> Apr 20 09:15:12 xenial1 kernel: [77635.952725]        DOMAIN: mbrs: 3, dgen 15 [100100d,1] [100100e,1] [100100f,1]
>>> Apr 20 09:15:12 xenial1 kernel: [77635.956026]     13/d: up 1, hd 0, lc 0, mbrs 4/4, cmf 0, hdm 7
>>> Apr 20 09:15:12 xenial1 kernel: [77635.956826]        DOMAIN: mbrs: 4, dgen 17 [100100e,1] [100100f,1] [1001010,1]
>>> Apr 20 09:15:12 xenial1 kernel: [77635.960027]        [1001011,1]
>>> Apr 20 09:15:12 xenial1 kernel: [77635.960867]     14/e: up 1, hd 0, lc 0, mbrs 4/4, cmf 0, hdm 7
>>> Apr 20 09:15:12 xenial1 kernel: [77635.964026]        DOMAIN: mbrs: 4, dgen 18 [100100f,1] [1001010,1] [1001011,1]
>>> Apr 20 09:15:12 xenial1 kernel: [77635.964332]        [1001012,1]
>>> Apr 20 09:15:12 xenial1 kernel: [77635.966473]     15/f: up 1, hd 0, lc 0, mbrs 4/4, cmf 0, hdm 7
>>> Apr 20 09:15:12 xenial1 kernel: [77635.968035]        DOMAIN: mbrs: 4, dgen 19 [1001010,1] [1001011,1] [1001012,1]
>>> Apr 20 09:15:12 xenial1 kernel: [77635.972021]        [1001013,1]
>>> Apr 20 09:15:12 xenial1 kernel: [77635.972119]   16/10: up 1, hd 1, lc 0, mbrs 4/4, cmf 0, hdm 7
>>> Apr 20 09:15:12 xenial1 kernel: [77635.974605]        DOMAIN: mbrs: 4, dgen 20 [1001011,1] [1001012,1] [1001013,1]
>>> Apr 20 09:15:12 xenial1 kernel: [77635.976047]        [1001014,1]
>>> Apr 20 09:15:12 xenial1 kernel: [77635.977867]     17/11: up 1, hd 0, lc 0, mbrs 4/4, cmf 0, hdm f
>>> Apr 20 09:15:12 xenial1 kernel: [77635.980020]        DOMAIN: mbrs: 4, dgen 21 [1001012,1] [1001013,1] [1001014,1]
>>> Apr 20 09:15:12 xenial1 kernel: [77635.981359]        [1001015,1]
>>> Apr 20 09:15:12 xenial1 kernel: [77635.984022]     18/12: up 1, hd 0, lc 0, mbrs 4/4, cmf 0, hdm f
>>> Apr 20 09:15:12 xenial1 kernel: [77635.984851]        DOMAIN: mbrs: 4, dgen 22 [1001013,1] [1001014,1] [1001015,1]
>>> Apr 20 09:15:12 xenial1 kernel: [77635.988020]        [1001016,1]
>>> Apr 20 09:15:12 xenial1 kernel: [77635.988848]     19/13: up 1, hd 0, lc 0, mbrs 4/4, cmf 0, hdm f
>>> Apr 20 09:15:12 xenial1 kernel: [77635.992023]        DOMAIN: mbrs: 4, dgen 23 [1001014,1] [1001015,1] [1001016,1]
>>> Apr 20 09:15:12 xenial1 kernel: [77635.992312]        [1001017,1]
>>> Apr 20 09:15:12 xenial1 kernel: [77635.994359]     20/14: up 1, hd 0, lc 0, mbrs 4/4, cmf 0, hdm f
>>> Apr 20 09:15:12 xenial1 kernel: [77635.996015]        DOMAIN: mbrs: 4, dgen 24 [1001015,1] [1001016,1] [1001017,1]
>>> Apr 20 09:15:12 xenial1 kernel: [77635.997785]        [1001018,1]
>>> Apr 20 09:15:12 xenial1 kernel: [77636.000707]   21/15: up 1, hd 1, lc 0, mbrs 5/5, cmf 0, hdm f
>>> Apr 20 09:15:12 xenial1 kernel: [77636.001417]        DOMAIN: mbrs: 5, dgen 26 [1001016,1] [1001017,1] [1001018,1]
>>> Apr 20 09:15:12 xenial1 kernel: [77636.004058]        [1001019,1] [100101a,1]
>>> Apr 20 09:15:12 xenial1 kernel: [77636.005586]     22/16: up 1, hd 0, lc 0, mbrs 5/5, cmf 0, hdm f
>>> Apr 20 09:15:12 xenial1 kernel: [77636.008019]        DOMAIN: mbrs: 5, dgen 27 [1001017,1] [1001018,1] [1001019,1]
>>> Apr 20 09:15:12 xenial1 kernel: [77636.009443]        [100101a,1] [100101b,1]
>>> Apr 20 09:15:12 xenial1 kernel: [77636.012046]     23/17: up 1, hd 0, lc 0, mbrs 5/5, cmf 0, hdm f
>>> Apr 20 09:15:12 xenial1 kernel: [77636.013243]        DOMAIN: mbrs: 5, dgen 28 [1001018,1] [1001019,1] [100101a,1]
>>> Apr 20 09:15:12 xenial1 kernel: [77636.016021]        [100101b,1] [100101c,1]
>>> Apr 20 09:15:12 xenial1 kernel: [77636.017392]     24/18: up 1, hd 0, lc 0, mbrs 5/5, cmf 0, hdm f
>>> Apr 20 09:15:12 xenial1 kernel: [77636.020020]        DOMAIN: mbrs: 5, dgen 29 [1001019,1] [100101a,1] [100101b,1]
>>> Apr 20 09:15:12 xenial1 kernel: [77636.021024]        [100101c,1] [100101d,1]
>>> Apr 20 09:15:12 xenial1 kernel: [77636.024019]     25/19: up 1, hd 0, lc 0, mbrs 5/5, cmf 0, hdm f
>>> Apr 20 09:15:12 xenial1 kernel: [77636.024896]        DOMAIN: mbrs: 5, dgen 30 [100101a,1] [100101b,1] [100101c,1]
>>> Apr 20 09:15:12 xenial1 kernel: [77636.028038]        [100101d,1] [100101e,1]
>>> Apr 20 09:15:12 xenial1 kernel: [77636.029077]     26/1a: up 1, hd 0, lc 0, mbrs 5/5, cmf 0, hdm 1f
>>> Apr 20 09:15:12 xenial1 kernel: [77636.032036]        DOMAIN: mbrs: 5, dgen 31 [100101b,1] [100101c,1] [100101d,1]
>>> Apr 20 09:15:12 xenial1 kernel: [77636.032787]        [100101e,1] [100101f,1]
>>> Apr 20 09:15:12 xenial1 kernel: [77636.036025]   27/1b: up 1, hd 1, lc 0, mbrs 5/5, cmf 0, hdm 1f
>>> Apr 20 09:15:12 xenial1 kernel: [77636.036635]        DOMAIN: mbrs: 5, dgen 32 [100101c,1] [100101d,1] [100101e,1]
>>> Apr 20 09:15:12 xenial1 kernel: [77636.040042]        [100101f,1] [1001020,1]
>>> Apr 20 09:15:12 xenial1 kernel: [77636.040809]     28/1c: up 1, hd 0, lc 0, mbrs 5/5, cmf 0, hdm 1f
>>> Apr 20 09:15:12 xenial1 kernel: [77636.044019]        DOMAIN: mbrs: 5, dgen 32 [100101d,1] [100101e,1] [100101f,1]
>>> Apr 20 09:15:12 xenial1 kernel: [77636.044431]        [1001020,1] [1001021,1]
>>> Apr 20 09:15:12 xenial1 kernel: [77636.048025]     29/1d: up 1, hd 0, lc 0, mbrs 5/5, cmf 0, hdm 1f
>>> Apr 20 09:15:12 xenial1 kernel: [77636.048240]        DOMAIN: mbrs: 5, dgen 32 [100101e,1] [100101f,1] [1001020,1]
>>> Apr 20 09:15:12 xenial1 kernel: [77636.052033]        [1001021,1] [1001001,1]
>>> Apr 20 09:15:12 xenial1 kernel: [77636.052294]     30/1e: up 1, hd 0, lc 0, mbrs 5/5, cmf 0, hdm 1f
>>> Apr 20 09:15:12 xenial1 kernel: [77636.053967]        DOMAIN: mbrs: 5, dgen 32 [100101f,1] [1001020,1] [1001021,1]
>>> Apr 20 09:15:12 xenial1 kernel: [77636.056034]        [1001001,1] [1001002,1]
>>> Apr 20 09:15:12 xenial1 kernel: [77636.058145]     31/1f: up 1, hd 0, lc 0, mbrs 5/5, cmf 0, hdm 1f
>>> Apr 20 09:15:12 xenial1 kernel: [77636.060033]        DOMAIN: mbrs: 5, dgen 32 [1001020,1] [1001021,1] [1001001,1]
>>> Apr 20 09:15:12 xenial1 kernel: [77636.061804]        [1001002,1] [1001003,1]
>>> Apr 20 09:15:12 xenial1 kernel: [77636.064017]     32/20: up 1, hd 0, lc 0, mbrs 5/5, cmf 0, hdm 1f
>>> Apr 20 09:15:12 xenial1 kernel: [77636.065687]        DOMAIN: mbrs: 5, dgen 32 [1001021,1] [1001001,1] [1001002,1]
>>> Apr 20 09:15:12 xenial1 kernel: [77636.068023]        [1001003,1] [1001004,1]
>>> Apr 20 09:15:12 xenial1 kernel: [77636.069756]   33/21: up 1, hd 1, lc 0, mbrs 5/4, cmf 0, hdm 1f
>>> Apr 20 09:15:12 xenial1 kernel: [77636.072015]        DOMAIN: mbrs: 5, dgen 32 [1001001,1] [1001002,1] [1001003,1]
>>> Apr 20 09:15:12 xenial1 kernel: [77636.073427]        [1001004,1] [1001005,1]
>>> Apr 20 09:15:12 xenial1 kernel: [77636.076014]     1/1: up 1, hd 0, lc 0, mbrs 1/1, cmf 0, hdm 1f
>>> Apr 20 09:15:12 xenial1 kernel: [77636.077442]        DOMAIN: mbrs: 1, dgen 2 [1001002,1]
>>> Apr 20 09:15:12 xenial1 kernel: [77636.080038]     2/2: up 1, hd 0, lc 0, mbrs 1/1, cmf 0, hdm 1f
>>> Apr 20 09:15:12 xenial1 kernel: [77636.081203]        DOMAIN: mbrs: 1, dgen 3 [1001003,1]
>>> Apr 20 09:15:12 xenial1 kernel: [77636.084015]     3/3: up 1, hd 0, lc 0, mbrs 2/1, cmf 0, hdm 1d
>>> Apr 20 09:15:12 xenial1 kernel: [77636.084936]        DOMAIN: mbrs: 2, dgen 5 [1001004,1] [1001005,1]
>>> Apr 20 09:15:12 xenial1 kernel: [77636.088019]     4/4: up 1, hd 0, lc 0, mbrs 2/0, cmf 0, hdm 19
>>> Apr 20 09:15:12 xenial1 kernel: [77636.088814]        DOMAIN: mbrs: 2, dgen 6 [1001005,1] [1001006,1]
>>> Apr 20 09:15:12 xenial1 kernel: [77636.092041] ============================================
>>> You can certainly find more compact formats for the flags, and possibly make
>>> the domain contents optional with a switch, but it should be possible to
>>> fully verify the correctness of the list in case of doubt.
>>>> The attributes are spread out between link and monitor, and some are cached.
>>>> My idea was to do something like:
>>>> tipc monitor list [bearer <name>]
>>> Yes, that looks good.
>>>
>>> ///jon
>>>
>>>> regards
>>>> Partha
>>>>> ///jon
>>>>>
>>>>>
>>>>>>> +       struct net *net;
>>>>>>> +};
>>>>>>> +
>>>>>>> +static struct tipc_monitor *tipc_monitor(struct net *net, int bearer_id)
>>>>>>> +{
>>>>>>> +       return tipc_net(net)->monitors[bearer_id];
>>>>>>> +}
>>>>>>> +
>>>>>>> +const int tipc_max_domain_size = sizeof(struct tipc_mon_domain);
>>>>>>> +
>>>>>>> +/* dom_rec_len(): actual size of domain record for transport
>>>>>>> + */
>>>>>>> +static int dom_rec_len(struct tipc_mon_domain *dom, u16 mcnt)
>>>>>>> +{
>>>>>>> +       return ((void *)&dom->members - (void *)dom) + (mcnt * sizeof(u32));
>>>>>>> +}
>>>>>>> +
>>>>>>> +/* dom_size() : calculate size of own domain based on number of peers
>>>>>>> + */
>>>>>>> +static int dom_size(int peers)
>>>>>>> +{
>>>>>>> +       int i = 0;
>>>>>>> +
>>>>>>> +       while ((i * i) < peers)
>>>>>>> +               i++;
>>>>>>> +       return i < TIPC_MAX_MON_DOMAIN ? i : TIPC_MAX_MON_DOMAIN;
>>>>>>> +}
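
Just to check my understanding of dom_size(): it returns the smallest i with
i * i >= peers (capped at TIPC_MAX_MON_DOMAIN), so the actively monitored
domain grows with the square root of the cluster size. E.g.:

   peers = 33:  5 * 5 = 25 < 33 <= 36 = 6 * 6  =>  dom_size() returns 6
   member_cnt = dom_size() - 1 = 5

which seems to match the "mbrs 5/5" entries in the 33-node printout above.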
>>>>>>> +
>>>>>>> +static void map_set(u64 *up_map, int i, unsigned int v)
>>>>>>> +{
>>>>>>> +       *up_map &= ~(1ULL << i);
>>>>>>> +       *up_map |= ((u64)v << i);
>>>>>>> +}
>>>>>>> +
>>>>>>> +static int map_get(u64 up_map, int i)
>>>>>>> +{
>>>>>>> +       return (up_map & (1ULL << i)) >> i;
>>>>>>> +}
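
And the bitmap encoding, if I read it right: bit i of up_map mirrors the
up/down state of members[i] in the same domain record. E.g.:

   member_cnt = 3, members = {1001006, 1001007, 1001008}, up_map = 0x5 (binary 101)
   =>  the sender sees 1001006 and 1001008 as up, and 1001007 as down

(the member addresses above are just examples borrowed from the printout).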
>>>>>>> +
>>>>>>> +static struct tipc_peer *peer_prev(struct tipc_peer *peer)
>>>>>>> +{
>>>>>>> +       return list_last_entry(&peer->list, struct tipc_peer, list);
>>>>>>> +}
>>>>>>> +
>>>>>>> +static struct tipc_peer *peer_nxt(struct tipc_peer *peer)
>>>>>>> +{
>>>>>>> +       return list_first_entry(&peer->list, struct tipc_peer, list);
>>>>>>> +}
>>>>>>> +
>>>>>>> +static struct tipc_peer *peer_head(struct tipc_peer *peer)
>>>>>>> +{
>>>>>>> +       while (!peer->is_head)
>>>>>>> +               peer = peer_prev(peer);
>>>>>>> +       return peer;
>>>>>>> +}
>>>>>>> +
>>>>>>> +static struct tipc_peer *get_peer(struct tipc_monitor *mon, u32 addr)
>>>>>>> +{
>>>>>>> +       struct tipc_peer *peer;
>>>>>>> +       unsigned int thash = tipc_hashfn(addr);
>>>>>>> +
>>>>>>> +       hlist_for_each_entry(peer, &mon->peers[thash], hash) {
>>>>>>> +               if (peer->addr == addr)
>>>>>>> +                       return peer;
>>>>>>> +       }
>>>>>>> +       return NULL;
>>>>>>> +}
>>>>>>> +
>>>>>>> +static struct tipc_peer *get_self(struct net *net, int bearer_id)
>>>>>>> +{
>>>>>>> +       struct tipc_monitor *mon = tipc_monitor(net, bearer_id);
>>>>>>> +
>>>>>>> +       return mon->self;
>>>>>>> +}
>>>>>>> +
>>>>>>> +/* mon_match_domain() : match a peer's domain record against monitor list
>>>>>>> + */
>>>>>>> +static void mon_match_domain(struct tipc_monitor *mon,
>>>>>>> +                            struct tipc_peer *peer)
>>>>>>> +{
>>>>>>> +       struct tipc_mon_domain *dom = peer->domain;
>>>>>>> +       struct tipc_peer *member;
>>>>>>> +       u64 prev_map;
>>>>>>> +       u32 addr;
>>>>>>> +       int up, i;
>>>>>>> +
>>>>>>> +       if (!dom || !peer->is_up)
>>>>>>> +               return;
>>>>>>> +
>>>>>>> +       /* Scan across domain members and match against monitor list */
>>>>>>> +       peer->monitoring = 0;
>>>>>>> +       member = peer_nxt(peer);
>>>>>>> +       for (i = 0; i < dom->member_cnt; i++) {
>>>>>>> +               addr = dom->members[i];
>>>>>>> +               if (addr != member->addr)
>>>>>>> +                       return;
>>>>>>> +               if (addr == tipc_own_addr(mon->net))
>>>>>>> +                       return;
>>>>>>> +               peer->monitoring++;
>>>>>>> +               prev_map = member->head_map;
>>>>>>> +
>>>>>>> +               /* Set peer's up/down status for this member in its head map */
>>>>>>> +               up = map_get(dom->up_map, i);
>>>>>>> +               map_set(&member->head_map, i, up);
>>>>>>> +
>>>>>>> +               /* Start confirmation probing if status went up -> down */
>>>>>>> +               if (member->is_up && !up && (member->head_map != prev_map))
>>>>>>> +                       member->confirm = true;
>>>>>>> +               member = peer_nxt(member);
>>>>>>> +       }
>>>>>>> +}
>>>>>>> +
>>>>>>> +/* mon_update_local_domain() : update after peer addition/removal/up/down
>>>>>>> + */
>>>>>>> +static void mon_update_local_domain(struct tipc_monitor *mon)
>>>>>>> +{
>>>>>>> +       struct tipc_peer *self = mon->self;
>>>>>>> +       struct tipc_mon_domain *cache = &mon->cache;
>>>>>>> +       struct tipc_mon_domain *dom = self->domain;
>>>>>>> +       struct tipc_peer *peer = self;
>>>>>>> +       int member_cnt, i;
>>>>>>> +
>>>>>>> +       /* Update local domain size based on current size of cluster */
>>>>>>> +       member_cnt = dom_size(mon->peer_cnt) - 1;
>>>>>>> +       self->monitoring = member_cnt;
>>>>>>> +
>>>>>>> +       /* Update native and cached outgoing local domain records */
>>>>>>> +       dom->len = dom_rec_len(dom, member_cnt);
>>>>>>> +       dom->gen = ++mon->dom_gen;
>>>>>>> +       dom->member_cnt = member_cnt;
>>>>>>> +       for (i = 0; i < member_cnt; i++) {
>>>>>>> +               peer = peer_nxt(peer);
>>>>>>> +               dom->members[i] = peer->addr;
>>>>>>> +               map_set(&dom->up_map, i, peer->is_up);
>>>>>>> +               cache->members[i] = htonl(peer->addr);
>>>>>>> +       }
>>>>>>> +       cache->len = htons(dom->len);
>>>>>>> +       cache->gen = htons(dom->gen);
>>>>>>> +       cache->member_cnt = htons(member_cnt);
>>>>>>> +       cache->up_map = cpu_to_be64(dom->up_map);
>>>>>>> +       mon_match_domain(mon, self);
>>>>>>> +}
>>>>>>> +
>>>>>>> +/* mon_update_neighbors() : update neighbors around an added/removed peer
>>>>>>> + */
>>>>>>> +static void mon_update_neighbors(struct tipc_monitor *mon,
>>>>>>> +                                struct tipc_peer *peer)
>>>>>>> +{
>>>>>>> +       int dz, i;
>>>>>>> +
>>>>>>> +       dz = dom_size(mon->peer_cnt);
>>>>>>> +       for (i = 0; i < dz; i++) {
>>>>>>> +               peer->head_map = 0;
>>>>>>> +               peer = peer_nxt(peer);
>>>>>>> +       }
>>>>>>> +       for (i = 0; i < (dz * 2); i++) {
>>>>>>> +               mon_match_domain(mon, peer);
>>>>>>> +               peer = peer_prev(peer);
>>>>>>> +       }
>>>>>>> +}
>>>>>>> +
>>>>>>> +/* mon_assign_roles() : reassign peer roles after a network change
>>>>>>> + * The monitor list is consistent at this stage; i.e., each peer is
>>>>>>> + * monitoring a set of domain members as matched between domain record
>>>>>>> + * and the monitor list
>>>>>>> + */
>>>>>>> +static void mon_assign_roles(struct tipc_monitor *mon, struct tipc_peer *head)
>>>>>>> +{
>>>>>>> +       struct tipc_peer *peer = peer_nxt(head);
>>>>>>> +       struct tipc_peer *self = mon->self;
>>>>>>> +       int i = 0;
>>>>>>> +
>>>>>>> +       for (; peer != self; peer = peer_nxt(peer)) {
>>>>>>> +               peer->is_local = false;
>>>>>>> +
>>>>>>> +               /* Update domain member */
>>>>>>> +               if (i++ < head->monitoring) {
>>>>>>> +                       peer->is_head = false;
>>>>>>> +                       if (head == self)
>>>>>>> +                               peer->is_local = true;
>>>>>>> +                       continue;
>>>>>>> +               }
>>>>>>> +               /* Assign next domain head */
>>>>>>> +               if (!peer->is_up)
>>>>>>> +                       continue;
>>>>>>> +               if (peer->is_head)
>>>>>>> +                       break;
>>>>>>> +               head = peer;
>>>>>>> +               head->is_head = true;
>>>>>>> +               i = 0;
>>>>>>> +       }
>>>>>>> +       mon->list_gen++;
>>>>>>> +}
>>>>>>> +
>>>>>>> +void tipc_mon_remove_peer(struct net *net, u32 addr, int bearer_id)
>>>>>>> +{
>>>>>>> +       struct tipc_monitor *mon = tipc_monitor(net, bearer_id);
>>>>>>> +       struct tipc_peer *self = get_self(net, bearer_id);
>>>>>>> +       struct tipc_peer *peer, *prev, *head;
>>>>>>> +
>>>>>>> +       write_lock_bh(&mon->lock);
>>>>>>> +       peer = get_peer(mon, addr);
>>>>>>> +       if (!peer)
>>>>>>> +               goto exit;
>>>>>>> +       prev = peer_prev(peer);
>>>>>>> +       list_del(&peer->list);
>>>>>>> +       hlist_del(&peer->hash);
>>>>>>> +       kfree(peer->domain);
>>>>>>> +       kfree(peer);
>>>>>>> +       mon->peer_cnt--;
>>>>>>> +       head = peer_head(prev);
>>>>>>> +       if (head == self)
>>>>>>> +               mon_update_local_domain(mon);
>>>>>>> +       mon_update_neighbors(mon, prev);
>>>>>>> +       mon_assign_roles(mon, head);
>>>>>>> +exit:
>>>>>>> +       write_unlock_bh(&mon->lock);
>>>>>>> +}
>>>>>>> +
>>>>>>> +static bool tipc_mon_add_peer(struct tipc_monitor *mon, u32 addr,
>>>>>>> +                             struct tipc_peer **peer)
>>>>>>> +{
>>>>>>> +       struct tipc_peer *self = mon->self;
>>>>>>> +       struct tipc_peer *cur, *prev, *p;
>>>>>>> +
>>>>>>> +       p = kzalloc(sizeof(*p), GFP_ATOMIC);
>>>>>>> +       *peer = p;
>>>>>>> +       if (!p)
>>>>>>> +               return false;
>>>>>>> +       p->addr = addr;
>>>>>>> +
>>>>>>> +       /* Add new peer to lookup list */
>>>>>>> +       INIT_LIST_HEAD(&p->list);
>>>>>>> +       hlist_add_head(&p->hash, &mon->peers[tipc_hashfn(addr)]);
>>>>>>> +
>>>>>>> +       /* Sort new peer into iterator list, in ascending circular order */
>>>>>>> +       prev = self;
>>>>>>> +       list_for_each_entry(cur, &self->list, list) {
>>>>>>> +               if ((addr > prev->addr) && (addr < cur->addr))
>>>>>>> +                       break;
>>>>>>> +               if (((addr < cur->addr) || (addr > prev->addr)) &&
>>>>>>> +                   (prev->addr > cur->addr))
>>>>>>> +                       break;
>>>>>>> +               prev = cur;
>>>>>>> +       }
>>>>>>> +       list_add_tail(&p->list, &cur->list);
>>>>>>> +       mon->peer_cnt++;
>>>>>>> +       mon_update_neighbors(mon, p);
>>>>>>> +       return true;
>>>>>>> +}
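
The two break conditions in the insertion loop took me a moment; if I read
them right, the first one handles a plain in-between insert and the second one
the wrap-around point of the circle. With own address 5 (toy addresses, not
real TIPC node addresses):

   list:    5 -> 7 -> 9 -> 2 -> 3 -> (back to 5)
   add 8:   first test hits at prev=7/cur=9       =>  5, 7, 8, 9, 2, 3
   add 1:   wrap test hits at prev=9/cur=2        =>  5, 7, 9, 1, 2, 3
   add 10:  wrap test also hits at prev=9/cur=2   =>  5, 7, 9, 10, 2, 3

i.e. the list stays in ascending circular order, as the comment says.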
>>>>>>> +
>>>>>>> +void tipc_mon_peer_up(struct net *net, u32 addr, int bearer_id)
>>>>>>> +{
>>>>>>> +       struct tipc_monitor *mon = tipc_monitor(net, bearer_id);
>>>>>>> +       struct tipc_peer *self = get_self(net, bearer_id);
>>>>>>> +       struct tipc_peer *peer, *head;
>>>>>>> +
>>>>>>> +       write_lock_bh(&mon->lock);
>>>>>>> +       peer = get_peer(mon, addr);
>>>>>>> +       if (!peer && !tipc_mon_add_peer(mon, addr, &peer))
>>>>>>> +               goto exit;
>>>>>>> +       peer->is_up = true;
>>>>>>> +       head = peer_head(peer);
>>>>>>> +       if (head == self)
>>>>>>> +               mon_update_local_domain(mon);
>>>>>>> +       mon_assign_roles(mon, head);
>>>>>>> +exit:
>>>>>>> +       write_unlock_bh(&mon->lock);
>>>>>>> +}
>>>>>>> +
>>>>>>> +void tipc_mon_peer_down(struct net *net, u32 addr, int bearer_id)
>>>>>>> +{
>>>>>>> +       struct tipc_monitor *mon = tipc_monitor(net, bearer_id);
>>>>>>> +       struct tipc_peer *self = get_self(net, bearer_id);
>>>>>>> +       struct tipc_peer *peer, *member, *head;
>>>>>>> +       int i = 0;
>>>>>>> +
>>>>>>> +       write_lock_bh(&mon->lock);
>>>>>>> +       peer = get_peer(mon, addr);
>>>>>>> +       if (!peer) {
>>>>>>> +               pr_warn("Mon: unknown link %x/%u DOWN\n", addr, bearer_id);
>>>>>>> +               goto exit;
>>>>>>> +       }
>>>>>>> +       /* Update domain members' head_map field */
>>>>>>> +       if (peer->domain) {
>>>>>>> +               peer->domain->up_map = 0;
>>>>>>> +               mon_match_domain(mon, peer);
>>>>>>> +       }
>>>>>>> +       /* Suppress member probing if peer was not domain head */
>>>>>>> +       member = peer_nxt(peer);
>>>>>>> +       while (!peer->is_head && (i++ < peer->monitoring)) {
>>>>>>> +               member->confirm = false;
>>>>>>> +               member = peer_nxt(member);
>>>>>>> +       }
>>>>>>> +       peer->is_up = false;
>>>>>>> +       peer->is_head = false;
>>>>>>> +       peer->is_local = false;
>>>>>>> +       peer->confirm = false;
>>>>>>> +       peer->monitoring = 0;
>>>>>>> +       kfree(peer->domain);
>>>>>>> +       peer->domain = NULL;
>>>>>>> +       head = peer_head(peer);
>>>>>>> +       if (head == self)
>>>>>>> +               mon_update_local_domain(mon);
>>>>>>> +       mon_assign_roles(mon, head);
>>>>>>> +exit:
>>>>>>> +       write_unlock_bh(&mon->lock);
>>>>>>> +}
>>>>>>> +
>>>>>>> +/* tipc_mon_rcv - process monitor domain event message
>>>>>>> + */
>>>>>>> +void tipc_mon_rcv(struct net *net, void *data, u16 dlen, u32 addr,
>>>>>>> +                 struct tipc_mon_state *state, int bearer_id)
>>>>>>> +{
>>>>>>> +       struct tipc_monitor *mon = tipc_monitor(net, bearer_id);
>>>>>>> +       struct tipc_mon_domain *ndom = data;
>>>>>>> +       u16 nmember_cnt = ntohs(ndom->member_cnt);
>>>>>>> +       int ndlen = dom_rec_len(ndom, nmember_cnt);
>>>>>>> +       u16 ndgen = ntohs(ndom->gen);
>>>>>>> +       struct tipc_mon_domain *dom;
>>>>>>> +       struct tipc_peer *peer;
>>>>>>> +       int i;
>>>>>>> +
>>>>>>> +       if (!dlen)
>>>>>>> +               return;
>>>>>>> +
>>>>>>> +       if ((dlen != ntohs(ndom->len)) || (dlen != ndlen)) {
>>>>>>> +               pr_warn_ratelimited("Received illegal domain record");
>>>>>>> +               return;
>>>>>>> +       }
>>>>>>> +       state->ack_gen = ntohs(ndom->ack_gen);
>>>>>>> +
>>>>>>> +       /* Ignore if this generation already received */
>>>>>>> +       if (!more(ndgen, state->peer_gen) && !state->probed)
>>>>>>> +               return;
>>>>>>> +       state->probed = 0;
>>>>>>> +
>>>>>>> +       write_lock_bh(&mon->lock);
>>>>>>> +       peer = get_peer(mon, addr);
>>>>>>> +       if (!peer)
>>>>>>> +               goto exit;
>>>>>>> +       if (!more(ndgen, state->peer_gen))
>>>>>>> +               goto exit;
>>>>>>> +       state->peer_gen = ndgen;
>>>>>>> +       if (!peer->is_up)
>>>>>>> +               goto exit;
>>>>>>> +
>>>>>>> +       /* Transform and store received domain record */
>>>>>>> +       dom = peer->domain;
>>>>>>> +       if (!dom || (dom->len < ndlen)) {
>>>>>>> +               kfree(dom);
>>>>>>> +               dom = kmalloc(ndlen, GFP_ATOMIC);
>>>>>>> +               peer->domain = dom;
>>>>>>> +               if (!dom)
>>>>>>> +                       goto exit;
>>>>>>> +       }
>>>>>>> +       dom->len = ndlen;
>>>>>>> +       dom->gen = ndgen;
>>>>>>> +       dom->member_cnt = nmember_cnt;
>>>>>>> +       dom->up_map = be64_to_cpu(ndom->up_map);
>>>>>>> +       for (i = 0; i < nmember_cnt; i++)
>>>>>>> +               dom->members[i] = ntohl(ndom->members[i]);
>>>>>>> +
>>>>>>> +       /* Update peers affected by this domain record */
>>>>>>> +       mon_match_domain(mon, peer);
>>>>>>> +       peer->confirm = 0;
>>>>>>> +       mon_assign_roles(mon, peer_head(peer));
>>>>>>> +exit:
>>>>>>> +       write_unlock_bh(&mon->lock);
>>>>>>> +}
>>>>>>> +
>>>>>>> +void tipc_mon_prep(struct net *net, void *data, int *dlen,
>>>>>>> +                  struct tipc_mon_state *state, int bearer_id)
>>>>>>> +{
>>>>>>> +       struct tipc_monitor *mon = tipc_monitor(net, bearer_id);
>>>>>>> +       struct tipc_mon_domain *dom = data;
>>>>>>> +       u16 gen = state->gen;
>>>>>>> +
>>>>>>> +       if (mon->peer_cnt < tipc_net(net)->mon_breakpoint) {
>>>>>>> +               *dlen = 0;
>>>>>>> +               return;
>>>>>>> +       }
>>>>>>> +       if (!less(state->ack_gen, gen) || mon->disabled) {
>>>>>>> +               *dlen = dom_rec_len(dom, 0);
>>>>>>> +               dom->len = htons(dom_rec_len(dom, 0));
>>>>>>> +               dom->gen = htons(gen);
>>>>>>> +               dom->ack_gen = htons(state->peer_gen);
>>>>>>> +               dom->member_cnt = 0;
>>>>>>> +               return;
>>>>>>> +       }
>>>>>>> +       read_lock_bh(&mon->lock);
>>>>>>> +       *dlen = ntohs(mon->cache.len);
>>>>>>> +       memcpy(data, &mon->cache, *dlen);
>>>>>>> +       read_unlock_bh(&mon->lock);
>>>>>>> +       dom->ack_gen = htons(state->peer_gen);
>>>>>>> +}
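
If I follow tipc_mon_prep()/tipc_mon_rcv() correctly, the record exchange is a
simple generation/ack handshake piggy-backed on the link state messages,
roughly like this (made-up generation numbers):

   A -> B:  {gen 42, ack_gen 17, 5 members}   A's domain changed, full record sent
   B -> A:  {gen 18, ack_gen 42, 0 members}   B acks gen 42, nothing new on B's side
   A -> B:  {gen 42, ack_gen 18, 0 members}   A sees its own gen acked, stops resending members

so the full cached record is only repeated until the peer's ack_gen has caught
up with the sender's gen.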
>>>>>>> +
>>>>>>> +bool tipc_mon_is_probed(struct net *net, u32 addr,
>>>>>>> +                       struct tipc_mon_state *state,
>>>>>>> +                       int bearer_id)
>>>>>>> +{
>>>>>>> +       struct tipc_monitor *mon = tipc_monitor(net, bearer_id);
>>>>>>> +       struct tipc_peer *peer;
>>>>>>> +
>>>>>>> +       if (mon->disabled)
>>>>>>> +               return false;
>>>>>>> +
>>>>>>> +       if (!state->probed &&
>>>>>>> +           !less(state->list_gen, mon->list_gen) &&
>>>>>>> +           !less(state->ack_gen, state->gen))
>>>>>>> +               return false;
>>>>>>> +
>>>>>>> +       read_lock_bh(&mon->lock);
>>>>>>> +       peer = get_peer(mon, addr);
>>>>>>> +       if (peer) {
>>>>>>> +               state->probed = less(state->gen, mon->dom_gen);
>>>>>>> +               state->probed |= less(state->ack_gen, state->gen);
>>>>>>> +               state->probed |= peer->confirm;
>>>>>>> +               peer->confirm = 0;
>>>>>>> +               state->monitored = peer->is_local;
>>>>>>> +               state->monitored |= peer->is_head;
>>>>>>> +               state->monitored |= !peer->head_map;
>>>>>>> +               state->list_gen = mon->list_gen;
>>>>>>> +               state->gen = mon->dom_gen;
>>>>>>> +       }
>>>>>>> +       read_unlock_bh(&mon->lock);
>>>>>>> +       return state->probed || state->monitored;
>>>>>>> +}
>>>>>>> +
>>>>>>> +int tipc_mon_create(struct net *net, int bearer_id)
>>>>>>> +{
>>>>>>> +       struct tipc_net *tn = tipc_net(net);
>>>>>>> +       struct tipc_monitor *mon;
>>>>>>> +       struct tipc_peer *self;
>>>>>>> +       struct tipc_mon_domain *dom;
>>>>>>> +
>>>>>>> +       if (tn->monitors[bearer_id])
>>>>>>> +               return 0;
>>>>>>> +
>>>>>>> +       mon = kzalloc(sizeof(*mon), GFP_ATOMIC);
>>>>>>> +       self = kzalloc(sizeof(*self), GFP_ATOMIC);
>>>>>>> +       dom = kzalloc(sizeof(*dom), GFP_ATOMIC);
>>>>>>> +       if (!mon || !self || !dom) {
>>>>>>> +               kfree(mon);
>>>>>>> +               kfree(self);
>>>>>>> +               kfree(dom);
>>>>>>> +               return -ENOMEM;
>>>>>>> +       }
>>>>>>> +       tn->monitors[bearer_id] = mon;
>>>>>>> +       rwlock_init(&mon->lock);
>>>>>>> +       mon->net = net;
>>>>>>> +       mon->peer_cnt = 1;
>>>>>>> +       mon->self = self;
>>>>>>> +       self->domain = dom;
>>>>>>> +       self->addr = tipc_own_addr(net);
>>>>>>> +       self->is_up = true;
>>>>>>> +       self->is_head = true;
>>>>>>> +       INIT_LIST_HEAD(&self->list);
>>>>>>> +       return 0;
>>>>>>> +}
>>>>>>> +
>>>>>>> +void tipc_mon_disable(struct net *net, int bearer_id)
>>>>>>> +{
>>>>>>> +       tipc_monitor(net, bearer_id)->disabled = true;
>>>>>>> +}
>>>>>>> +
>>>>>>> +void tipc_mon_delete(struct net *net, int bearer_id)
>>>>>>> +{
>>>>>>> +       struct tipc_net *tn = tipc_net(net);
>>>>>>> +       struct tipc_monitor *mon = tipc_monitor(net, bearer_id);
>>>>>>> +       struct tipc_peer *self = get_self(net, bearer_id);
>>>>>>> +       struct tipc_peer *peer, *tmp;
>>>>>>> +
>>>>>>> +       write_lock_bh(&mon->lock);
>>>>>>> +       tn->monitors[bearer_id] = NULL;
>>>>>>> +       list_for_each_entry_safe(peer, tmp, &self->list, list) {
>>>>>>> +               list_del(&peer->list);
>>>>>>> +               hlist_del(&peer->hash);
>>>>>>> +               kfree(peer->domain);
>>>>>>> +               kfree(peer);
>>>>>>> +       }
>>>>>>> +       kfree(self->domain);
>>>>>>> +       kfree(self);
>>>>>>> +       write_unlock_bh(&mon->lock);
>>>>>>> +       tn->monitors[bearer_id] = NULL;
>>>>>>> +       kfree(mon);
>>>>>>> +}
>>>>>>> diff --git a/net/tipc/monitor.h b/net/tipc/monitor.h
>>>>>>> new file mode 100644
>>>>>>> index 0000000..7a25541
>>>>>>> --- /dev/null
>>>>>>> +++ b/net/tipc/monitor.h
>>>>>>> @@ -0,0 +1,72 @@
>>>>>>> +/*
>>>>>>> + * net/tipc/monitor.h
>>>>>>> + *
>>>>>>> + * Copyright (c) 2015, Ericsson AB
>>>>>>> + * All rights reserved.
>>>>>>> + *
>>>>>>> + * Redistribution and use in source and binary forms, with or without
>>>>>>> + * modification, are permitted provided that the following conditions are met:
>>>>>>> + *
>>>>>>> + * 1. Redistributions of source code must retain the above copyright
>>>>>>> + *    notice, this list of conditions and the following disclaimer.
>>>>>>> + * 2. Redistributions in binary form must reproduce the above copyright
>>>>>>> + *    notice, this list of conditions and the following disclaimer in the
>>>>>>> + *    documentation and/or other materials provided with the distribution.
>>>>>>> + * 3. Neither the names of the copyright holders nor the names of its
>>>>>>> + *    contributors may be used to endorse or promote products derived from
>>>>>>> + *    this software without specific prior written permission.
>>>>>>> + *
>>>>>>> + * Alternatively, this software may be distributed under the terms of the
>>>>>>> + * GNU General Public License ("GPL") version 2 as published by the Free
>>>>>>> + * Software Foundation.
>>>>>>> + *
>>>>>>> + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
>>>>>>> + * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
>>>>>>> + * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
>>>>>>> + * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
>>>>>>> + * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
>>>>>>> + * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
>>>>>>> + * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
>>>>>>> + * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
>>>>>>> + * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
>>>>>>> + * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
>>>>>>> + * POSSIBILITY OF SUCH DAMAGE.
>>>>>>> + */
>>>>>>> +
>>>>>>> +#ifndef _TIPC_MONITOR_H
>>>>>>> +#define _TIPC_MONITOR_H
>>>>>>> +
>>>>>>> +/* struct tipc_mon_state: link instance's cache of monitor list and domain state
>>>>>>> + * @list_gen: current generation of this node's monitor list
>>>>>>> + * @gen: current generation of this node's local domain
>>>>>>> + * @peer_gen: most recent domain generation received from peer
>>>>>>> + * @ack_gen: most recent generation of self's domain acked by peer
>>>>>>> + * @monitored: peer endpoint should be continuously monitored
>>>>>>> + * @probed: peer endpoint should be temporarily probed for potential loss
>>>>>>> + */
>>>>>>> +struct tipc_mon_state {
>>>>>>> +       u16 list_gen;
>>>>>>> +       u16 gen;
>>>>>>> +       u16 peer_gen;
>>>>>>> +       u16 ack_gen;
>>>>>>> +       bool monitored;
>>>>>>> +       bool probed;
>>>>>>> +};
>>>>>>> +
>>>>>>> +int tipc_mon_create(struct net *net, int bearer_id);
>>>>>>> +void tipc_mon_disable(struct net *net, int bearer_id);
>>>>>>> +void tipc_mon_delete(struct net *net, int bearer_id);
>>>>>>> +
>>>>>>> +void tipc_mon_peer_up(struct net *net, u32 addr, int bearer_id);
>>>>>>> +void tipc_mon_peer_down(struct net *net, u32 addr, int bearer_id);
>>>>>>> +void tipc_mon_prep(struct net *net, void *data, int *dlen,
>>>>>>> +                  struct tipc_mon_state *state, int bearer_id);
>>>>>>> +void tipc_mon_rcv(struct net *net, void *data, u16 dlen, u32 addr,
>>>>>>> +                 struct tipc_mon_state *state, int bearer_id);
>>>>>>> +bool tipc_mon_is_probed(struct net *net, u32 addr,
>>>>>>> +                       struct tipc_mon_state *state,
>>>>>>> +                       int bearer_id);
>>>>>>> +void tipc_mon_remove_peer(struct net *net, u32 addr, int bearer_id);
>>>>>>> +
>>>>>>> +extern const int tipc_max_domain_size;
>>>>>>> +#endif
>>>>>>> diff --git a/net/tipc/node.c b/net/tipc/node.c
>>>>>>> index 68d9f7b..43f2d78 100644
>>>>>>> --- a/net/tipc/node.c
>>>>>>> +++ b/net/tipc/node.c
>>>>>>> @@ -40,6 +40,7 @@
>>>>>>>  #include "name_distr.h"
>>>>>>>  #include "socket.h"
>>>>>>>  #include "bcast.h"
>>>>>>> +#include "monitor.h"
>>>>>>>  #include "discover.h"
>>>>>>>  #include "netlink.h"
>>>>>>>
>>>>>>> @@ -191,16 +192,6 @@ int tipc_node_get_mtu(struct net *net, u32 addr, u32 sel)
>>>>>>>         tipc_node_put(n);
>>>>>>>         return mtu;
>>>>>>>  }
>>>>>>> -/*
>>>>>>> - * A trivial power-of-two bitmask technique is used for speed, since this
>>>>>>> - * operation is done for every incoming TIPC packet. The number of hash table
>>>>>>> - * entries has been chosen so that no hash chain exceeds 8 nodes and will
>>>>>>> - * usually be much smaller (typically only a single node).
>>>>>>> - */
>>>>>>> -static unsigned int tipc_hashfn(u32 addr)
>>>>>>> -{
>>>>>>> -       return addr & (NODE_HTABLE_SIZE - 1);
>>>>>>> -}
>>>>>>>
>>>>>>>  static void tipc_node_kref_release(struct kref *kref)
>>>>>>>  {
>>>>>>> @@ -265,6 +256,7 @@ static void tipc_node_write_unlock(struct tipc_node *n)
>>>>>>>         u32 addr = 0;
>>>>>>>         u32 flags = n->action_flags;
>>>>>>>         u32 link_id = 0;
>>>>>>> +       u32 bearer_id;
>>>>>>>         struct list_head *publ_list;
>>>>>>>
>>>>>>>         if (likely(!flags)) {
>>>>>>> @@ -274,6 +266,7 @@ static void tipc_node_write_unlock(struct tipc_node *n)
>>>>>>>         addr = n->addr;
>>>>>>>         link_id = n->link_id;
>>>>>>> +       bearer_id = link_id & 0xffff;
>>>>>>>         publ_list = &n->publ_list;
>>>>>>>
>>>>>>>         n->action_flags &= ~(TIPC_NOTIFY_NODE_DOWN | TIPC_NOTIFY_NODE_UP |
>>>>>>> @@ -287,13 +280,16 @@ static void tipc_node_write_unlock(struct tipc_node *n)
>>>>>>>         if (flags & TIPC_NOTIFY_NODE_UP)
>>>>>>>                 tipc_named_node_up(net, addr);
>>>>>>>
>>>>>>> -       if (flags & TIPC_NOTIFY_LINK_UP)
>>>>>>> +       if (flags & TIPC_NOTIFY_LINK_UP) {
>>>>>>> +               tipc_mon_peer_up(net, addr, bearer_id);
>>>>>>>                 tipc_nametbl_publish(net, TIPC_LINK_STATE, addr, addr,
>>>>>>>                                      TIPC_NODE_SCOPE, link_id, addr);
>>>>>>> -
>>>>>>> -       if (flags & TIPC_NOTIFY_LINK_DOWN)
>>>>>>> +       }
>>>>>>> +       if (flags & TIPC_NOTIFY_LINK_DOWN) {
>>>>>>> +               tipc_mon_peer_down(net, addr, bearer_id);
>>>>>>>                 tipc_nametbl_withdraw(net, TIPC_LINK_STATE, addr,
>>>>>>>                                       link_id, addr);
>>>>>>> +       }
>>>>>>>  }
>>>>>>>
>>>>>>>  struct tipc_node *tipc_node_create(struct net *net, u32 addr, u16 capabilities)
>>>>>>> @@ -674,6 +670,7 @@ static void tipc_node_link_down(struct tipc_node *n, int bearer_id, bool delete)
>>>>>>>         struct tipc_link *l = le->link;
>>>>>>>         struct tipc_media_addr *maddr;
>>>>>>>         struct sk_buff_head xmitq;
>>>>>>> +       int old_bearer_id = bearer_id;
>>>>>>>
>>>>>>>         if (!l)
>>>>>>>                 return;
>>>>>>> @@ -693,6 +690,8 @@ static void tipc_node_link_down(struct tipc_node *n, int bearer_id, bool delete)
>>>>>>>                 tipc_link_fsm_evt(l, LINK_RESET_EVT);
>>>>>>>         }
>>>>>>>         tipc_node_write_unlock(n);
>>>>>>> +       if (delete)
>>>>>>> +               tipc_mon_remove_peer(n->net, n->addr, old_bearer_id);
>>>>>>>         tipc_bearer_xmit(n->net, bearer_id, &xmitq, maddr);
>>>>>>>         tipc_sk_rcv(n->net, &le->inputq);
>>>>>>>  }

