Hello,

Thanks Jon! I think this clarified things quite a bit.

BR,
--
 Juhamatti

On Thu, 11 Oct 2018 at 18:02 Jon Maloy ([email protected]) wrote:

> Hi Juhamatti,
> See below.
>
> > -----Original Message-----
> > From: [email protected] <[email protected]>
> > Sent: October 11, 2018 1:00 AM
> > To: [email protected]
> > Subject: [tipc-discussion] TIPC scalability viewpoints
> >
> > Hello,
> >
> > I have come across claims that TIPC would not scale to more than a few
> > hundred sockets due to the discovery part of the protocol itself.
>
> There is no such limitation. In our own live clusters we are seeing tens
> of thousands of sockets per node.
> The neighbor discovery protocol has nothing to do with sockets, only nodes.
> How far a cluster scales depends on traffic and environment.
> In our live systems we are running ~75 nodes, but we have tested up to 800
> node clusters, using the "Overlapping Ring Monitoring" algorithm, which
> kicks in at a cluster size of >32 nodes.
> The binding table and topology (service tracking) service handle sockets,
> that is true, but they are fully capable of handling the amount of sockets
> we are seeing in our systems.
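As a concrete illustration of the monitoring algorithm mentioned above, the tipc userland tool can inspect and tune the neighbor-monitoring state. This is a sketch, not taken from the thread; the exact subcommands and the threshold value shown are assumptions that may vary by tool and kernel version:

```shell
# Show the current neighbor-monitoring state (requires the tipc
# userland tool and a configured, running TIPC node)
tipc link monitor summary

# The "Overlapping Ring Monitoring" algorithm activates once the
# cluster grows past a size threshold (32 by default); the threshold
# can be tuned, e.g. to activate ring monitoring earlier:
tipc link monitor set threshold 16
```

These commands act on a live TIPC node, so they need root privileges and the tipc kernel module loaded.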
>
> > From the TIPC specs I
> > cannot really find any support for that claim; however, removing
> > zone-handling may cause all clusters to be pushed into the same zone.
>
> Despite what the specification used to claim, there was in reality never
> any multi-cluster or multi-zone support.  By manipulating zone number,
> cluster number or network identity (now called cluster identity) one could
> create multiple clusters on the same network, but those always remained
> isolated islands, in the sense that it was impossible to establish TIPC
> links between the different clusters. Users were (and are) encouraged to
> use TCP instead. With the new addressing scheme we have discussed the
> option of letting individual nodes become members of two or more clusters,
> and hence potentially act as "routers" between them. This is fully
> possible, and probably not very difficult to do. So the day such a
> requirement arrives we will consider it.
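For concreteness, the isolation Jon describes can be reproduced with the tipc tool: nodes configured with different network (cluster) identities never establish TIPC links with each other, even on the same LAN. This is a sketch under assumptions; the netid values and the `eth0` device name are placeholders:

```shell
# Node A: assign a network (cluster) identity, then enable an
# Ethernet bearer so the node starts neighbor discovery
tipc node set netid 1111
tipc bearer enable media eth device eth0

# Node B, on the same LAN but with a different cluster identity:
tipc node set netid 2222
tipc bearer enable media eth device eth0

# Discovery messages carry the netid, and a node only accepts peers
# with a matching one, so A and B remain isolated islands; traffic
# between such clusters has to use something like TCP instead.
```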
>
> > Also, I guess the service discovery and liveliness polling between the
> > clusters could be a problem too, if the clusters cannot be fully detached
> > in reality. As I am not really aware of the internals of the TIPC
> > implementation, are any clarifications available on this subject?
>
> As said, there is no inter-cluster communication, so this is not an issue.
> The probing/neighbor monitoring between nodes within the *same* cluster was
> an issue as long as we were using full-mesh (all-to-all) neighbor
> supervision, but that problem has been substantially mitigated by the
> monitoring scheme mentioned above.
>
> BR
> ///jon
>
> >
> > Thanks,
> > --
> >  Juhamatti
> >
> > _______________________________________________
> > tipc-discussion mailing list
> > [email protected]
> > https://lists.sourceforge.net/lists/listinfo/tipc-discussion
>
