Hi Andrea,
I think you have to decouple the use cases of unicast addressing from those of NetworkPartitions:

- OpenSplice NetworkPartitions are a means to physically partition the communication space (and, by means of mapping, to 'relate' this to the logical partitioning of the global data space through DDS partitions).
- Unicast addressing is one of the communication methods that can be utilized to get data distributed within a NetworkPartition (or the 'GlobalPartition' if there are no explicit NetworkPartitions defined).

W.r.t. unicast addressing, there is still another distinct OpenSplice DDS feature in that we also support a dedicated 'dynamic unicast discovery' mechanism in OpenSplice:

Given the size and (unicast) protocol restrictions of many large-scale/WAN systems, a discovery mechanism is required where the scalability of the dynamic system is ensured whilst the communication overhead of the required discovery process is minimized. For these reasons OpenSplice DDS provides a dynamic unicast-discovery protocol where the physical network can be overlaid with a notion of 'Roles' and related communication scopes, such that only nodes within a defined 'scope of interest' will be automatically discovered and their state maintained in a distributed, fault-tolerant manner by the OpenSplice DDS middleware.

Other DDS vendors either rely on a protocol that requires multicast for discovery of all DDS entities (rather than communication nodes), or rely on a centralized service that can be/become a single point of failure in the dynamic system. Finally, especially in hierarchical systems, a scalable discovery protocol (such as in OpenSplice DDS) actually PREVENTS 'horizontal' communication between physically connected endpoints (e.g. nodes on the same 'level', yet in another 'branch' of a hierarchical system), even if from a DDS perspective they share interest in the same information (topics/partitions).
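To make the 'Roles' and 'scope of interest' idea concrete, here is a small illustrative simulation. This is NOT OpenSplice code or configuration: the node names, role strings, and the wildcard matching rule are all invented for the example. It only shows the principle that scoping discovery to a role pattern keeps sibling 'branches' from ever discovering each other:

```python
from fnmatch import fnmatch

# Hypothetical nodes in a hierarchical system: each node has a 'Role'
# (its position in the hierarchy) and a 'scope of interest' pattern
# limiting which roles it will discover. All names are made up.
nodes = {
    "gateway":  {"role": "hq",         "scope": "hq*"},
    "branch-a": {"role": "hq.branchA", "scope": "hq.branchA*"},
    "branch-b": {"role": "hq.branchB", "scope": "hq.branchB*"},
}

def discovers(a, b):
    """A discovers B only if B's role falls within A's scope of interest."""
    return fnmatch(nodes[b]["role"], nodes[a]["scope"])

# The gateway's scope covers both branches...
assert discovers("gateway", "branch-a")
assert discovers("gateway", "branch-b")
# ...but the branches never discover each other 'horizontally',
# so no discovery traffic flows between them.
assert not discovers("branch-a", "branch-b")
assert not discovers("branch-b", "branch-a")
```

The point of the sketch: discovery traffic (and the per-node state it maintains) scales with the size of a scope, not with the size of the whole system.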
Without a clear notion of (hierarchical) 'role' and 'scope', other DDS implementations are likely to 'blow up' the underlying platform with discovery activities/traffic, as information will start flowing 'horizontally' between nodes that are on 'the same' hierarchical level (yet belong to different 'branches'), in combination with protocols that require each individual application-level communication endpoint to be discovered and its state maintained (by individual heartbeats).

-Hans

Hans van 't Hag
OpenSplice DDS Product Manager
PrismTech Netherlands
Email: [email protected]
Tel: +31742472572
Fax: +31742472571
Gsm: +31624654078

PrismTech is a global leader in standards-based, performance-critical middleware. Our products enable our OEM, Systems Integrator, and End User customers to build and optimize high-performance systems primarily for Mil/Aero, Communications, Industrial, and Financial Markets.

-----Original Message-----
From: [email protected] [mailto:[email protected]] On Behalf Of Andrea Reale
Sent: Friday, January 20, 2012 11:20 AM
To: [email protected]
Subject: Re: [OSPL-Dev] Network partitioning and discovery

Hi Hans,

thanks for your very clear answer. So, if I did not misunderstand your explanation, does this practically mean that the main use cases for defining network partitions associated with unicast addresses are those where using multicast is made impossible by administration-related issues (e.g., multicast is filtered)? Are there any other use cases that I am not seeing?

Thanks again for your support.
Andrea

On Thu, 2012-01-19 at 14:16 +0100, Hans van 't Hag wrote:
> Hi Andrea,
>
> Sorry for the late reply .. anyhow, yes, this behavior is normal, as you
> explicitly state that data sent to logical DDS partition "part" should be
> 'pushed' out to the NetworkPartition "part", which is defined as the
> N2/N3/N4/N5 unicast address set.
>
> If you have discovery enabled (which you have), there is the
> 'optimization' that, as long as there's nobody interested in the data,
> OpenSplice won't even bother to send it on the wire; yet as soon as
> there's one interested node, it WILL be sent to the wire following the
> partition definitions as they have been set up.
>
> Technically it could of course be possible to optimize the algorithm, yet
> that's currently not in place in the community edition's RT-networking
> service.
>
> -----Original Message-----
> From: [email protected]
> [mailto:[email protected]] On Behalf Of Andrea Reale
> Sent: Wednesday, January 11, 2012 11:48 AM
> To: [email protected]
> Subject: Re: [OSPL-Dev] Network partitioning and discovery
>
> While writing the previous post I made a mistake in copying the excerpt
> of my configuration file. The actual one I am using is:
>
> <Partitioning>
>   <GlobalPartition Address="224.0.0.42"/>
>   <NetworkPartitions>
>     <NetworkPartition Address="N2 N3 N4 N5" Connected="true" Name="part"/>
>   </NetworkPartitions>
>   <PartitionMappings>
>     <PartitionMapping DCPSPartitionTopic="part.*" NetworkPartition="part"/>
>   </PartitionMappings>
> </Partitioning>
>
> Sorry for the double post, and thanks again for any help you will
> provide.
>
> Regards,
> Andrea
>
> On Wed, 2012-01-11 at 11:34 +0100, Andrea Reale wrote:
> > Hello everyone.
> >
> > I am confused about how static discovery works in relation to network
> > partitioning. In particular, here is my use case.
> >
> > On one node (call it N1), I run a domain participant with one data
> > writer which writes some data to a topic 'T' in partition 'part'.
> > The reliability QoS for the data writer is best-effort with KEEP_LAST
> > history and history.depth = 1.
> >
> > The ospl configuration for that node (N1), as far as network
> > partitions are concerned, is as follows:
> >
> > ...
> > <Partitioning>
> >   <GlobalPartition Address="224.0.0.42"/>
> >   <NetworkPartitions>
> >     <NetworkPartition Address="N2 N3 N4 N5" Connected="true" Name="part"/>
> >   </NetworkPartitions>
> >   <PartitionMappings>
> >     <PartitionMapping DCPSPartitionTopic="part.*" NetworkPartition="inputoutput"/>
> >   </PartitionMappings>
> > </Partitioning>
> > ...
> >
> > N2, N3, N4, and N5 are the unicast IP addresses of four other potential
> > domain participants.
> >
> > Now, if no data reader matching the data writer on N1 is started in the
> > domain, I see no traffic going out from N1, as one would expect.
> > However, if I start exactly one data reader on -- for example -- N2, I
> > see that N1 generates UDP traffic towards ALL the hosts in the partition
> > (i.e., N2, N3, N4, N5), even though no OpenSplice instance is running on
> > N3, N4 and N5. The destination port of these messages is 53370, the port
> > of the best-effort channel.
> >
> > Is this behaviour normal? I would have expected that no traffic would be
> > generated towards the nodes not running OpenSplice...
> >
> > Thanks,
> > andrea
> >
> > _______________________________________________
> > OpenSplice DDS Developer Mailing List
> > [email protected]
> > Subscribe / Unsubscribe http://dev.opensplice.org/mailman/listinfo/developer
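Editor's note: the behaviour discussed in the thread above can be modelled in a few lines. The following is a toy illustration only (plain Python, not OpenSplice internals; the helper names are invented): the DCPSPartitionTopic wildcard expression is conceptually matched against a writer's 'partition.topic' pair to select a NetworkPartition, and once at least one interested reader exists, samples are pushed to ALL unicast addresses configured for that partition.

```python
from fnmatch import fnmatchcase

# Mapping and partition from the configuration in the thread:
#   DCPSPartitionTopic="part.*"  ->  NetworkPartition "part" = N2..N5
MAPPINGS = [("part.*", "part")]
NETWORK_PARTITIONS = {"part": ["N2", "N3", "N4", "N5"]}

def mapped_partition(dds_partition, topic):
    """Illustrative: match the 'partition.topic' pair against each
    DCPSPartitionTopic wildcard expression; first match wins."""
    key = f"{dds_partition}.{topic}"
    for expr, net_part in MAPPINGS:
        if fnmatchcase(key, expr):
            return net_part
    return None  # no mapping matched: falls back to the GlobalPartition

def destinations(dds_partition, topic, interested_readers):
    """Toy model of the observed behaviour: nothing on the wire while
    nobody is interested; once one reader exists, push to ALL addresses
    of the mapped NetworkPartition (even hosts not running OpenSplice)."""
    net_part = mapped_partition(dds_partition, topic)
    if net_part is None or not interested_readers:
        return []
    return NETWORK_PARTITIONS[net_part]

# No matching reader anywhere: no traffic leaves N1.
assert destinations("part", "T", []) == []
# One reader on N2: UDP traffic goes to N2, N3, N4 AND N5.
assert destinations("part", "T", ["reader-on-N2"]) == ["N2", "N3", "N4", "N5"]
```

This matches Hans's explanation: the "send only when somebody is interested" optimization is all-or-nothing per partition in the community edition's RT-networking service, rather than per-address.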
