Hi Hans,

thanks for your answer, and for once again making things much clearer.


> ·        OpenSplice NetworkPartitions are a means to physically
> partition the communication-space (and by means of mapping 'relate'
> this to the logical partitioning of the global-data-space by means of
> DDS-partitions).
> ·        Unicast addressing is one of the communication-methods that
> can be utilized to get data distributed within a networkPartition (or
> the 'GlobalPartition' if there are no explicit networkPartitions
> defined).

I find the OpenSplice feature of mapping networkPartitions to DDS
partitions/topics very interesting, as it allows clear and controllable
isolation of the network traffic generated by semantically different
sets of data.
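For concreteness, this is the kind of static mapping I have in mind (the
partition name 'sensorData' and the addresses are just placeholders of
mine, not taken from a real deployment):

```xml
<!-- Sketch only: 'sensorData' and the address list are hypothetical. -->
<Partitioning>
   <GlobalPartition Address="224.0.0.42"/>
   <NetworkPartitions>
      <!-- Traffic for this network partition goes only to these unicast addresses -->
      <NetworkPartition Address="10.0.0.2 10.0.0.3" Connected="true" Name="sensorData"/>
   </NetworkPartitions>
   <PartitionMappings>
      <!-- Map the DCPS partition 'sensorData' (all topics) onto it -->
      <PartitionMapping DCPSPartitionTopic="sensorData.*" NetworkPartition="sensorData"/>
   </PartitionMappings>
</Partitioning>
```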

However, given the current implementation, I was wondering whether there
is any other reason, besides administration/filtering constraints,
to use unicast addressing within a single networkPartition.

As for the second mechanism, i.e. dynamic discovery, I totally agree
with your considerations about the need for scalable protocols (not only
for discovery) to enable the realization of very large-scale systems,
both in terms of number of participants and of geographical dispersion.

From what I have read about dynamic discovery, it basically provides a
means of extending the 'GlobalPartition' with the unicast addresses of
the nodes whose role matches one (or more) given scope expressions,
obtained by querying one or more nodes in the statically configured
ProbeList.
One thing that is not really clear to me is the following: doesn't the
fact that they are all added to the same 'GlobalPartition' prevent me
from achieving the fine-grained control over network traffic that
networkPartitions give? I will try to explain myself better with an
example.

Consider for example the simple scenario in which a DomainParticipant
'DP1' on one node creates two publishers: one for partition 'A' and the
other for partition 'B'. For each of those publishers, a data writer is
also created; both data writers periodically write data on topic 'T'.
Now, imagine that 'DP1' discovers, through dynamic discovery, a list of
remote unicast addresses, perhaps corresponding to nodes "on the other
side" of a WAN. However, some of these nodes are interested only in the
instances of topic 'T' in partition 'A', and the others only in the
instances of topic 'T' in partition 'B'. Since all these addresses are
in the 'GlobalPartition', isn't every data sample written by DP1
forwarded towards all of them?
Is there a way to combine the fine-grained control granted by a
mechanism such as OpenSplice networkPartitions with the flexibility of
dynamic discovery?
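To make the concern concrete, here is a small toy model of how I
understand the fan-out. This is plain Python, not the actual middleware
logic; the routing rules (first matching mapping wins, otherwise fall
back to the GlobalPartition) are my own assumptions:

```python
import fnmatch

# Toy model of my understanding of sample fan-out (NOT OpenSplice internals):
# a sample written in a DCPS partition is sent to the unicast addresses of
# the first networkPartition whose mapping pattern matches, or to every
# discovered address when only the GlobalPartition applies.

def destinations(dcps_partition, topic, network_partitions, global_addresses):
    """network_partitions: list of (pattern, [addresses]) pairs, where the
    pattern matches 'partition.topic' as in a PartitionMapping."""
    key = f"{dcps_partition}.{topic}"
    for pattern, addresses in network_partitions:
        if fnmatch.fnmatch(key, pattern):
            return addresses
    return global_addresses  # fall back to the GlobalPartition

# Everything discovered dynamically ends up in the GlobalPartition...
discovered = ["N2", "N3", "N4", "N5"]

# ...so without mappings, a sample in 'A' also reaches nodes that only want 'B':
print(destinations("A", "T", [], discovered))   # ['N2', 'N3', 'N4', 'N5']

# With per-partition networkPartitions, the traffic can be separated:
mappings = [("A.*", ["N2", "N3"]), ("B.*", ["N4", "N5"])]
print(destinations("A", "T", mappings, discovered))  # ['N2', 'N3']
print(destinations("B", "T", mappings, discovered))  # ['N4', 'N5']
```

What I do not see is how the second case could be configured when the
addresses are only known after discovery, rather than statically.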

Kind regards,
Andrea

> 
> ·        Given the size and (unicast) protocol restrictions of many
> large-scale/WAN systems, a discovery mechanism is required where the
> scalability of the dynamic system is ensured whilst minimizing the
> communication overhead of the required discovery process. For these
> reasons OpenSplice DDS provides a dynamic unicast-discovery protocol
> where the physical network can be overlaid with a notion of ‘Roles’
> and related communication-scopes such that only nodes within a defined
> ‘scope-of-interest’ will be automatically discovered and their state
> maintained in a distributed/fault-tolerant manner by the OpenSplice
> DDS middleware. Other DDS-vendors either rely on a protocol that
> requires multicast for discovery of all DDS-entities (rather than
> communication-nodes) or rely on a centralized service that can
> be/become a single-point-of-failure in the dynamic system. Finally,
> especially in hierarchical systems, a scalable discovery protocol
> (such as in OpenSplice DDS) actually PREVENTS ‘horizontal’
> communication between physically connected endpoints (e.g. nodes on the
> same ‘level’, yet in another ‘branch’ of a hierarchical system) even
> if from a DDS-perspective they share interest in the same information
> (topic/partitions). Without a clear notion of (hierarchical) ‘role’
> and ‘scope’, other DDS-implementations are likely to ‘blow-up’ the
> underlying platform with discovery activities/traffic as information
> will start flowing ‘horizontally’ between nodes that are on ‘the same’
> hierarchical level (yet belong to different ‘branches’) in combination
> with protocols that require each individual application-level
> communication-endpoint to be discovered and its state maintained (by
> individual heartbeats).
> 
> -Hans
> 
> Hans van 't Hag
> OpenSplice DDS Product Manager
> PrismTech Netherlands
> Email: [email protected]
> Tel: +31742472572
> Fax: +31742472571
> Gsm: +31624654078
> 
> PrismTech is a global leader in standards-based, performance-critical
> middleware.  Our products enable our OEM, Systems Integrator, and End
> User customers to build and optimize high-performance systems
> primarily for Mil/Aero, Communications, Industrial, and Financial
> Markets.
> 
> -----Original Message-----
> From: [email protected]
> [mailto:[email protected]] On Behalf Of Andrea Reale
> Sent: Friday, January 20, 2012 11:20 AM
> To: [email protected]
> Subject: Re: [OSPL-Dev] Network partitioning and discovery
> 
> Hi Hans,
> 
> thanks for your very clear answer.
> 
> So, if I did not misunderstand your explanation, does this practically
> mean that the main use cases for defining network partitions associated
> with unicast addresses are those where using multicast is impossible
> due to administration-related issues (e.g., multicast is filtered)?
> 
> Are there any other use cases that I am not seeing?
> 
> Thanks again for your support.
> Andrea
> 
> On Thu, 2012-01-19 at 14:16 +0100, Hans van't Hag wrote:
> 
> > Hi Andrea,
> > 
> > Sorry for the late reply .. anyhow, yes, this behavior is normal, as you
> > explicitly state that data sent to the logical DDS-partition "part" should
> > be 'pushed' out to the NetworkPartition "part", which is defined as the
> > N2/N3/N4/N5 unicast address-set.
> > 
> > If you have discovery enabled (which you have), there is the
> > 'optimization' that as long as there's nobody interested in the data,
> > OpenSplice won't even bother to send it on the wire, yet as soon as
> > there's one interested node, it WILL be sent to the wire following the
> > partition-definitions as set up.
> > 
> > Technically it could of course be possible to optimize the algorithm, yet
> > that's currently not in place in the community edition's RT-networking
> > service.
> > 
> > -----Original Message-----
> > From: [email protected]
> > [mailto:[email protected]] On Behalf Of Andrea Reale
> > Sent: Wednesday, January 11, 2012 11:48 AM
> > To: [email protected]
> > Subject: Re: [OSPL-Dev] Network partitioning and discovery
> > 
> > While writing the previous post I made a mistake in copying the excerpt
> > of my configuration file.
> > The actual one I am using is:
> > 
> > <Partitioning>
> >    <GlobalPartition Address="224.0.0.42"/>
> >    <NetworkPartitions>
> >       <NetworkPartition Address="N2 N3 N4 N5" Connected="true" Name="part"/>
> >    </NetworkPartitions>
> >    <PartitionMappings>
> >       <PartitionMapping DCPSPartitionTopic="part.*" NetworkPartition="part"/>
> >    </PartitionMappings>
> > </Partitioning>
> > 
> > Sorry for the double post, and thanks again for any help you will
> > provide.
> > 
> > Regards,
> > Andrea
> > 
> > On Wed, 2012-01-11 at 11:34 +0100, Andrea Reale wrote:
> > > Hello everyone.
> > >
> > > I am confused about how the static discovery works in relation to
> > > network partitioning. In particular, here is my use case.
> > >
> > > On one node (call it N1), I run a domain participant with one data
> > > writer which writes some data to a topic 'T' in partition 'part'.
> > > The reliability QoS for the data writer is best-effort with KEEP_LAST
> > > history, and history.depth = 1.
> > >
> > > The ospl configuration for that node (N1) for what concerns network
> > > partitions is as follows:
> > >
> > > ...
> > > <Partitioning>
> > >    <GlobalPartition Address="224.0.0.42"/>
> > >    <NetworkPartitions>
> > >       <NetworkPartition Address="N2 N3 N4 N5" Connected="true" Name="part"/>
> > >    </NetworkPartitions>
> > >    <PartitionMappings>
> > >       <PartitionMapping DCPSPartitionTopic="part.*" NetworkPartition="inputoutput"/>
> > >    </PartitionMappings>
> > > </Partitioning>
> > > ...
> > >
> > > N2, N3, N4, and N5 are the unicast IP addresses of four other
> > > potential domain participants.
> > >
> > > Now, if no data reader matching the data writer on N1 is started in
> > > the domain, I see no traffic going out from N1, as one would expect.
> > > However, if I start exactly one data reader on -- for example -- N2, I
> > > see that N1 generates UDP traffic towards ALL the hosts in the
> > > partition (i.e., N2, N3, N4, N5) even though no opensplice instance is
> > > running on N3, N4 and N5. The destination port of these messages is
> > > 53370, the port of the best-effort channel.
> > >
> > > Is this behaviour normal? I would have expected that no traffic was
> > > generated towards the nodes not running opensplice...
> > >
> > > Thanks,
> > > andrea
> > >
> > > _______________________________________________
> > > OpenSplice DDS Developer Mailing List
> > > [email protected]
> > > Subscribe / Unsubscribe
> > > http://dev.opensplice.org/mailman/listinfo/developer


_______________________________________________
OpenSplice DDS Developer Mailing List
[email protected]
Subscribe / Unsubscribe http://dev.opensplice.org/mailman/listinfo/developer
