Hi Ravi,


Reaching a node that is not yet up when a message is transmitted is the
task of the durability service: it keeps track of all TRANSIENT/PERSISTENT
messages that have not yet been disposed, in case a late joiner expresses a
need for them. A late-joining node can receive this initial data from the
durability service by setting its durability QoS to TRANSIENT/PERSISTENT,
or by explicitly invoking the wait_for_historical_data operation.
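To make the bookkeeping concrete, here is a toy Python model (not the DDS
API; names like DurabilityService are invented for illustration) of how
undisposed transient samples are retained for late joiners:

```python
# Toy model: a durability service retains every TRANSIENT sample that
# has not been disposed, and replays that set to a late-joining reader.

class DurabilityService:
    def __init__(self):
        self.samples = {}          # instance key -> latest sample

    def on_write(self, key, value):
        self.samples[key] = value  # keep undisposed transient data

    def on_dispose(self, key):
        self.samples.pop(key, None)  # disposed data is no longer kept

    def historical_data(self):
        # What a late joiner receives (cf. wait_for_historical_data)
        return dict(self.samples)

svc = DurabilityService()
svc.on_write("sensor-1", 21.5)
svc.on_write("sensor-2", 19.0)
svc.on_dispose("sensor-2")

# A late joiner only sees the instance that was never disposed.
print(svc.historical_data())   # {'sensor-1': 21.5}
```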



If you write your data into one or more partitions, this data is also
stored by the durability service, but only for those partitions into which
you wrote it. A late joiner that does not attach to one of the partitions
containing the transient data will not receive this initial data.
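The partition scoping can be sketched as follows (again a toy Python
model, not the DDS API; PartitionedStore is an invented name):

```python
# Toy model: the durability service stores transient data per partition,
# and a late joiner only receives history for the partitions it attaches to.

class PartitionedStore:
    def __init__(self):
        self.data = {}  # partition name -> {instance key: sample}

    def on_write(self, partitions, key, value):
        # A write into multiple partitions is stored once per partition.
        for p in partitions:
            self.data.setdefault(p, {})[key] = value

    def historical_data(self, reader_partitions):
        # A late joiner sees only the partitions it is attached to.
        merged = {}
        for p in reader_partitions:
            merged.update(self.data.get(p, {}))
        return merged

store = PartitionedStore()
store.on_write(["A", "B"], "k1", "v1")   # published into partitions A and B
store.on_write(["C"], "k2", "v2")        # published into partition C only

print(store.historical_data(["A"]))      # {'k1': 'v1'}
print(store.historical_data(["D"]))      # {} -- not attached to A, B, or C
```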



Now what happens when a writer transmits an instance over two partitions,
then detaches from the first partition, and then disposes that instance
only in the remaining (second) partition? Let’s first talk about the
effect of detaching from the partition. Basically, by changing its QoS,
the writer is changing its connectivity to the Readers. In such cases it
will behave, for all disconnected Readers, as if it unregistered all its
instances (taking into account the auto_dispose_unregistered_instances
flag). That means that the disconnected Readers will see an instance_state
of NOT_ALIVE_NO_WRITERS when the flag is FALSE (assuming no other writers
have currently registered the same instances), and an instance_state of
NOT_ALIVE_DISPOSED when the flag is TRUE.
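The resulting instance state on the disconnected Readers can be summarized
with a small Python sketch (a toy model of the rule above, not the DDS API):

```python
# Toy model: the instance state a disconnected reader observes, depending
# on the writer's auto_dispose_unregistered_instances flag (assuming no
# other writer has the same instance registered).

ALIVE = "ALIVE"
NOT_ALIVE_NO_WRITERS = "NOT_ALIVE_NO_WRITERS"
NOT_ALIVE_DISPOSED = "NOT_ALIVE_DISPOSED"

def state_after_disconnect(auto_dispose, other_writers=0):
    if other_writers > 0:
        return ALIVE  # another writer still has the instance registered
    return NOT_ALIVE_DISPOSED if auto_dispose else NOT_ALIVE_NO_WRITERS

print(state_after_disconnect(auto_dispose=False))  # NOT_ALIVE_NO_WRITERS
print(state_after_disconnect(auto_dispose=True))   # NOT_ALIVE_DISPOSED
```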



The writer now sends its DISPOSE over the remaining (second) partition.
Since it is no longer connected to Readers attached only to the first
partition, they will not receive the dispose, and so their instance_state
will not change any further. The Readers still connected to the second
partition will receive the dispose and process it accordingly.



What is the consequence of this for the Durability Service? Basically the
Durability Service acts like a normal Reader that subscribes to all
TRANSIENT data for each individual partition. That means that when the data
is published into two separate partitions, the Durability Service stores
two separate copies: one for each partition. When the writer disconnects
from the first partition, the Durability Service will disconnect the writer
from that partition and determine, based on the value of the Writer’s
auto_dispose_unregistered_instances flag, whether it needs to dispose all
data originating from that writer. If so, the copy of each sample in the
first partition will be discarded from the Durability Service, but this has
no effect on the copies of each sample in the second (remaining) partition.
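Putting the whole scenario together as a toy Python model (not the DDS
API; the partition and instance names are invented):

```python
# Toy model of the full scenario: an instance written into partitions
# P1 and P2, after which the writer detaches from P1 and then disposes
# the instance in P2. The durability service keeps one copy per partition.

def run_scenario(auto_dispose):
    store = {("P1", "inst"): "sample", ("P2", "inst"): "sample"}
    # Detaching from P1 acts like an unregister; only with the
    # auto_dispose flag TRUE is the P1 copy also disposed and discarded.
    if auto_dispose:
        store.pop(("P1", "inst"))
    # The explicit dispose travels only over the remaining partition P2,
    # so it removes the P2 copy and never touches P1's copy.
    store.pop(("P2", "inst"))
    return store

print(run_scenario(auto_dispose=False))  # {('P1', 'inst'): 'sample'}
print(run_scenario(auto_dispose=True))   # {}
```

With the flag FALSE, the P1 copy survives both operations, which is why a
late joiner attaching to P1 would still receive the instance.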



I hope this answers your question adequately.


With best regards,
Erik



*Erik Hendriks*
Sr. Software Engineer

Email: [email protected]
Tel:     +31-74-247-2575
Fax:    +31-74-247-2571
Web:   www.prismtech.com

PrismTech is a global leader in standards-based, performance-critical
middleware. Our products enable our OEM, Systems Integrator, and End User
customers to build and optimize high-performance systems primarily for
Mil/Aero, Communications, Industrial, and Financial Markets.



Date: Mon, 23 Apr 2012 21:02:26 +0530

From: Ravi Chandran <[email protected]>

Subject: Re: [OSPL-Dev] Disposing a message published with

      unregistered instance autodispose set to false

To: OpenSplice DDS Developer Mailing List <[email protected]>

Message-ID:

      <CANVvbaa_NWgJY8aCPaBmNvKpgt33RF26WNzreX0T-sd=tpe...@mail.gmail.com>

Content-Type: text/plain; charset="windows-1252"



Thanks Erik for this excellent explanation; it was very helpful for me in
understanding how dispose vs. unregister works. Okay, now coming back to
my problem: I have some nodes with their own partitions, and at random
points in time I will be publishing some message to some or all of these
partitions. Now, one of the scenarios is that some of these nodes are down
when I publish the message to their partition (I am using reliable and
transient QoS). How do I make the messages reach the nodes that are down?



It's a typical case of getting historical data. I tried experimenting with
just two nodes, and I found that when I was not using "autodispose instance
= false" while unregistering the instance (that was before reading your
explanation), and I then started DDS on the second node, I was not getting
the message on the subscriber end.



But with autodispose = false, whenever I restarted DDS I was getting the
same published messages again, and this was happening repeatedly. Now, the
reason I am not disposing the message or setting the
autopurge_nowriter_samples_delay and autopurge_disposed_samples_delay
values which you mentioned is that I don't know whether, if I dispose the
message, it will dispose the messages for all the partitions for which
Node1 published the message.



What are the ways to clear out all the instances of a sample from Reader
side once I receive the intended message?


_______________________________________________
OpenSplice DDS Developer Mailing List
[email protected]
Subscribe / Unsubscribe http://dev.opensplice.org/mailman/listinfo/developer
