Hi Erik,

Thanks again for replying. I am sorry, but I am not able to completely
correlate your explanation with my requirement. I am using the TRANSIENT
durability QoS for delayed messages, but this only keeps the data available
while the DDS service is running, am I correct? To make the data exist
beyond the DDS service's lifespan (for example, if DDS crashes, or the node
running DDS crashes), we may need to use PERSISTENT durability, or, as in
my experiments, the autodispose_unregistered_instances flag, right? But
with PERSISTENT durability, the problem is that I don't know whether DDS
currently provides any handle to control the resource usage that comes
with continuously adding data to the resource file. Data once added to
this resource XML file seems hard to delete; I have not found any option
for that so far.

If I have to keep the message alive beyond the DDS lifespan, is that
possible without PERSISTENT durability and the
autodispose_unregistered_instances flag, using just TRANSIENT durability
or wait_for_historical_data?
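To make sure I understand the distinction, I put together a toy model (plain Python, nothing to do with the real DDS API; every name here is made up): a TRANSIENT store lives only in the durability service's memory, while a PERSISTENT store also flushes samples to disk, so a restarted service can reload them.

```python
import json
import os
import tempfile

class TransientStore:
    """Toy model: data lives only in memory, lost when the service dies."""
    def __init__(self):
        self.samples = {}

    def write(self, key, value):
        self.samples[key] = value

class PersistentStore(TransientStore):
    """Toy model: every write is also flushed to a file on disk."""
    def __init__(self, path):
        super().__init__()
        self.path = path
        if os.path.exists(path):          # a restarted service reloads
            with open(path) as f:
                self.samples = json.load(f)

    def write(self, key, value):
        super().write(key, value)
        with open(self.path, "w") as f:
            json.dump(self.samples, f)

# Simulate a service crash by throwing the store away and making a new one.
path = os.path.join(tempfile.mkdtemp(), "store.json")
t = TransientStore(); t.write("msg1", "hello")
p = PersistentStore(path); p.write("msg1", "hello")
t_after = TransientStore()        # transient: data is gone after "restart"
p_after = PersistentStore(path)   # persistent: data is reloaded from disk
print(t_after.samples, p_after.samples)
```

This is only how I picture the difference, not how the durability service is actually implemented.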

And the other point I was thinking about regarding the disposing part:
my problem would still be the uncertainty of message delivery. If I write
a message to all partitions and then dispose it, in real time it might
reach the other nodes with the NOT_ALIVE_NO_WRITERS or NOT_ALIVE_DISPOSED
instance state on the subscriber end, but with historical data the
scenario will change, I guess: the messages won't be alive for late
joiners. Even to selectively dispose message instances for only those
nodes that have received the message, I would need to know the delivery
status. Please let me know if my assumption about all this is wrong.

I will give the writer-detachment option a try and see how it works out.
Thanks again for the detailed explanation.

On Wed, May 9, 2012 at 3:10 PM, Erik Hendriks
<[email protected]> wrote:

>  Hi Ravi,
>
> Reaching a node that is not yet up when a message is transmitted is the
> task of the durability service: it keeps track of all TRANSIENT/PERSISTENT
> messages that have not yet been disposed in case a late-joiner expresses a
> need for them. A late joining node can receive this initial data from the
> durability service by setting its durability QoS to TRANSIENT/PERSISTENT,
> or by explicitly invoking the wait_for_historical_data operation.
>
> If you write your data in 1 or more partitions, this data is also stored
> by the durability service for only those partitions to which you wrote
> them. A late joiner that does not attach to one of the partitions
> containing the transient data, will not receive this initial data.
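I tried to capture the mechanism Erik describes above in a toy model (plain Python; none of these names are real DDS API, they are purely illustrative): the durability service retains undisposed TRANSIENT samples per partition, and a late joiner only receives the retained samples of the partitions it attaches to.

```python
class ToyDurabilityService:
    """Toy model: retains undisposed samples per partition for late joiners."""
    def __init__(self):
        self.store = {}                       # partition -> {instance: sample}

    def on_write(self, partitions, instance, sample):
        for p in partitions:                  # one copy per written partition
            self.store.setdefault(p, {})[instance] = sample

    def on_dispose(self, partitions, instance):
        for p in partitions:                  # dispose removes retained copies
            self.store.get(p, {}).pop(instance, None)

    def historical_data_for(self, reader_partitions):
        out = {}                              # only matching partitions count
        for p in reader_partitions:
            out.update(self.store.get(p, {}))
        return out

ds = ToyDurabilityService()
ds.on_write(["nodeA", "nodeB"], "cmd-1", "payload")
late_a = ds.historical_data_for(["nodeA"])    # attached partition: gets data
late_c = ds.historical_data_for(["nodeC"])    # other partition: gets nothing
print(late_a, late_c)
```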
>
> Now what happens when a writer transmits an instance over 2 partitions,
> and then detaches from the first partition and then disposes that instance
> only in the remaining (2nd) partition? Let’s first talk about the effect
> of the detachment of the partition. Basically what will happen is that the
> writer, by changing its Qos, is changing its connectivity to the Readers.
> In such cases, it will behave for all disconnected readers like it
> unregistered all its instances (taking into account the
> autodispose_unregistered_instances flag). That means that the disconnected
> readers will see an instance_state NOT_ALIVE_NO_WRITERS when the flag is
> FALSE (assuming no other writers have currently registered the same
> instances), and an instance_state NOT_ALIVE_DISPOSED when the flag is TRUE.
>
> The writer now sends his DISPOSE over the remaining (2nd) partition.
> Since it is no longer connected to Readers attached to only the first
> partition, they will not receive the dispose and so their instance state
> will no longer change. The Readers still connected to the 2nd partition
> will receive the dispose and process it accordingly.
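The detach-then-dispose scenario in the two paragraphs above can be condensed into a toy model (plain Python, hypothetical names, not the DDS API): detaching from a partition looks like an unregister to the readers there, with the resulting instance state chosen by the autodispose flag, while the later dispose only reaches readers still connected.

```python
NOT_ALIVE_NO_WRITERS = "NOT_ALIVE_NO_WRITERS"
NOT_ALIVE_DISPOSED = "NOT_ALIVE_DISPOSED"

class ToyReader:
    def __init__(self, partition):
        self.partition = partition
        self.instance_state = "ALIVE"

class ToyWriter:
    def __init__(self, partitions, autodispose):
        self.partitions = set(partitions)
        self.autodispose = autodispose        # autodispose_unregistered_instances

    def detach(self, partition, readers):
        """Changing partition QoS: disconnected readers see an implicit
        unregister, interpreted according to the autodispose flag."""
        self.partitions.discard(partition)
        for r in readers:
            if r.partition == partition:
                r.instance_state = (NOT_ALIVE_DISPOSED if self.autodispose
                                    else NOT_ALIVE_NO_WRITERS)

    def dispose(self, readers):
        """The dispose only reaches readers in still-connected partitions."""
        for r in readers:
            if r.partition in self.partitions:
                r.instance_state = NOT_ALIVE_DISPOSED

r1, r2 = ToyReader("p1"), ToyReader("p2")
w = ToyWriter(["p1", "p2"], autodispose=False)
w.detach("p1", [r1, r2])      # r1 sees NOT_ALIVE_NO_WRITERS, r2 unchanged
w.dispose([r1, r2])           # only r2 sees NOT_ALIVE_DISPOSED
print(r1.instance_state, r2.instance_state)
```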
>
> What is the consequence of this for the Durability Service? Basically the
> Durability Service acts like a normal Reader that subscribes to all
> TRANSIENT data for each individual partition. That means that when the data
> is published into 2 separate partitions, the Durability Service stores 2
> separate copies: 1 for each partition. When the writer disconnects from the
> first partition, the Durability Service will disconnect the writer from
> that partition and determine, based on the value of the writer's
> autodispose_unregistered_instances flag, whether it needs to dispose all
> data originating from that writer. If this is the case, the copy of each
> sample in the first partition will be discarded from the durability
> service, but this has no effect on the copies of each sample in the 
> 2nd(remaining) partition.
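As I understand it, the per-partition bookkeeping described here boils down to the following toy sketch (plain Python, purely illustrative): the durability service holds one copy per partition, and the writer's disconnect from one partition discards only that partition's copy, and only when the autodispose flag is TRUE.

```python
# Two independent copies, one per partition the sample was written into.
store = {"p1": {"inst": "sample"}, "p2": {"inst": "sample"}}

def on_writer_disconnect(store, partition, autodispose):
    """Toy model: the durability service discards the writer's samples in
    that partition only when autodispose_unregistered_instances is TRUE."""
    if autodispose:
        store[partition].clear()

on_writer_disconnect(store, "p1", autodispose=True)
print(store)    # the p1 copy is gone, the p2 copy is untouched
```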
>
> I hope this answers your question adequately.
>
>
> With best regards,
> Erik
>
> *Erik Hendriks*
> Sr. Software Engineer
>
> Email: [email protected]
> Tel:     +31-74-247-2575
> Fax:    +31-74-247-2571
> Web:   www.prismtech.com
>
> PrismTech is a global leader in standards-based, performance-critical
> middleware. Our products enable our OEM, Systems Integrator, and End User
> customers to build and optimize high-performance systems primarily for
> Mil/Aero, Communications, Industrial, and Financial Markets.
>
> Date: Mon, 23 Apr 2012 21:02:26 +0530
> From: Ravi Chandran <[email protected]>
> Subject: Re: [OSPL-Dev] Disposing a message published with
>       unregistered instance autodispose set to false
> To: OpenSplice DDS Developer Mailing List <[email protected]>
> Message-ID:
>       <CANVvbaa_NWgJY8aCPaBmNvKpgt33RF26WNzreX0T-sd=tpe...@mail.gmail.com>
> Content-Type: text/plain; charset="windows-1252"
>
> Thanks Erik for this excellent explanation; it was very helpful for
> understanding how dispose vs. unregister works. Okay, coming back to my
> problem: I have some nodes with their own partitions, and at random
> points in time I publish messages to some or all of these partitions.
> Now, one scenario is that some of these nodes are down when I publish a
> message to their partition (I am using RELIABLE and TRANSIENT QoS). How
> do I make the messages reach the nodes that are down?
>
> It's a typical case of getting historical data. I tried experimenting
> with just two nodes, and I found that when I was not using
> autodispose_unregistered_instances = false while unregistering the
> instance (that was before reading your explanation), and then started
> DDS on the second node, I was not getting the message on the subscriber
> end.
>
> But with autodispose = false, whenever I restarted DDS I got the same
> published messages again, and this happened repeatedly. Now, the reason
> I am not disposing the message, or setting the
> autopurge_nowriter_samples_delay and autopurge_disposed_samples_delay
> values you mentioned, is that I don't know whether disposing the message
> will dispose it for all the partitions to which Node1 published it, or
> not.
>
> What are the ways to clear out all the samples of an instance on the
> Reader side once I have received the intended message?
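For context, the general DCPS distinction relevant to this question, sketched as a toy model (plain Python, not the real DDS API): read() leaves samples in the reader's cache, while take() removes them.

```python
class ToyReaderCache:
    """Toy model of a DataReader's sample cache."""
    def __init__(self):
        self.samples = []

    def on_data(self, sample):
        self.samples.append(sample)

    def read(self):
        return list(self.samples)      # samples stay in the cache

    def take(self):
        taken, self.samples = self.samples, []
        return taken                   # samples are removed from the cache

rc = ToyReaderCache()
rc.on_data("msg")
print(rc.read(), rc.samples)   # read leaves the sample in place
print(rc.take(), rc.samples)   # take empties the cache
```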
>
> _______________________________________________
> OpenSplice DDS Developer Mailing List
> [email protected]
> Subscribe / Unsubscribe
> http://dev.opensplice.org/mailman/listinfo/developer
>
>


-- 
Thanks & Regards
Ravi