Hi Val,

I was hoping to be able to use the blocking IgniteQueue.take() to achieve the
desired "push" semantics.
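
For what it's worth, the blocking-take idea can be sketched locally with a plain
java.util.concurrent.BlockingQueue standing in for IgniteQueue (the class and
producer thread here are hypothetical, just to illustrate why take() avoids a
polling loop):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Local stand-in for the IgniteQueue idea: the consumer blocks inside take(),
// so delivery feels like a push even though the queue itself is pull-based.
public class BlockingTakeDemo {
    public static String consumeOne(BlockingQueue<String> queue) {
        try {
            // take() parks until an element is available -- no polling loop.
            return queue.take();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            throw new IllegalStateException(e);
        }
    }

    public static void main(String[] args) throws Exception {
        BlockingQueue<String> queue = new ArrayBlockingQueue<>(16);

        // Simulated CQ listener thread pushing a notification into the queue.
        Thread producer = new Thread(() -> {
            try {
                Thread.sleep(50); // pretend a cache update arrives later
                queue.put("entry-updated");
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        producer.start();

        // The consumer is parked inside take() until the update arrives.
        System.out.println("received: " + consumeOne(queue));
        producer.join();
    }
}
```

From the client's point of view this behaves like a push: the thread sleeps in
take() and wakes only when the listener actually enqueues something.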

Andrey

> Date: Mon, 27 Jul 2015 11:43:13 -0700
> Subject: Re: Continuous queries changes
> From: [email protected]
> To: [email protected]
> 
> Andrey,
> 
> I think your approach works, but it requires periodic polling of the queue.
> Continuous queries provide the ability to get push notifications for
> updates, which in my experience is critical for some use cases.
> 
> -Val
> 
> On Mon, Jul 27, 2015 at 7:35 AM, Andrey Kornev <[email protected]>
> wrote:
> 
> > I wonder if the same result (guaranteed delivery of CQ notifications) can
> > be achieved entirely in the "user space" using the public Ignite API only?
> >
> > For example:
> > - start a server-side CQ and have the listener push the notifications into
> > an IgniteQueue.
> > - have the client connect to the queue and start receiving the
> > notifications.
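
A minimal local model of the two steps above, with a java.util.concurrent queue
standing in for the distributed IgniteQueue and a plain Consumer standing in
for the CQ listener (all names here are hypothetical; the real version would
wire a ContinuousQuery local listener to an ignite.queue(...) instance):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.function.Consumer;

// Sketch of the "user space" recipe: a CQ listener pushes notifications
// into a shared queue, and the client drains that queue with take().
public class CqOverQueue {
    // Step 1: the server-side CQ listener forwards each update into the queue.
    public static Consumer<String> listenerFor(BlockingQueue<String> queue) {
        return evt -> queue.offer(evt);
    }

    // Step 2: the client connects to the queue and receives notifications.
    public static String receive(BlockingQueue<String> queue) {
        try {
            return queue.take();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            throw new IllegalStateException(e);
        }
    }

    public static void main(String[] args) {
        BlockingQueue<String> queue = new ArrayBlockingQueue<>(16);
        listenerFor(queue).accept("put: k=1 v=foo");
        System.out.println("client got: " + receive(queue));
    }
}
```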
> >
> > Regards
> > Andrey
> >
> > > From: [email protected]
> > > Date: Sun, 26 Jul 2015 22:15:09 -0700
> > > Subject: Re: Continuous queries changes
> > > To: [email protected]
> > >
> > > On Sat, Jul 25, 2015 at 8:07 AM, Andrey Kornev <[email protected]>
> > > wrote:
> > >
> > > > Val,
> > > >
> > > > I'm sorry for being obtuse. :)
> > > >
> > > > I was simply wondering if the queue is going to be holding all
> > > > unfiltered events per partition, or will there be a queue per
> > > > continuous query instance per partition? Or is it going to be
> > > > arranged some other way?
> > > >
> > >
> > > I believe that backup queues will have the same filters as primary
> > > queues.
> > >
> > >
> > > > Also, in order to know when it's ok to remove an event from the backup
> > > > queue, wouldn't this approach require maintaining a queue for each
> > > > connected client and having to deal with potentially unbounded queue
> > > > growth if a client struggles to keep up or simply stops acking?
> > > >
> > >
> > > I think the policy for backups should be no different than for the
> > > primaries. As for slow clients, Ignite is capable of automatically
> > > disconnecting them:
> > > http://s.apache.org/managing-slow-clients
> > >
> > > > Isn't this feature getting Ignite into the murky waters of message
> > > > brokers and guaranteed once-only message delivery, with all the
> > > > complexity and overhead that come with it? Besides, in some cases it
> > > > doesn't really matter if some updates are missing, while in others it
> > > > is only necessary to be able to detect a missing update. I wouldn't
> > > > want to have to pay for something I don't need...
> > > >
> > >
> > > I believe that the newly proposed approach will be optional and you
> > > will still be able to get event notifications in a non-fault-tolerant
> > > manner the old way.
> > >
> > >
> > > >
> > > > Thanks
> > > > Andrey
> > > >
> > > > > Date: Fri, 24 Jul 2015 23:40:15 -0700
> > > > > Subject: Re: Continuous queries changes
> > > > > From: [email protected]
> > > > > To: [email protected]
> > > > >
> > > > > Andrey,
> > > > >
> > > > > I mean the queue of update events that is collected on backup nodes
> > > > > and flushed to listening clients in case of a topology change.
> > > > >
> > > > > -Val
> > > > >
> > > > > On Fri, Jul 24, 2015 at 3:16 PM, Andrey Kornev
> > > > > <[email protected]> wrote:
> > > > >
> > > > > > Val,
> > > > > >
> > > > > > Could you please elaborate on what you mean by the "updates queue"
> > > > > > you plan to maintain on the primary and backup nodes?
> > > > > >
> > > > > > Thanks
> > > > > > Andrey
> > > > > >
> > > > > > > Date: Fri, 24 Jul 2015 17:51:48 +0300
> > > > > > > Subject: Re: Continuous queries changes
> > > > > > > From: [email protected]
> > > > > > > To: [email protected]
> > > > > > >
> > > > > > > Val,
> > > > > > >
> > > > > > > I have an idea on how to clean up the backup queue.
> > > > > > >
> > > > > > > 1. Our communication uses acks, so you can determine [on the
> > > > > > > server node] whether the client received the update from the
> > > > > > > local server or not. I think you can easily change the existing
> > > > > > > code to get notifications when an ack is received (this way you
> > > > > > > don't need to introduce your own acks).
> > > > > > > 2. How do you know when to evict from a backup? Each message
> > > > > > > that the client acks corresponds to some per-partition long
> > > > > > > value you talked about above (great idea, btw!). Servers can
> > > > > > > exchange the per-partition long value that corresponds to the
> > > > > > > latest acked message, and that's how backups can safely evict
> > > > > > > from the queue.
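
Yakov's eviction scheme could look roughly like this (a single-partition,
in-memory sketch with hypothetical names; in the real system the queue would
live on the backup node and the acked watermark would arrive from the primary):

```java
import java.util.ArrayDeque;

// Model of point 2: a backup keeps a queue of (counter, event) pairs for one
// partition and evicts everything up to the latest counter the client acked.
public class BackupQueue {
    // One queued update: the per-partition counter plus the event payload.
    static final class Update {
        final long counter;
        final String event;
        Update(long counter, String event) { this.counter = counter; this.event = event; }
    }

    private final ArrayDeque<Update> queue = new ArrayDeque<>();

    public void enqueue(long counter, String event) {
        queue.addLast(new Update(counter, event));
    }

    // Called when servers exchange the latest acked counter for this
    // partition: everything at or below the watermark is safe to drop.
    public void evictUpTo(long ackedCounter) {
        while (!queue.isEmpty() && queue.peekFirst().counter <= ackedCounter)
            queue.pollFirst();
    }

    public int size() { return queue.size(); }

    public static void main(String[] args) {
        BackupQueue bq = new BackupQueue();
        bq.enqueue(1, "a");
        bq.enqueue(2, "b");
        bq.enqueue(3, "c");
        bq.evictUpTo(2); // client acked counter 2 -> "a" and "b" are dropped
        System.out.println("remaining: " + bq.size());
    }
}
```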
> > > > > > >
> > > > > > > Let me know if you have questions.
> > > > > > >
> > > > > > > --Yakov
> > > > > > >
> > > > > > > 2015-07-23 8:53 GMT+03:00 Valentin Kulichenko
> > > > > > > <[email protected]>:
> > > > > > > > Igniters,
> > > > > > > >
> > > > > > > > Based on discussions with our users I came to the conclusion
> > > > > > > > that our continuous query implementation is not good enough
> > > > > > > > for use cases with strong consistency requirements, because
> > > > > > > > there is a possibility of losing updates in case of a
> > > > > > > > topology change.
> > > > > > > >
> > > > > > > > So I started working on
> > > > > > > > https://issues.apache.org/jira/browse/IGNITE-426
> > > > > > > > and I hope to finish it in a couple of weeks so that we can
> > > > > > > > include it in the next release.
> > > > > > > >
> > > > > > > > I have the following design in mind:
> > > > > > > >
> > > > > > > >    - Maintain the updates queue on backup node(s) in addition
> > > > > > > >    to the primary node.
> > > > > > > >    - If the primary node crashes, this queue is flushed to
> > > > > > > >    listening clients.
> > > > > > > >    - To avoid duplicate notifications we will have a
> > > > > > > >    per-partition update counter. Once an entry in some
> > > > > > > >    partition is updated, the counter for this partition is
> > > > > > > >    incremented on both the primary and the backups. The value
> > > > > > > >    of this counter is also sent along with the update to the
> > > > > > > >    client, which also maintains a copy of this mapping. If at
> > > > > > > >    some moment it receives an update with a counter less than
> > > > > > > >    the one in its local map, this update is a duplicate and
> > > > > > > >    can be discarded.
> > > > > > > >    - Also need to figure out the best way to clean the backup
> > > > > > > >    queue if topology is stable. Will be happy to hear any
> > > > > > > >    suggestions :)
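
The per-partition counter dedup described in the third bullet might look
roughly like this on the client side (hypothetical names, single-threaded
sketch):

```java
import java.util.HashMap;
import java.util.Map;

// Client-side duplicate filter: keep the highest counter seen per partition
// and discard any update whose counter does not exceed it (e.g. an update
// replayed from a backup queue after the primary failed).
public class DedupFilter {
    private final Map<Integer, Long> lastSeen = new HashMap<>();

    // Returns true if the update is new and should be delivered,
    // false if it is a duplicate.
    public boolean accept(int partition, long counter) {
        long prev = lastSeen.getOrDefault(partition, 0L);
        if (counter <= prev)
            return false; // already delivered -- discard
        lastSeen.put(partition, counter);
        return true;
    }

    public static void main(String[] args) {
        DedupFilter f = new DedupFilter();
        System.out.println(f.accept(0, 1)); // first update for partition 0
        System.out.println(f.accept(0, 1)); // replayed duplicate -> rejected
    }
}
```

Note the counters are tracked independently per partition, which is why the
order only needs to be consistent within a partition, not across the cache.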
> > > > > > > >
> > > > > > > > To make all this work we also need to implement
> > > > > > > > thread-per-partition mode in the atomic cache, because
> > > > > > > > currently the order of updates on backup nodes can differ
> > > > > > > > from the primary node:
> > > > > > > > https://issues.apache.org/jira/browse/IGNITE-104. I'm
> > > > > > > > already working on this.
> > > > > > > >
> > > > > > > > Feel free to share your thoughts!
> > > > > > > >
> > > > > > > > -Val
> > > > > > > >
> > > > > >
> > > > > >
> > > >
> >
> >