Re: [infinispan-dev] Design of Remote Hot Rod events - round 2

2014-01-07 Thread Galder Zamarreño

On Dec 19, 2013, at 1:15 PM, Emmanuel Bernard  wrote:

> On Thu 2013-12-19  9:46, Galder Zamarreño wrote:
>>> == Example of continuous query atop remote listeners
>>> 
>>> Thinking about how to implement continuous query atop this
>>> infrastructure I am missing a few things.
>>> 
>>> The primary problem is that I don't want to enlist a filter id per
>>> continuous query I want to run. Not only that but I'd love to be able to
>>> add a continuous query on the fly and disable it on the fly as well per
>>> client. For that filters and converters are not flexible enough.
>>> 
>>> What is missing is the ability to pass parameters from the client to
>>> the remote filter and remote converter. Parameters should be provided
>>> *per client*. Say Client 1 register the continuous query listener with
>>> "where age > 19" and client 2 registers the CQ listener with "where name
>>> = emmanuel". The filter knowing for which client it filters, it will be 
>>> able to only
>>> return the keys that match the query.
>> 
>> This all sounds a bit like remote code exectution to me? You're asking for 
>> the client to pass some kind of executable thing that acts as a filter. 
>> That's a separate feature IMO, which I believe @Tristan is looking into. 
>> Once that's in place, I'm happy to enhance stuff in the remote event side to 
>> support it.
> 
> I don't think you are correct.
> This is not remote execution in the sense of arbitrary code driven by
> the client. Remote execution will likely be triggered, render a
> result and stop. It will not send matching events in a continuous fashion.
> Plus remote execution will likely involve dynamic languages and I'm not
> sure we want to go that route for things like continuous query.

Well, it's remote execution of a condition, which is a type of code :)

>From Hot Rod perspective, until remote code execution is in place, we could 
>add a list of N byte[] that are treated as parameters for the filter, and the 
>filter deciphers what those mean. So, in your case, there would be only 1 
>parameter, a byte[], and it would be unmarshalled by the filter into "where 
>age > 19". If multiple clients add the same parameter, we could use the same 
>filter instance, that's assuming equality can be calculated based on the 
>contents of the byte[] by Hot Rod.

WRT your suggestion to activate/deactivate the continous query on the fly, 
can't we achieve that with registration/deregistration of listeners? Or are you 
trying to avoid all the set up involved in sending the listener registration 
stuff around? Adding activate/deactivate would require two new operations.

Cheers,

> ___
> infinispan-dev mailing list
> infinispan-dev@lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/infinispan-dev


--
Galder Zamarreño
gal...@redhat.com
twitter.com/galderz

Project Lead, Escalante
http://escalante.io

Engineer, Infinispan
http://infinispan.org


___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev


Re: [infinispan-dev] Design of Remote Hot Rod events - round 2

2013-12-20 Thread Emmanuel Bernard
On Fri 2013-12-20 13:08, Dan Berindei wrote:
> On Fri, Dec 20, 2013 at 12:20 PM, Emmanuel Bernard
> > OK. I am not sure that the fact that the parameter is a string in both
> > cases and that there is only one parameter and not multiple is
> > particularly relevant
> > to the general problem at hand.
> >
> >
> I don't think so either, I was just asking if you think there is any
> difference between what you're asking for and what Radim was asking for...

To be honest I wrote my email before reading the rest of the thread and
did not pay much attention to Radim's email :) So I'd say maybe ;)
At any rate that's for a different use case, so it gives an additional
data point.
___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev


Re: [infinispan-dev] Design of Remote Hot Rod events - round 2

2013-12-20 Thread Dan Berindei
On Fri, Dec 20, 2013 at 12:20 PM, Emmanuel Bernard
wrote:

> On Fri 2013-12-20 12:09, Dan Berindei wrote:
> > On Thu, Dec 19, 2013 at 8:43 PM, Emmanuel Bernard <
> emman...@hibernate.org>wrote:
> >
> > > On Thu 2013-12-19 19:57, Dan Berindei wrote:
> > > > On Thu, Dec 19, 2013 at 2:15 PM, Emmanuel Bernard <
> > > emman...@hibernate.org>wrote:
> > > >
> > > > > On Thu 2013-12-19  9:46, Galder Zamarreño wrote:
> > > > > > > == Example of continuous query atop remote listeners
> > > > > > >
> > > > > > > Thinking about how to implement continuous query atop this
> > > > > > > infrastructure I am missing a few things.
> > > > > > >
> > > > > > > The primary problem is that I don't want to enlist a filter id
> per
> > > > > > > continuous query I want to run. Not only that but I'd love to
> be
> > > able
> > > > > to
> > > > > > > add a continuous query on the fly and disable it on the fly as
> > > well per
> > > > > > > client. For that filters and converters are not flexible
> enough.
> > > > > > >
> > > > > > > What is missing is the ability to pass parameters from the
> client
> > > to
> > > > > > > the remote filter and remote converter. Parameters should be
> > > provided
> > > > > > > *per client*. Say Client 1 register the continuous query
> listener
> > > with
> > > > > > > "where age > 19" and client 2 registers the CQ listener with
> "where
> > > > > name
> > > > > > > = emmanuel". The filter knowing for which client it filters, it
> > > will
> > > > > be able to only
> > > > > > > return the keys that match the query.
> > > > > >
> > > > > > This all sounds a bit like remote code exectution to me? You're
> > > asking
> > > > > for the client to pass some kind of executable thing that acts as a
> > > filter.
> > > > > That's a separate feature IMO, which I believe @Tristan is looking
> > > into.
> > > > > Once that's in place, I'm happy to enhance stuff in the remote
> event
> > > side
> > > > > to support it.
> > > > >
> > > > > I don't think you are correct.
> > > > > This is not remote execution in the sense of arbitrary code driven
> by
> > > > > the client. Remote execution will likely be triggered, render a
> > > > > result and stop. It will not send matching events in a continuous
> > > fashion.
> > > > > Plus remote execution will likely involve dynamic languages and
> I'm not
> > > > > sure we want to go that route for things like continuous query.
> > > > >
> > > >
> > > > To be clear, this is exactly the same as the filter parameters that
> Radim
> > > > was asking for, right? From Infinispan's point of view, the filter
> just
> > > > takes a String parameter, and the fact that that string can be
> parsed by
> > > > the filter in a particular language is irrelevant.
> > >
> > > Not sure what string you are talking about. The filterid?
> > >
> > >
> > I was referring to the condition string: "where age > 19", or "where name
> > = emmanuel".
>
> OK. I am not sure that the fact that the parameter is a string in both
> cases and that there is only one parameter and not multiple is
> particularly relevant
> to the general problem at hand.
>
>
I don't think so either, I was just asking if you think there is any
difference between what you're asking for and what Radim was asking for...
___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev

Re: [infinispan-dev] Design of Remote Hot Rod events - round 2

2013-12-20 Thread Emmanuel Bernard
On Fri 2013-12-20 12:09, Dan Berindei wrote:
> On Thu, Dec 19, 2013 at 8:43 PM, Emmanuel Bernard 
> wrote:
> 
> > On Thu 2013-12-19 19:57, Dan Berindei wrote:
> > > On Thu, Dec 19, 2013 at 2:15 PM, Emmanuel Bernard <
> > emman...@hibernate.org>wrote:
> > >
> > > > On Thu 2013-12-19  9:46, Galder Zamarreño wrote:
> > > > > > == Example of continuous query atop remote listeners
> > > > > >
> > > > > > Thinking about how to implement continuous query atop this
> > > > > > infrastructure I am missing a few things.
> > > > > >
> > > > > > The primary problem is that I don't want to enlist a filter id per
> > > > > > continuous query I want to run. Not only that but I'd love to be
> > able
> > > > to
> > > > > > add a continuous query on the fly and disable it on the fly as
> > well per
> > > > > > client. For that filters and converters are not flexible enough.
> > > > > >
> > > > > > What is missing is the ability to pass parameters from the client
> > to
> > > > > > the remote filter and remote converter. Parameters should be
> > provided
> > > > > > *per client*. Say Client 1 register the continuous query listener
> > with
> > > > > > "where age > 19" and client 2 registers the CQ listener with "where
> > > > name
> > > > > > = emmanuel". The filter knowing for which client it filters, it
> > will
> > > > be able to only
> > > > > > return the keys that match the query.
> > > > >
> > > > > This all sounds a bit like remote code exectution to me? You're
> > asking
> > > > for the client to pass some kind of executable thing that acts as a
> > filter.
> > > > That's a separate feature IMO, which I believe @Tristan is looking
> > into.
> > > > Once that's in place, I'm happy to enhance stuff in the remote event
> > side
> > > > to support it.
> > > >
> > > > I don't think you are correct.
> > > > This is not remote execution in the sense of arbitrary code driven by
> > > > the client. Remote execution will likely be triggered, render a
> > > > result and stop. It will not send matching events in a continuous
> > fashion.
> > > > Plus remote execution will likely involve dynamic languages and I'm not
> > > > sure we want to go that route for things like continuous query.
> > > >
> > >
> > > To be clear, this is exactly the same as the filter parameters that Radim
> > > was asking for, right? From Infinispan's point of view, the filter just
> > > takes a String parameter, and the fact that that string can be parsed by
> > > the filter in a particular language is irrelevant.
> >
> > Not sure what string you are talking about. The filterid?
> >
> >
> I was referring to the condition string: "where age > 19", or "where name
> = emmanuel".

OK. I am not sure that the fact that the parameter is a string in both
cases and that there is only one parameter and not multiple is particularly 
relevant
to the general problem at hand.
___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev


Re: [infinispan-dev] Design of Remote Hot Rod events - round 2

2013-12-20 Thread Dan Berindei
On Fri, Dec 13, 2013 at 4:11 PM, Radim Vansa  wrote:

> On 12/13/2013 02:44 PM, Galder Zamarreño wrote:
> > On Dec 6, 2013, at 10:45 AM, Radim Vansa  wrote:
> >
> >> Hi,
> >>
> >> 1) IMO, filtering for specific key is a very important use case.
> Registering a filterId is a very powerful feature, but as long as you don't
> provide runtime parameter for this filter, you cannot implement one-key
> filtering.
> > What do you mean by runtime parameter exactly? Can you give a concrete
> example of what you want to achieve that is not possible with what I've
> written up?
>
> As I stressed, if the client wants to listen for events on key_123456,
> then you can deploy a filter matching key_{number} (and additional
> constraints) but the 123456 is not known at deployment time.
>
> >
> >> 2) setting ack/no ack in listener, and then configuring server-wise
> whether you should ack each / only last event sounds weird. I'd replace the
> boolean with enum { NO_ACK, ACK_EACH, ACK_LAST }.
> > Makes a lot of sense, +1.
> >
> >> 3) should the client provide source id when registering listener or
> when starting RemoteCacheManager? No API for that.
> > Every operation will require a source ID from now on, so clients must
> provide it from first operation sent to the server. From a Java client
> perspective, you'd have this from the start via the configuration.
> >
> >> 4) clustered events design does not specify any means to replicating
> the clustered event listener - all it does is that you register the
> listener on one node and the other nodes then route events to this node,
> until the node dies/deregisters the listener. No replication. Please
> specify, how should it piggyback on clustered events, and how should the
> listener list be replicated.
> > In clustered listeners, the other nodes you talk about are gonna need to
> know about the clustered listeners so that they route events. Some kind of
> information about these clustered listeners will need to be sent around the
> cluster. The exact details are probably implementation details but we have
> a clustered registry already in place for this kind of things. In any case,
> it'd make a lot of sense that both use cases reuse as much as logic in this
> area.
>
> OK, this is probably the desired behaviour, it just is not covered by
> the Clustered Events design draft. Probably something to add - I'll ping
> Mircea about that. And you're right that it would make a lot of sense to
> have shared structure for the listeners, and two implementations of the
> delivery boy (one to the node where a clustered event has been
> registered and second to local component handling HotRod clients).
>
> >
> >> 5) non-acked events: how exactly do you expect the ack data to be
> replicated, and updated? I see three options:
> >> A) Let non-acked list be a part of the listener record in replicated
> cache, and the primary owner which executes the event should update these
> via delta messages. I guess for proper reliability it should add operation
> record synchronously before confirming the operation to the originator, and
> then it might asynchronously remove it after the ack from client. When a
> node becomes primary owner, it should send events to client for all
> non-acked events.
> >> B) Having the non-acked list attached directly to cache entry (updating
> it together with regular backup), and then asynchronously updating the
> non-ack list after ack comes
> >> C) Separate cache for acks by entry keys, similar to B, consistent hash
> synced with the main entry cache
> > Definitely not B. I don't wanna tie the internal cache entry to the
> ACKs. The two should be independent. Either C or A. For C, you'd wished to
> have a single cache for all listeners+caches, but you'd have to think about
> the keys and to have the same consistent hash, you'd have to have same
> keys. A might be better, but you certainly don't want this ACK info in a
> replicated structure. You'd want ACKs in a distributed cache preferably,
> and clustered listener info in the clustered replicated registry.
> There already is some CH implementation which aims at sharing the same
> distribution for all caches, SyncConsistentHash. Is there some problem
> with C and forcing this for the caches? Dan?
>
>
I'm not sure what the exact requirements would be here.

SyncConsistentHashFactory does ensure that the same key is mapped to the
same owners in all the caches using it, most of the time. However, it
requires both caches to have the same members, and since topologies aren't
applied at exactly the same time there will be periods when the owners in
the two caches won't match.
___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev

Re: [infinispan-dev] Design of Remote Hot Rod events - round 2

2013-12-20 Thread Dan Berindei
On Thu, Dec 19, 2013 at 8:43 PM, Emmanuel Bernard wrote:

> On Thu 2013-12-19 19:57, Dan Berindei wrote:
> > On Thu, Dec 19, 2013 at 2:15 PM, Emmanuel Bernard <
> emman...@hibernate.org>wrote:
> >
> > > On Thu 2013-12-19  9:46, Galder Zamarreño wrote:
> > > > > == Example of continuous query atop remote listeners
> > > > >
> > > > > Thinking about how to implement continuous query atop this
> > > > > infrastructure I am missing a few things.
> > > > >
> > > > > The primary problem is that I don't want to enlist a filter id per
> > > > > continuous query I want to run. Not only that but I'd love to be
> able
> > > to
> > > > > add a continuous query on the fly and disable it on the fly as
> well per
> > > > > client. For that filters and converters are not flexible enough.
> > > > >
> > > > > What is missing is the ability to pass parameters from the client
> to
> > > > > the remote filter and remote converter. Parameters should be
> provided
> > > > > *per client*. Say Client 1 register the continuous query listener
> with
> > > > > "where age > 19" and client 2 registers the CQ listener with "where
> > > name
> > > > > = emmanuel". The filter knowing for which client it filters, it
> will
> > > be able to only
> > > > > return the keys that match the query.
> > > >
> > > > This all sounds a bit like remote code exectution to me? You're
> asking
> > > for the client to pass some kind of executable thing that acts as a
> filter.
> > > That's a separate feature IMO, which I believe @Tristan is looking
> into.
> > > Once that's in place, I'm happy to enhance stuff in the remote event
> side
> > > to support it.
> > >
> > > I don't think you are correct.
> > > This is not remote execution in the sense of arbitrary code driven by
> > > the client. Remote execution will likely be triggered, render a
> > > result and stop. It will not send matching events in a continuous
> fashion.
> > > Plus remote execution will likely involve dynamic languages and I'm not
> > > sure we want to go that route for things like continuous query.
> > >
> >
> > To be clear, this is exactly the same as the filter parameters that Radim
> > was asking for, right? From Infinispan's point of view, the filter just
> > takes a String parameter, and the fact that that string can be parsed by
> > the filter in a particular language is irrelevant.
>
> Not sure what string you are talking about. The filterid?
>
>
I was referring to the condition string: "where age > 19", or "where name
= emmanuel".
___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev

Re: [infinispan-dev] Design of Remote Hot Rod events - round 2

2013-12-19 Thread Emmanuel Bernard
On Thu 2013-12-19 19:57, Dan Berindei wrote:
> On Thu, Dec 19, 2013 at 2:15 PM, Emmanuel Bernard 
> wrote:
> 
> > On Thu 2013-12-19  9:46, Galder Zamarreño wrote:
> > > > == Example of continuous query atop remote listeners
> > > >
> > > > Thinking about how to implement continuous query atop this
> > > > infrastructure I am missing a few things.
> > > >
> > > > The primary problem is that I don't want to enlist a filter id per
> > > > continuous query I want to run. Not only that but I'd love to be able
> > to
> > > > add a continuous query on the fly and disable it on the fly as well per
> > > > client. For that filters and converters are not flexible enough.
> > > >
> > > > What is missing is the ability to pass parameters from the client to
> > > > the remote filter and remote converter. Parameters should be provided
> > > > *per client*. Say Client 1 register the continuous query listener with
> > > > "where age > 19" and client 2 registers the CQ listener with "where
> > name
> > > > = emmanuel". The filter knowing for which client it filters, it will
> > be able to only
> > > > return the keys that match the query.
> > >
> > > This all sounds a bit like remote code exectution to me? You're asking
> > for the client to pass some kind of executable thing that acts as a filter.
> > That's a separate feature IMO, which I believe @Tristan is looking into.
> > Once that's in place, I'm happy to enhance stuff in the remote event side
> > to support it.
> >
> > I don't think you are correct.
> > This is not remote execution in the sense of arbitrary code driven by
> > the client. Remote execution will likely be triggered, render a
> > result and stop. It will not send matching events in a continuous fashion.
> > Plus remote execution will likely involve dynamic languages and I'm not
> > sure we want to go that route for things like continuous query.
> >
> 
> To be clear, this is exactly the same as the filter parameters that Radim
> was asking for, right? From Infinispan's point of view, the filter just
> takes a String parameter, and the fact that that string can be parsed by
> the filter in a particular language is irrelevant.

Not sure what string you are talking about. The filterid?
___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev


Re: [infinispan-dev] Design of Remote Hot Rod events - round 2

2013-12-19 Thread Dan Berindei
On Thu, Dec 19, 2013 at 2:15 PM, Emmanuel Bernard wrote:

> On Thu 2013-12-19  9:46, Galder Zamarreño wrote:
> > > == Example of continuous query atop remote listeners
> > >
> > > Thinking about how to implement continuous query atop this
> > > infrastructure I am missing a few things.
> > >
> > > The primary problem is that I don't want to enlist a filter id per
> > > continuous query I want to run. Not only that but I'd love to be able
> to
> > > add a continuous query on the fly and disable it on the fly as well per
> > > client. For that filters and converters are not flexible enough.
> > >
> > > What is missing is the ability to pass parameters from the client to
> > > the remote filter and remote converter. Parameters should be provided
> > > *per client*. Say Client 1 register the continuous query listener with
> > > "where age > 19" and client 2 registers the CQ listener with "where
> name
> > > = emmanuel". The filter knowing for which client it filters, it will
> be able to only
> > > return the keys that match the query.
> >
> > This all sounds a bit like remote code exectution to me? You're asking
> for the client to pass some kind of executable thing that acts as a filter.
> That's a separate feature IMO, which I believe @Tristan is looking into.
> Once that's in place, I'm happy to enhance stuff in the remote event side
> to support it.
>
> I don't think you are correct.
> This is not remote execution in the sense of arbitrary code driven by
> the client. Remote execution will likely be triggered, render a
> result and stop. It will not send matching events in a continuous fashion.
> Plus remote execution will likely involve dynamic languages and I'm not
> sure we want to go that route for things like continuous query.
>

To be clear, this is exactly the same as the filter parameters that Radim
was asking for, right? From Infinispan's point of view, the filter just
takes a String parameter, and the fact that that string can be parsed by
the filter in a particular language is irrelevant.
___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev

Re: [infinispan-dev] Design of Remote Hot Rod events - round 2

2013-12-19 Thread Tristan Tarrant
Hi Galder,

regarding the "Client Identification" paragraph I was thinking of the 
connection there might be with authenticated session ids I describe in 
the security document [1] in order to reduce the potential proliferation 
of identifiers.

In the "security case" it is the server who is generating a unique 
session identifier at the end of a successful auth handshake. Such an 
identifier is then sent back from the client for all subsequent requests 
to avoid re-authentication. My plan was to tie this session id merely to 
the user's principal but this would not allow recognizing a 
dropped/restarted client.

My proposal is therefore that the HotRod protocol should add just one 
element which can serve both purposes.

Tristan

[1] https://github.com/infinispan/infinispan/wiki/Security

On 12/05/2013 05:16 PM, Galder Zamarreño wrote:
> Hi all,
>
> Re: https://github.com/infinispan/infinispan/wiki/Remote-Hot-Rod-Events
>
> Thanks a lot for the feedback provided in last thread. It was very 
> constructive feedback :)
>
> I've just finished updating the design document with the feedback provided in 
> the previous email thread. Can you please have another read and let the list 
> know what you think of it?
>
> Side note: The scope has got bigger (with the addition of 
> filters/converters), so we might need to consider whether we want all 
> features in next version, or whether some parts could be branched out to next 
> iterations.
>
> Cheers,
> --
> Galder Zamarreño
> gal...@redhat.com
> twitter.com/galderz
>
> Project Lead, Escalante
> http://escalante.io
>
> Engineer, Infinispan
> http://infinispan.org
>
>
> ___
> infinispan-dev mailing list
> infinispan-dev@lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/infinispan-dev
>
>

___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev


Re: [infinispan-dev] Design of Remote Hot Rod events - round 2

2013-12-19 Thread Emmanuel Bernard
On Thu 2013-12-19  9:46, Galder Zamarreño wrote:
> > == Example of continuous query atop remote listeners
> > 
> > Thinking about how to implement continuous query atop this
> > infrastructure I am missing a few things.
> > 
> > The primary problem is that I don't want to enlist a filter id per
> > continuous query I want to run. Not only that but I'd love to be able to
> > add a continuous query on the fly and disable it on the fly as well per
> > client. For that filters and converters are not flexible enough.
> > 
> > What is missing is the ability to pass parameters from the client to
> > the remote filter and remote converter. Parameters should be provided
> > *per client*. Say Client 1 register the continuous query listener with
> > "where age > 19" and client 2 registers the CQ listener with "where name
> > = emmanuel". The filter knowing for which client it filters, it will be 
> > able to only
> > return the keys that match the query.
> 
> This all sounds a bit like remote code exectution to me? You're asking for 
> the client to pass some kind of executable thing that acts as a filter. 
> That's a separate feature IMO, which I believe @Tristan is looking into. Once 
> that's in place, I'm happy to enhance stuff in the remote event side to 
> support it.

I don't think you are correct.
This is not remote execution in the sense of arbitrary code driven by
the client. Remote execution will likely be triggered, render a
result and stop. It will not send matching events in a continuous fashion.
Plus remote execution will likely involve dynamic languages and I'm not
sure we want to go that route for things like continuous query.
___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev


Re: [infinispan-dev] Design of Remote Hot Rod events - round 2

2013-12-19 Thread Galder Zamarreño

On Dec 13, 2013, at 5:08 PM, Emmanuel Bernard  wrote:

> Much better. Some more feedback.
> 
> == Filter and converter
> 
> I am wondering if there is a benefit in separating filters and
> converters. It does add some understanding complexity so a single
> ServerListener with the methods from RemoteFilter / RemoveConverter
> might be better.

I think the two are fairly tight in, so I think that might be a good idea.

> Should filter / converter impls have a @FilterId() / @ConverterId or
> should even the id be configured as late binding?

That falls into the configuration part which I hoped @Tristan would provide 
some insight on how we'd plug these filter/convert impls into the server. If 
they're defined via Infinispan configuration, the ID could be provided via 
there. If they're found via service loader pattern, annotations could be used, 
or callback methods. The former is probably more elegant.

> == Custom event contents
> 
> I an not following, is the custom content always received as byte[] by
> the client?  Or is the generic parameter K in
> RemoteCacheEntryCustomEvent the actual return type of getEventData() ?
> 
> I'd love (in Java) to declare the returned type of the converter via
> generics in the RemoteConverter impl (class SomeRC extends
> RemoteConverter {}) and somehow use that information on
> the client side.

Event data is a byte[] as formatted by the converter on the server side. On the 
client side, I can only expose this as is, as byte[]. The callback would then 
need to disect it somehow.

The K in RemoteCacheEntryCustomEvent should be removed really, since with 
these custom event calls, there's no key provided.

> == Example of continuous query atop remote listeners
> 
> Thinking about how to implement continuous query atop this
> infrastructure I am missing a few things.
> 
> The primary problem is that I don't want to enlist a filter id per
> continuous query I want to run. Not only that but I'd love to be able to
> add a continuous query on the fly and disable it on the fly as well per
> client. For that filters and converters are not flexible enough.
> 
> What is missing is the ability to pass parameters from the client to
> the remote filter and remote converter. Parameters should be provided
> *per client*. Say Client 1 register the continuous query listener with
> "where age > 19" and client 2 registers the CQ listener with "where name
> = emmanuel". The filter knowing for which client it filters, it will be able 
> to only
> return the keys that match the query.

This all sounds a bit like remote code exectution to me? You're asking for the 
client to pass some kind of executable thing that acts as a filter. That's a 
separate feature IMO, which I believe @Tristan is looking into. Once that's in 
place, I'm happy to enhance stuff in the remote event side to support it.

>From a remote event support perspective, server side static filter/converters 
>is as far as I'd like to take it.

> Another useful but not fundamental notion is the ability for a client to
> enlist the same filter id / converter id tuple with different
> parameters. The same client might be interested in several different
> queries.
> 
> BTW have you considered some kind of multiplexing mechanism in case the
> several client listeners on the same client are interested in the same
> event?

If they're in the same client, they'd have the same source ID, so just sending 
it once would be doable, assuming they're not using different converters.

Cheers,

> 
> Emmanuel
> 
> On Thu 2013-12-05 17:16, Galder Zamarreño wrote:
>> Hi all,
>> 
>> Re: https://github.com/infinispan/infinispan/wiki/Remote-Hot-Rod-Events
>> 
>> Thanks a lot for the feedback provided in last thread. It was very 
>> constructive feedback :)
>> 
>> I've just finished updating the design document with the feedback provided 
>> in the previous email thread. Can you please have another read and let the 
>> list know what you think of it?
>> 
>> Side note: The scope has got bigger (with the addition of 
>> filters/converters), so we might need to consider whether we want all 
>> features in next version, or whether some parts could be branched out to 
>> next iterations.
>> 
>> Cheers,
>> --
>> Galder Zamarreño
>> gal...@redhat.com
>> twitter.com/galderz
>> 
>> Project Lead, Escalante
>> http://escalante.io
>> 
>> Engineer, Infinispan
>> http://infinispan.org
>> 
>> 
>> ___
>> infinispan-dev mailing list
>> infinispan-dev@lists.jboss.org
>> https://lists.jboss.org/mailman/listinfo/infinispan-dev
> ___
> infinispan-dev mailing list
> infinispan-dev@lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/infinispan-dev


--
Galder Zamarreño
gal...@redhat.com
twitter.com/galderz

Project Lead, Escalante
http://escalante.io

Engineer, Infinispan
http://infinispan.org


___
infinispan-dev mailing list
infinispan-de

Re: [infinispan-dev] Design of Remote Hot Rod events - round 2

2013-12-19 Thread Galder Zamarreño

On Dec 13, 2013, at 4:33 PM, Radim Vansa  wrote:

> On 12/13/2013 03:49 PM, Galder Zamarreño wrote:
>> On Dec 6, 2013, at 4:17 PM, Radim Vansa  wrote:
>> 
>>> Btw., when Hot Rod fails to hit the primary owner, should the non-owner 
>>> propagate the SourceID to primary owner somehow? Or is in this case 
>>> acceptable notifying the listener about its own change?
>> If the call lands in a non-owner, it's probably simpler for the non-owner to 
>> send the notification there and then. ACK information tracking would 
>> probably be distributed, in which case it'd need to deal with potential 
>> failure of the primary owner.
> 
> I don't think I understand that properly. The node responsible for notifying 
> the client is either primary owner, or operation origin (where the write has 
> landed in fact). Previously, we were saying that the responsible node should 
> be the primary owner - now you say that the origin. When the cluster is 
> accessed only remotely, it does not have much performance impact (as these 
> two are mostly the same node), but with cluster in compatibility mode the 
> decision could affect the performance a lot.

Normally, a Hot Rod call would land on the owner of the key, which then is able 
to send the notification itself. The question, as you've rightly asked, is what 
to do when the call lands in a non-owner. Is that because the owner node has 
failed? Is it because the client has a flaky issue with the hash function and 
not all calls are directed to non-owners? Rather than spending time forcing the 
owner to be the one that sends the notification, a non-owner might say: 
something weird happened here, I should have received this call, but just in 
case, I'll send notifications linked to it.

> So, do you think that this should be the origin (easier to implement, with 
> access to distributed ack registry it can retrieve the information, but with 
> higher latency as the ack info is probably affine to the entry itself)
> or primary owner (in this case you'd have to propagate the source ID with the 
> write command).
> Btw., what should be the source ID for operations coming from library mode? 
> JGroups address of the node?

Personally, I think remote eventing is complex enough as it is to add 
compatibility in this first release because there's no event infrastructure for 
memcached nor REST. We should focus on getting remote events right for Hot Rod.

> 
> Radim
> 
>> 
>> Cheers,
>> --
>> Galder Zamarreño
>> gal...@redhat.com
>> twitter.com/galderz
>> 
>> Project Lead, Escalante
>> http://escalante.io
>> 
>> Engineer, Infinispan
>> http://infinispan.org
>> 
> 
> 
> -- 
> Radim Vansa 
> JBoss DataGrid QA
> 


--
Galder Zamarreño
gal...@redhat.com
twitter.com/galderz

Project Lead, Escalante
http://escalante.io

Engineer, Infinispan
http://infinispan.org


___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev


Re: [infinispan-dev] Design of Remote Hot Rod events - round 2

2013-12-18 Thread Galder Zamarreño

On Dec 13, 2013, at 3:11 PM, Radim Vansa  wrote:

> On 12/13/2013 02:44 PM, Galder Zamarreño wrote:
>> On Dec 6, 2013, at 10:45 AM, Radim Vansa  wrote:
>> 
>>> Hi,
>>> 
>>> 1) IMO, filtering for specific key is a very important use case. 
>>> Registering a filterId is a very powerful feature, but as long as you don't 
>>> provide runtime parameter for this filter, you cannot implement one-key 
>>> filtering.
>> What do you mean by runtime parameter exactly? Can you give a concrete 
>> example of what you want to achieve that is not possible with what I've 
>> written up?
> 
> As I stressed, if the client wants to listen for events on key_123456, then 
> you can deploy a filter matching key_{number} (and additional constraints) 
> but the 123456 is not known at deployment time.

True, that's a limitation of the current approach, but I don't see it crucial 
as long as we have some static filtering in place. The feature itself is 
already pretty large, so I'd consider this (dynamic filtering) at a later point.

> 
>> 
>>> 2) setting ack/no ack in listener, and then configuring server-wise whether 
>>> you should ack each / only last event sounds weird. I'd replace the boolean 
>>> with enum { NO_ACK, ACK_EACH, ACK_LAST }.
>> Makes a lot of sense, +1.
>> 
>>> 3) should the client provide source id when registering listener or when 
>>> starting RemoteCacheManager? No API for that.
>> Every operation will require a source ID from now on, so clients must 
>> provide it from first operation sent to the server. From a Java client 
>> perspective, you'd have this from the start via the configuration.
>> 
>>> 4) clustered events design does not specify any means to replicating the 
>>> clustered event listener - all it does is that you register the listener on 
>>> one node and the other nodes then route events to this node, until the node 
>>> dies/deregisters the listener. No replication. Please specify, how should 
>>> it piggyback on clustered events, and how should the listener list be 
>>> replicated.
>> In clustered listeners, the other nodes you talk about are gonna need to 
>> know about the clustered listeners so that they route events. Some kind of 
>> information about these clustered listeners will need to be sent around the 
>> cluster. The exact details are probably implementation details but we have a 
>> clustered registry already in place for this kind of things. In any case, 
>> it'd make a lot of sense that both use cases reuse as much as logic in this 
>> area.
> 
> OK, this is probably the desired behaviour, it just is not covered by the 
> Clustered Events design draft. Probably something to add - I'll ping Mircea 
> about that. And you're right that it would make a lot of sense to have shared 
> structure for the listeners, and two implementations of the delivery boy (one 
> to the node where a clustered event has been registered and second to local 
> component handling HotRod clients).
> 
>> 
>>> 5) non-acked events: how exactly do you expect the ack data to be 
>>> replicated, and updated? I see three options:
>>> A) Let non-acked list be a part of the listener record in replicated cache, 
>>> and the primary owner which executes the event should update these via 
>>> delta messages. I guess for proper reliability it should add operation 
>>> record synchronously before confirming the operation to the originator, and 
>>> then it might asynchronously remove it after the ack from client. When a 
>>> node becomes primary owner, it should send events to client for all 
>>> non-acked events.
>>> B) Having the non-acked list attached directly to cache entry (updating it 
>>> together with regular backup), and then asynchronously updating the non-ack 
>>> list after ack comes
>>> C) Separate cache for acks by entry keys, similar to B, consistent hash 
>>> synced with the main entry cache
>> Definitely not B. I don't wanna tie the internal cache entry to the ACKs. 
>> The two should be independent. Either C or A. For C, you'd wished to have a 
>> single cache for all listeners+caches, but you'd have to think about the 
>> keys and to have the same consistent hash, you'd have to have same keys. A 
>> might be better, but you certainly don't want this ACK info in a replicated 
>> structure. You'd want ACKs in a distributed cache preferably, and clustered 
>> listener info in the clustered replicated registry.
> There already is some CH implementation which aims at sharing the same 
> distribution for all caches, SyncConsistentHash. Is there some problem with C 
> and forcing this for the caches? Dan?
> 
> Radim


--
Galder Zamarreño
gal...@redhat.com
twitter.com/galderz

Project Lead, Escalante
http://escalante.io

Engineer, Infinispan
http://infinispan.org


___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev


Re: [infinispan-dev] Design of Remote Hot Rod events - round 2

2013-12-13 Thread Emmanuel Bernard
Much better. Some more feedback.

== Filter and converter

I am wondering if there is a benefit in separating filters and
converters. It does add some understanding complexity so a single
ServerListener with the methods from RemoteFilter / RemoveConverter
might be better.

Should filter / converter impls have a @FilterId() / @ConverterId or
should even the id be configured as late binding?

== Custom event contents

I an not following, is the custom content always received as byte[] by
the client?  Or is the generic parameter K in
RemoteCacheEntryCustomEvent the actual return type of getEventData() ?

I'd love (in Java) to declare the returned type of the converter via
generics in the RemoteConverter impl (class SomeRC extends
RemoteConverter {}) and somehow use that information on
the client side.

== Example of continuous query atop remote listeners

Thinking about how to implement continuous query atop this
infrastructure I am missing a few things.

The primary problem is that I don't want to enlist a filter id per
continuous query I want to run. Not only that but I'd love to be able to
add a continuous query on the fly and disable it on the fly as well per
client. For that filters and converters are not flexible enough.

What is missing is the ability to pass parameters from the client to
the remote filter and remote converter. Parameters should be provided
*per client*. Say Client 1 register the continuous query listener with
"where age > 19" and client 2 registers the CQ listener with "where name
= emmanuel". The filter knowing for which client it filters, it will be able to 
only
return the keys that match the query.

Another useful but not fundamental notion is the ability for a client to
enlist the same filter id / converter id tuple with different
parameters. The same client might be interested in several different
queries.

BTW have you considered some kind of multiplexing mechanism in case the
several client listeners on the same client are interested in the same
event?

Emmanuel

On Thu 2013-12-05 17:16, Galder Zamarreño wrote:
> Hi all,
> 
> Re: https://github.com/infinispan/infinispan/wiki/Remote-Hot-Rod-Events
> 
> Thanks a lot for the feedback provided in last thread. It was very 
> constructive feedback :)
> 
> I've just finished updating the design document with the feedback provided in 
> the previous email thread. Can you please have another read and let the list 
> know what you think of it?
> 
> Side note: The scope has got bigger (with the addition of 
> filters/converters), so we might need to consider whether we want all 
> features in next version, or whether some parts could be branched out to next 
> iterations.
> 
> Cheers,
> --
> Galder Zamarreño
> gal...@redhat.com
> twitter.com/galderz
> 
> Project Lead, Escalante
> http://escalante.io
> 
> Engineer, Infinispan
> http://infinispan.org
> 
> 
> ___
> infinispan-dev mailing list
> infinispan-dev@lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/infinispan-dev
___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev


Re: [infinispan-dev] Design of Remote Hot Rod events - round 2

2013-12-13 Thread Emmanuel Bernard
I think it's a good idea but that's essentially continuous query.
My guts tell me you still want the low level plumbing and actual
imperative code for additional usecases.

On Fri 2013-12-13 15:51, Galder Zamarreño wrote:
> 
> On Dec 9, 2013, at 6:47 PM, Mircea Markus  wrote:
> 
> > Hey Galder,
> > 
> > Another idea that come up today was to use the QueryDSL for specifying both 
> > the filter and the transformer (the DSL has projection).
> > The query DSL builds an HQL string for which one the server side the filter 
> > can be built on the fly (we already do that with the embedded query DSL).
> > There are some nice advantages of doing this: build the filter and the 
> > listener at runtime, in a language independent manner(assuming query DSL is 
> > migrated), with an API customers are already used to. 
> 
> I'll look into that. Thanks for the heads up.
> 
> Cheers,
> 
> > 
> > 
> > On Dec 6, 2013, at 5:38 PM, Dennis Reed  wrote:
> > 
> >> On 12/06/2013 08:52 AM, Mircea Markus wrote:
> >>> Some notes:
> >>> 
> >>> "This means that the Hot Rod protocol will be extended so that operation 
> >>> headers always carry a Source ID field."
> >>> - shall we add a new intelligence level to handle this? Besides reducing 
> >>> the payload, would allow upgrading the java and Cpp clients independently.
> >> 
> >> Instead of a new intelligence level, if the client told the server what 
> >> features it supports when connecting this could be done more fine-grained,
> >> so that a client could support some subset of functionality (instead of 
> >> being forced to implement the specific extentions in one of the 
> >> pre-defined intelligence levels).
> >> 
> >> -Dennis
> >> 
> >>> In one of our discussions, you've also mentioned that you'd want to use 
> >>> the cluster listeners as a foundation for this functionality. That 
> >>> doesn't seem to be the case from the document, or? Not that it's a bad 
> >>> thing, just that I want to clarify the relation between the two. Another 
> >>> way to handle connection management, based on clustered listeners, would 
> >>> be:
> >>> - the node on which the listeners ID hashes is the only one responsible 
> >>> for piggyback notifications to the remote client
> >>> - it creates a cluster listener to be notified on what to send to the 
> >>> client (can make use cluster listener's filtering and transformer 
> >>> capabilities here)
> >>> 
> >>> Comparing the two approaches: this approach reuses some code (not sure 
> >>> how much, we might be able to do that anyway) from the cluster listeners 
> >>> and also reduces the number of connections required between clint and 
> >>> server, but at the cost of performance/network hops. Also the number of 
> >>> connections a client is required to have hasn't been a problem yet.
> >>> 
> >>> One more note on ST: during ST a node might receive the same notification 
> >>> multiple times (from old owner and new owner). I guess it makes sense 
> >>> documenting that?
> >>> 
> >>> On Dec 5, 2013, at 4:16 PM, Galder Zamarreño  wrote:
> >>> 
>  Hi all,
>  
>  Re: https://github.com/infinispan/infinispan/wiki/Remote-Hot-Rod-Events
>  
>  Thanks a lot for the feedback provided in last thread. It was very 
>  constructive feedback :)
>  
>  I've just finished updating the design document with the feedback 
>  provided in the previous email thread. Can you please have another read 
>  and let the list know what you think of it?
>  
>  Side note: The scope has got bigger (with the addition of 
>  filters/converters), so we might need to consider whether we want all 
>  features in next version, or whether some parts could be branched out to 
>  next iterations.
> >>> +1. Can we include the notification ack in the optionals category?
> >>> What about leaving these as the last bit to be implemented? If time 
> >>> allows (not to delay the release) we can add them, otherwise just add 
> >>> them in future iterations?
> >>> 
> >>> 
>  Cheers,
>  --
>  Galder Zamarreño
>  gal...@redhat.com
>  twitter.com/galderz
>  
>  Project Lead, Escalante
>  http://escalante.io
>  
>  Engineer, Infinispan
>  http://infinispan.org
>  
> >>> Cheers,
> >> 
> >> ___
> >> infinispan-dev mailing list
> >> infinispan-dev@lists.jboss.org
> >> https://lists.jboss.org/mailman/listinfo/infinispan-dev
> > 
> > Cheers,
> > -- 
> > Mircea Markus
> > Infinispan lead (www.infinispan.org)
> > 
> > 
> > 
> > 
> > 
> > ___
> > infinispan-dev mailing list
> > infinispan-dev@lists.jboss.org
> > https://lists.jboss.org/mailman/listinfo/infinispan-dev
> 
> 
> --
> Galder Zamarreño
> gal...@redhat.com
> twitter.com/galderz
> 
> Project Lead, Escalante
> http://escalante.io
> 
> Engineer, Infinispan
> http://infinispan.org
> 
> 
> ___
> infinispan-dev maili

Re: [infinispan-dev] Design of Remote Hot Rod events - round 2

2013-12-13 Thread Radim Vansa
On 12/13/2013 03:49 PM, Galder Zamarreño wrote:
> On Dec 6, 2013, at 4:17 PM, Radim Vansa  wrote:
>
>> Btw., when Hot Rod fails to hit the primary owner, should the non-owner 
>> propagate the SourceID to primary owner somehow? Or is in this case 
>> acceptable notifying the listener about its own change?
> If the call lands in a non-owner, it's probably simpler for the non-owner to 
> send the notification there and then. ACK information tracking would probably 
> be distributed, in which case it'd need to deal with potential failure of the 
> primary owner.

I don't think I understand that properly. The node responsible for 
notifying the client is either primary owner, or operation origin (where 
the write has landed in fact). Previously, we were saying that the 
responsible node should be the primary owner - now you say that the 
origin. When the cluster is accessed only remotely, it does not have 
much performance impact (as these two are mostly the same node), but 
with cluster in compatibility mode the decision could affect the 
performance a lot.
So, do you think that this should be the origin (easier to implement, 
with access to distributed ack registry it can retrieve the information, 
but with higher latency as the ack info is probably affine to the entry 
itself)
or primary owner (in this case you'd have to propagate the source ID 
with the write command).
Btw., what should be the source ID for operations coming from library 
mode? JGroups address of the node?

Radim

>
> Cheers,
> --
> Galder Zamarreño
> gal...@redhat.com
> twitter.com/galderz
>
> Project Lead, Escalante
> http://escalante.io
>
> Engineer, Infinispan
> http://infinispan.org
>


-- 
Radim Vansa 
JBoss DataGrid QA

___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev


Re: [infinispan-dev] Design of Remote Hot Rod events - round 2

2013-12-13 Thread Galder Zamarreño

On Dec 9, 2013, at 6:47 PM, Mircea Markus  wrote:

> Hey Galder,
> 
> Another idea that come up today was to use the QueryDSL for specifying both 
> the filter and the transformer (the DSL has projection).
> The query DSL builds an HQL string for which one the server side the filter 
> can be built on the fly (we already do that with the embedded query DSL).
> There are some nice advantages of doing this: build the filter and the 
> listener at runtime, in a language independent manner(assuming query DSL is 
> migrated), with an API customers are already used to. 

I'll look into that. Thanks for the heads up.

Cheers,

> 
> 
> On Dec 6, 2013, at 5:38 PM, Dennis Reed  wrote:
> 
>> On 12/06/2013 08:52 AM, Mircea Markus wrote:
>>> Some notes:
>>> 
>>> "This means that the Hot Rod protocol will be extended so that operation 
>>> headers always carry a Source ID field."
>>> - shall we add a new intelligence level to handle this? Besides reducing 
>>> the payload, would allow upgrading the java and Cpp clients independently.
>> 
>> Instead of a new intelligence level, if the client told the server what 
>> features it supports when connecting this could be done more fine-grained,
>> so that a client could support some subset of functionality (instead of 
>> being forced to implement the specific extentions in one of the 
>> pre-defined intelligence levels).
>> 
>> -Dennis
>> 
>>> In one of our discussions, you've also mentioned that you'd want to use the 
>>> cluster listeners as a foundation for this functionality. That doesn't seem 
>>> to be the case from the document, or? Not that it's a bad thing, just that 
>>> I want to clarify the relation between the two. Another way to handle 
>>> connection management, based on clustered listeners, would be:
>>> - the node on which the listeners ID hashes is the only one responsible for 
>>> piggyback notifications to the remote client
>>> - it creates a cluster listener to be notified on what to send to the 
>>> client (can make use cluster listener's filtering and transformer 
>>> capabilities here)
>>> 
>>> Comparing the two approaches: this approach reuses some code (not sure how 
>>> much, we might be able to do that anyway) from the cluster listeners and 
>>> also reduces the number of connections required between clint and server, 
>>> but at the cost of performance/network hops. Also the number of connections 
>>> a client is required to have hasn't been a problem yet.
>>> 
>>> One more note on ST: during ST a node might receive the same notification 
>>> multiple times (from old owner and new owner). I guess it makes sense 
>>> documenting that?
>>> 
>>> On Dec 5, 2013, at 4:16 PM, Galder Zamarreño  wrote:
>>> 
 Hi all,
 
 Re: https://github.com/infinispan/infinispan/wiki/Remote-Hot-Rod-Events
 
 Thanks a lot for the feedback provided in last thread. It was very 
 constructive feedback :)
 
 I've just finished updating the design document with the feedback provided 
 in the previous email thread. Can you please have another read and let the 
 list know what you think of it?
 
 Side note: The scope has got bigger (with the addition of 
 filters/converters), so we might need to consider whether we want all 
 features in next version, or whether some parts could be branched out to 
 next iterations.
>>> +1. Can we include the notification ack in the optionals category?
>>> What about leaving these as the last bit to be implemented? If time allows 
>>> (not to delay the release) we can add them, otherwise just add them in 
>>> future iterations?
>>> 
>>> 
 Cheers,
 --
 Galder Zamarreño
 gal...@redhat.com
 twitter.com/galderz
 
 Project Lead, Escalante
 http://escalante.io
 
 Engineer, Infinispan
 http://infinispan.org
 
>>> Cheers,
>> 
>> ___
>> infinispan-dev mailing list
>> infinispan-dev@lists.jboss.org
>> https://lists.jboss.org/mailman/listinfo/infinispan-dev
> 
> Cheers,
> -- 
> Mircea Markus
> Infinispan lead (www.infinispan.org)
> 
> 
> 
> 
> 
> ___
> infinispan-dev mailing list
> infinispan-dev@lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/infinispan-dev


--
Galder Zamarreño
gal...@redhat.com
twitter.com/galderz

Project Lead, Escalante
http://escalante.io

Engineer, Infinispan
http://infinispan.org


___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev


Re: [infinispan-dev] Design of Remote Hot Rod events - round 2

2013-12-13 Thread Galder Zamarreño

On Dec 6, 2013, at 4:17 PM, Radim Vansa  wrote:

> Btw., when Hot Rod fails to hit the primary owner, should the non-owner 
> propagate the SourceID to primary owner somehow? Or is in this case 
> acceptable notifying the listener about its own change?

If the call lands in a non-owner, it's probably simpler for the non-owner to 
send the notification there and then. ACK information tracking would probably 
be distributed, in which case it'd need to deal with potential failure of the 
primary owner.

Cheers,
--
Galder Zamarreño
gal...@redhat.com
twitter.com/galderz

Project Lead, Escalante
http://escalante.io

Engineer, Infinispan
http://infinispan.org


___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev


Re: [infinispan-dev] Design of Remote Hot Rod events - round 2

2013-12-13 Thread Galder Zamarreño

On Dec 6, 2013, at 3:52 PM, Mircea Markus  wrote:

> Some notes:
> 
> "This means that the Hot Rod protocol will be extended so that operation 
> headers always carry a Source ID field."
> - shall we add a new intelligence level to handle this? Besides reducing the 
> payload, would allow upgrading the java and Cpp clients independently.

Hmmm, not sure about the usability of intelligence level in this context. We 
added that flag to deal with different responses from server, so there's always 
a request first. Independent upgrading is possible today, since the server 
talks earlier protocol versions. So, Java clients could talk protocol version 
1.4 and Cpp clients 1.3. When Cpp clients support all features in 1.4, they can 
start talking that protocol.

Also, the source ID can be any byte array. If clients are not interested in 
registering listeners, they can just send a empty byte[]. When they want to 
register listeners, then it becomes important to have a good source ID.

> In one of our discussions, you've also mentioned that you'd want to use the 
> cluster listeners as a foundation for this functionality. That doesn't seem 
> to be the case from the document, or? Not that it's a bad thing, just that I 
> want to clarify the relation between the two.

In both the clustered listeners and remote listeners, we're gonna need some 
kind of cluster wide information about the listeners. For clustered listeners, 
it's needed in order to route events. For remote listeners, pretty much the 
same thing. If a client C registers a listener L in server S1, and then a 
distributed put arrives in S2, somehow S2 is gonna need to know that it needs 
to send an event remotely. I see both using the clustered registry for this.

The difference between the two is really the communication layer and protocol 
used to send those events. For clustered listeners, you'd send them through 
JGroups, for remote events, they go through Netty, formatted as per Hot Rod 
protocol.

> Another way to handle connection management, based on clustered listeners, 
> would be:
> - the node on which the listeners ID hashes is the only one responsible for 
> piggyback notifications to the remote client
> - it creates a cluster listener to be notified on what to send to the client 
> (can make use cluster listener's filtering and transformer capabilities here)

>From a connection management perspective, that's an interesting idea, but 
>there's a slight mistmatch in the filtering area. As shown in the design 
>document for remote events, the clustered listener's  API receives different 
>information to the callback API for remote events. IOW, for remote events, 
>there's stuff such as source ID of the Hot Rod protocol operation and the 
>source ID for which the event is directed. 

The convertor stuff would work probably as it is, but the callback API you 
wrote for metadata might be a bit limited (no previous value, no previous 
metadata)

> Comparing the two approaches: this approach reuses some code (not sure how 
> much, we might be able to do that anyway) from the cluster listeners and also 
> reduces the number of connections required between clint and server, but at 
> the cost of performance/network hops. Also the number of connections a client 
> is required to have hasn't been a problem yet.
> 
> One more note on ST: during ST a node might receive the same notification 
> multiple times (from old owner and new owner). I guess it makes sense 
> documenting that?

I already mentioned something along those lines in the `Non-`ACK`d events` 
section.

Cheers,

> 
> On Dec 5, 2013, at 4:16 PM, Galder Zamarreño  wrote:
> 
>> Hi all,
>> 
>> Re: https://github.com/infinispan/infinispan/wiki/Remote-Hot-Rod-Events
>> 
>> Thanks a lot for the feedback provided in last thread. It was very 
>> constructive feedback :)
>> 
>> I've just finished updating the design document with the feedback provided 
>> in the previous email thread. Can you please have another read and let the 
>> list know what you think of it?
>> 
>> Side note: The scope has got bigger (with the addition of 
>> filters/converters), so we might need to consider whether we want all 
>> features in next version, or whether some parts could be branched out to 
>> next iterations.
> 
> +1. Can we include the notification ack in the optionals category?
> What about leaving these as the last bit to be implemented? If time allows 
> (not to delay the release) we can add them, otherwise just add them in future 
> iterations?
> 
> 
>> 
>> Cheers,
>> --
>> Galder Zamarreño
>> gal...@redhat.com
>> twitter.com/galderz
>> 
>> Project Lead, Escalante
>> http://escalante.io
>> 
>> Engineer, Infinispan
>> http://infinispan.org
>> 
> 
> Cheers,
> -- 
> Mircea Markus
> Infinispan lead (www.infinispan.org)
> 
> 
> 
> 


--
Galder Zamarreño
gal...@redhat.com
twitter.com/galderz

Project Lead, Escalante
http://escalante.io

Engineer, Infinispan
http://infinispan.org


___

Re: [infinispan-dev] Design of Remote Hot Rod events - round 2

2013-12-13 Thread Radim Vansa
On 12/13/2013 02:44 PM, Galder Zamarreño wrote:
> On Dec 6, 2013, at 10:45 AM, Radim Vansa  wrote:
>
>> Hi,
>>
>> 1) IMO, filtering for specific key is a very important use case. Registering 
>> a filterId is a very powerful feature, but as long as you don't provide 
>> runtime parameter for this filter, you cannot implement one-key filtering.
> What do you mean by runtime parameter exactly? Can you give a concrete 
> example of what you want to achieve that is not possible with what I've 
> written up?

As I stressed, if the client wants to listen for events on key_123456, 
then you can deploy a filter matching key_{number} (and additional 
constraints) but the 123456 is not known at deployment time.

>
>> 2) setting ack/no ack in listener, and then configuring server-wise whether 
>> you should ack each / only last event sounds weird. I'd replace the boolean 
>> with enum { NO_ACK, ACK_EACH, ACK_LAST }.
> Makes a lot of sense, +1.
>
>> 3) should the client provide source id when registering listener or when 
>> starting RemoteCacheManager? No API for that.
> Every operation will require a source ID from now on, so clients must provide 
> it from first operation sent to the server. From a Java client perspective, 
> you'd have this from the start via the configuration.
>
>> 4) clustered events design does not specify any means to replicating the 
>> clustered event listener - all it does is that you register the listener on 
>> one node and the other nodes then route events to this node, until the node 
>> dies/deregisters the listener. No replication. Please specify, how should it 
>> piggyback on clustered events, and how should the listener list be 
>> replicated.
> In clustered listeners, the other nodes you talk about are gonna need to know 
> about the clustered listeners so that they route events. Some kind of 
> information about these clustered listeners will need to be sent around the 
> cluster. The exact details are probably implementation details but we have a 
> clustered registry already in place for this kind of things. In any case, 
> it'd make a lot of sense that both use cases reuse as much as logic in this 
> area.

OK, this is probably the desired behaviour, it just is not covered by 
the Clustered Events design draft. Probably something to add - I'll ping 
Mircea about that. And you're right that it would make a lot of sense to 
have shared structure for the listeners, and two implementations of the 
delivery boy (one to the node where a clustered event has been 
registered and second to local component handling HotRod clients).

>
>> 5) non-acked events: how exactly do you expect the ack data to be 
>> replicated, and updated? I see three options:
>> A) Let non-acked list be a part of the listener record in replicated cache, 
>> and the primary owner which executes the event should update these via delta 
>> messages. I guess for proper reliability it should add operation record 
>> synchronously before confirming the operation to the originator, and then it 
>> might asynchronously remove it after the ack from client. When a node 
>> becomes primary owner, it should send events to client for all non-acked 
>> events.
>> B) Having the non-acked list attached directly to cache entry (updating it 
>> together with regular backup), and then asynchronously updating the non-ack 
>> list after ack comes
>> C) Separate cache for acks by entry keys, similar to B, consistent hash 
>> synced with the main entry cache
> Definitely not B. I don't wanna tie the internal cache entry to the ACKs. The 
> two should be independent. Either C or A. For C, you'd wished to have a 
> single cache for all listeners+caches, but you'd have to think about the keys 
> and to have the same consistent hash, you'd have to have same keys. A might 
> be better, but you certainly don't want this ACK info in a replicated 
> structure. You'd want ACKs in a distributed cache preferably, and clustered 
> listener info in the clustered replicated registry.
There already is some CH implementation which aims at sharing the same 
distribution for all caches, SyncConsistentHash. Is there some problem 
with C and forcing this for the caches? Dan?

Radim
___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev


Re: [infinispan-dev] Design of Remote Hot Rod events - round 2

2013-12-13 Thread Galder Zamarreño

On Dec 6, 2013, at 10:45 AM, Radim Vansa  wrote:

> Hi,
> 
> 1) IMO, filtering for specific key is a very important use case. Registering 
> a filterId is a very powerful feature, but as long as you don't provide 
> runtime parameter for this filter, you cannot implement one-key filtering.

What do you mean by runtime parameter exactly? Can you give a concrete example 
of what you want to achieve that is not possible with what I've written up?

> 2) setting ack/no ack in listener, and then configuring server-wise whether 
> you should ack each / only last event sounds weird. I'd replace the boolean 
> with enum { NO_ACK, ACK_EACH, ACK_LAST }.

Makes a lot of sense, +1.

> 3) should the client provide source id when registering listener or when 
> starting RemoteCacheManager? No API for that.

Every operation will require a source ID from now on, so clients must provide 
it from first operation sent to the server. From a Java client perspective, 
you'd have this from the start via the configuration.

> 4) clustered events design does not specify any means to replicating the 
> clustered event listener - all it does is that you register the listener on 
> one node and the other nodes then route events to this node, until the node 
> dies/deregisters the listener. No replication. Please specify, how should it 
> piggyback on clustered events, and how should the listener list be replicated.

In clustered listeners, the other nodes you talk about are gonna need to know 
about the clustered listeners so that they route events. Some kind of 
information about these clustered listeners will need to be sent around the 
cluster. The exact details are probably implementation details but we have a 
clustered registry already in place for this kind of things. In any case, it'd 
make a lot of sense that both use cases reuse as much as logic in this area.

> 5) non-acked events: how exactly do you expect the ack data to be replicated, 
> and updated? I see three options:
> A) Let non-acked list be a part of the listener record in replicated cache, 
> and the primary owner which executes the event should update these via delta 
> messages. I guess for proper reliability it should add operation record 
> synchronously before confirming the operation to the originator, and then it 
> might asynchronously remove it after the ack from client. When a node becomes 
> primary owner, it should send events to client for all non-acked events.
> B) Having the non-acked list attached directly to cache entry (updating it 
> together with regular backup), and then asynchronously updating the non-ack 
> list after ack comes
> C) Separate cache for acks by entry keys, similar to B, consistent hash 
> synced with the main entry cache

Definitely not B. I don't wanna tie the internal cache entry to the ACKs. The 
two should be independent. Either C or A. For C, you'd wished to have a single 
cache for all listeners+caches, but you'd have to think about the keys and to 
have the same consistent hash, you'd have to have same keys. A might be better, 
but you certainly don't want this ACK info in a replicated structure. You'd 
want ACKs in a distributed cache preferably, and clustered listener info in the 
clustered replicated registry.

Cheers,

> 
> Radim
> 
> 
> On 12/05/2013 05:16 PM, Galder Zamarreño wrote:
>> Hi all,
>> 
>> Re: https://github.com/infinispan/infinispan/wiki/Remote-Hot-Rod-Events
>> 
>> Thanks a lot for the feedback provided in last thread. It was very 
>> constructive feedback :)
>> 
>> I've just finished updating the design document with the feedback provided 
>> in the previous email thread. Can you please have another read and let the 
>> list know what you think of it?
>> 
>> Side note: The scope has got bigger (with the addition of 
>> filters/converters), so we might need to consider whether we want all 
>> features in next version, or whether some parts could be branched out to 
>> next iterations.
>> 
>> Cheers,
>> --
>> Galder Zamarreño
>> gal...@redhat.com
>> twitter.com/galderz
>> 
>> Project Lead, Escalante
>> http://escalante.io
>> 
>> Engineer, Infinispan
>> http://infinispan.org
>> 
> 
> 
> -- 
> Radim Vansa 
> JBoss DataGrid QA
> 


--
Galder Zamarreño
gal...@redhat.com
twitter.com/galderz

Project Lead, Escalante
http://escalante.io

Engineer, Infinispan
http://infinispan.org


___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev


Re: [infinispan-dev] Design of Remote Hot Rod events - round 2

2013-12-09 Thread Mircea Markus
Hey Galder,

Another idea that come up today was to use the QueryDSL for specifying both the 
filter and the transformer (the DSL has projection).
The query DSL builds an HQL string for which one the server side the filter can 
be built on the fly (we already do that with the embedded query DSL).
There are some nice advantages of doing this: build the filter and the listener 
at runtime, in a language independent manner(assuming query DSL is migrated), 
with an API customers are already used to. 


On Dec 6, 2013, at 5:38 PM, Dennis Reed  wrote:

> On 12/06/2013 08:52 AM, Mircea Markus wrote:
>> Some notes:
>> 
>> "This means that the Hot Rod protocol will be extended so that operation 
>> headers always carry a Source ID field."
>> - shall we add a new intelligence level to handle this? Besides reducing the 
>> payload, would allow upgrading the java and Cpp clients independently.
> 
> Instead of a new intelligence level, if the client told the server what 
> features it supports when connecting this could be done more fine-grained,
> so that a client could support some subset of functionality (instead of 
> being forced to implement the specific extentions in one of the 
> pre-defined intelligence levels).
> 
> -Dennis
> 
>> In one of our discussions, you've also mentioned that you'd want to use the 
>> cluster listeners as a foundation for this functionality. That doesn't seem 
>> to be the case from the document, or? Not that it's a bad thing, just that I 
>> want to clarify the relation between the two. Another way to handle 
>> connection management, based on clustered listeners, would be:
>> - the node on which the listeners ID hashes is the only one responsible for 
>> piggyback notifications to the remote client
>> - it creates a cluster listener to be notified on what to send to the client 
>> (can make use cluster listener's filtering and transformer capabilities here)
>> 
>> Comparing the two approaches: this approach reuses some code (not sure how 
>> much, we might be able to do that anyway) from the cluster listeners and 
>> also reduces the number of connections required between clint and server, 
>> but at the cost of performance/network hops. Also the number of connections 
>> a client is required to have hasn't been a problem yet.
>> 
>> One more note on ST: during ST a node might receive the same notification 
>> multiple times (from old owner and new owner). I guess it makes sense 
>> documenting that?
>> 
>> On Dec 5, 2013, at 4:16 PM, Galder Zamarreño  wrote:
>> 
>>> Hi all,
>>> 
>>> Re: https://github.com/infinispan/infinispan/wiki/Remote-Hot-Rod-Events
>>> 
>>> Thanks a lot for the feedback provided in last thread. It was very 
>>> constructive feedback :)
>>> 
>>> I've just finished updating the design document with the feedback provided 
>>> in the previous email thread. Can you please have another read and let the 
>>> list know what you think of it?
>>> 
>>> Side note: The scope has got bigger (with the addition of 
>>> filters/converters), so we might need to consider whether we want all 
>>> features in next version, or whether some parts could be branched out to 
>>> next iterations.
>> +1. Can we include the notification ack in the optionals category?
>> What about leaving these as the last bit to be implemented? If time allows 
>> (not to delay the release) we can add them, otherwise just add them in 
>> future iterations?
>> 
>> 
>>> Cheers,
>>> --
>>> Galder Zamarreño
>>> gal...@redhat.com
>>> twitter.com/galderz
>>> 
>>> Project Lead, Escalante
>>> http://escalante.io
>>> 
>>> Engineer, Infinispan
>>> http://infinispan.org
>>> 
>> Cheers,
> 
> ___
> infinispan-dev mailing list
> infinispan-dev@lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/infinispan-dev

Cheers,
-- 
Mircea Markus
Infinispan lead (www.infinispan.org)





___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev


Re: [infinispan-dev] Design of Remote Hot Rod events - round 2

2013-12-06 Thread Dennis Reed
On 12/06/2013 08:52 AM, Mircea Markus wrote:
> Some notes:
>
> "This means that the Hot Rod protocol will be extended so that operation 
> headers always carry a Source ID field."
> - shall we add a new intelligence level to handle this? Besides reducing the 
> payload, would allow upgrading the java and Cpp clients independently.

Instead of a new intelligence level, if the client told the server what 
features it supports when connecting this could be done more fine-grained,
so that a client could support some subset of functionality (instead of 
being forced to implement the specific extentions in one of the 
pre-defined intelligence levels).

-Dennis

> In one of our discussions, you've also mentioned that you'd want to use the 
> cluster listeners as a foundation for this functionality. That doesn't seem 
> to be the case from the document, or? Not that it's a bad thing, just that I 
> want to clarify the relation between the two. Another way to handle 
> connection management, based on clustered listeners, would be:
> - the node on which the listeners ID hashes is the only one responsible for 
> piggyback notifications to the remote client
> - it creates a cluster listener to be notified on what to send to the client 
> (can make use cluster listener's filtering and transformer capabilities here)
>
> Comparing the two approaches: this approach reuses some code (not sure how 
> much, we might be able to do that anyway) from the cluster listeners and also 
> reduces the number of connections required between clint and server, but at 
> the cost of performance/network hops. Also the number of connections a client 
> is required to have hasn't been a problem yet.
>
> One more note on ST: during ST a node might receive the same notification 
> multiple times (from old owner and new owner). I guess it makes sense 
> documenting that?
>
> On Dec 5, 2013, at 4:16 PM, Galder Zamarreño  wrote:
>
>> Hi all,
>>
>> Re: https://github.com/infinispan/infinispan/wiki/Remote-Hot-Rod-Events
>>
>> Thanks a lot for the feedback provided in last thread. It was very 
>> constructive feedback :)
>>
>> I've just finished updating the design document with the feedback provided 
>> in the previous email thread. Can you please have another read and let the 
>> list know what you think of it?
>>
>> Side note: The scope has got bigger (with the addition of 
>> filters/converters), so we might need to consider whether we want all 
>> features in next version, or whether some parts could be branched out to 
>> next iterations.
> +1. Can we include the notification ack in the optionals category?
> What about leaving these as the last bit to be implemented? If time allows 
> (not to delay the release) we can add them, otherwise just add them in future 
> iterations?
>
>
>> Cheers,
>> --
>> Galder Zamarreño
>> gal...@redhat.com
>> twitter.com/galderz
>>
>> Project Lead, Escalante
>> http://escalante.io
>>
>> Engineer, Infinispan
>> http://infinispan.org
>>
> Cheers,

___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev


Re: [infinispan-dev] Design of Remote Hot Rod events - round 2

2013-12-06 Thread Radim Vansa
On 12/06/2013 03:52 PM, Mircea Markus wrote:
> Some notes:
>
> "This means that the Hot Rod protocol will be extended so that operation 
> headers always carry a Source ID field."
> - shall we add a new intelligence level to handle this? Besides reducing the 
> payload, would allow upgrading the java and Cpp clients independently.
Is a new intelligence level required? Protocol 1.4 should specify a bit 
flag "carrying SourceID/not carrying SourceID". If the client is dumb, 
it simply leaves the flag to default and as it won't register any 
listener, the server wont bother him with any new information. If it 
starts registering listeners, the server should start sending events. We 
may speak about it as L4, if you wish, but there's no need to add new 
intelligence level to the protocol.

Btw., when Hot Rod fails to hit the primary owner, should the non-owner 
propagate the SourceID to primary owner somehow? Or is in this case 
acceptable notifying the listener about its own change?

Radim
>
> In one of our discussions, you've also mentioned that you'd want to use the 
> cluster listeners as a foundation for this functionality. That doesn't seem 
> to be the case from the document, or? Not that it's a bad thing, just that I 
> want to clarify the relation between the two. Another way to handle 
> connection management, based on clustered listeners, would be:
> - the node on which the listeners ID hashes is the only one responsible for 
> piggyback notifications to the remote client
> - it creates a cluster listener to be notified on what to send to the client 
> (can make use cluster listener's filtering and transformer capabilities here)
>
> Comparing the two approaches: this approach reuses some code (not sure how 
> much, we might be able to do that anyway) from the cluster listeners and also 
> reduces the number of connections required between clint and server, but at 
> the cost of performance/network hops. Also the number of connections a client 
> is required to have hasn't been a problem yet.
>
> One more note on ST: during ST a node might receive the same notification 
> multiple times (from old owner and new owner). I guess it makes sense 
> documenting that?
>
> On Dec 5, 2013, at 4:16 PM, Galder Zamarreño  wrote:
>
>> Hi all,
>>
>> Re: https://github.com/infinispan/infinispan/wiki/Remote-Hot-Rod-Events
>>
>> Thanks a lot for the feedback provided in last thread. It was very 
>> constructive feedback :)
>>
>> I've just finished updating the design document with the feedback provided 
>> in the previous email thread. Can you please have another read and let the 
>> list know what you think of it?
>>
>> Side note: The scope has got bigger (with the addition of 
>> filters/converters), so we might need to consider whether we want all 
>> features in next version, or whether some parts could be branched out to 
>> next iterations.
> +1. Can we include the notification ack in the optionals category?
> What about leaving these as the last bit to be implemented? If time allows 
> (not to delay the release) we can add them, otherwise just add them in future 
> iterations?
>
>
>> Cheers,
>> --
>> Galder Zamarreño
>> gal...@redhat.com
>> twitter.com/galderz
>>
>> Project Lead, Escalante
>> http://escalante.io
>>
>> Engineer, Infinispan
>> http://infinispan.org
>>
> Cheers,


-- 
Radim Vansa 
JBoss DataGrid QA

___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev


Re: [infinispan-dev] Design of Remote Hot Rod events - round 2

2013-12-06 Thread Mircea Markus
Some notes:

"This means that the Hot Rod protocol will be extended so that operation 
headers always carry a Source ID field."
- shall we add a new intelligence level to handle this? Besides reducing the 
payload, would allow upgrading the java and Cpp clients independently.

In one of our discussions, you've also mentioned that you'd want to use the 
cluster listeners as a foundation for this functionality. That doesn't seem to 
be the case from the document, or? Not that it's a bad thing, just that I want 
to clarify the relation between the two. Another way to handle connection 
management, based on clustered listeners, would be:
- the node on which the listeners ID hashes is the only one responsible for 
piggyback notifications to the remote client
- it creates a cluster listener to be notified on what to send to the client 
(can make use cluster listener's filtering and transformer capabilities here)

Comparing the two approaches: this approach reuses some code (not sure how 
much, we might be able to do that anyway) from the cluster listeners and also 
reduces the number of connections required between clint and server, but at the 
cost of performance/network hops. Also the number of connections a client is 
required to have hasn't been a problem yet.

One more note on ST: during ST a node might receive the same notification 
multiple times (from old owner and new owner). I guess it makes sense 
documenting that?

On Dec 5, 2013, at 4:16 PM, Galder Zamarreño  wrote:

> Hi all,
> 
> Re: https://github.com/infinispan/infinispan/wiki/Remote-Hot-Rod-Events
> 
> Thanks a lot for the feedback provided in last thread. It was very 
> constructive feedback :)
> 
> I've just finished updating the design document with the feedback provided in 
> the previous email thread. Can you please have another read and let the list 
> know what you think of it?
> 
> Side note: The scope has got bigger (with the addition of 
> filters/converters), so we might need to consider whether we want all 
> features in next version, or whether some parts could be branched out to next 
> iterations.

+1. Can we include the notification ack in the optionals category?
What about leaving these as the last bit to be implemented? If time allows (not 
to delay the release) we can add them, otherwise just add them in future 
iterations?


> 
> Cheers,
> --
> Galder Zamarreño
> gal...@redhat.com
> twitter.com/galderz
> 
> Project Lead, Escalante
> http://escalante.io
> 
> Engineer, Infinispan
> http://infinispan.org
> 

Cheers,
-- 
Mircea Markus
Infinispan lead (www.infinispan.org)





___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev


Re: [infinispan-dev] Design of Remote Hot Rod events - round 2

2013-12-06 Thread Radim Vansa
Hi,

1) IMO, filtering for specific key is a very important use case. 
Registering a filterId is a very powerful feature, but as long as you 
don't provide runtime parameter for this filter, you cannot implement 
one-key filtering.

2) setting ack/no ack in listener, and then configuring server-wise 
whether you should ack each / only last event sounds weird. I'd replace 
the boolean with enum { NO_ACK, ACK_EACH, ACK_LAST }.

3) should the client provide source id when registering listener or when 
starting RemoteCacheManager? No API for that.

4) clustered events design does not specify any means to replicating the 
clustered event listener - all it does is that you register the listener 
on one node and the other nodes then route events to this node, until 
the node dies/deregisters the listener. No replication. Please specify, 
how should it piggyback on clustered events, and how should the listener 
list be replicated.

5) non-acked events: how exactly do you expect the ack data to be 
replicated, and updated? I see three options:
A) Let non-acked list be a part of the listener record in replicated 
cache, and the primary owner which executes the event should update 
these via delta messages. I guess for proper reliability it should add 
operation record synchronously before confirming the operation to the 
originator, and then it might asynchronously remove it after the ack 
from client. When a node becomes primary owner, it should send events to 
client for all non-acked events.
B) Having the non-acked list attached directly to cache entry (updating 
it together with regular backup), and then asynchronously updating the 
non-ack list after ack comes
C) Separate cache for acks by entry keys, similar to B, consistent hash 
synced with the main entry cache

Radim


On 12/05/2013 05:16 PM, Galder Zamarreño wrote:
> Hi all,
>
> Re: https://github.com/infinispan/infinispan/wiki/Remote-Hot-Rod-Events
>
> Thanks a lot for the feedback provided in last thread. It was very 
> constructive feedback :)
>
> I've just finished updating the design document with the feedback provided in 
> the previous email thread. Can you please have another read and let the list 
> know what you think of it?
>
> Side note: The scope has got bigger (with the addition of 
> filters/converters), so we might need to consider whether we want all 
> features in next version, or whether some parts could be branched out to next 
> iterations.
>
> Cheers,
> --
> Galder Zamarreño
> gal...@redhat.com
> twitter.com/galderz
>
> Project Lead, Escalante
> http://escalante.io
>
> Engineer, Infinispan
> http://infinispan.org
>


-- 
Radim Vansa 
JBoss DataGrid QA

___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev