Re: [infinispan-dev] Infinispan client/server architecture based on gRPC

2018-05-30 Thread Adrian Nistor
Yes, the client needs that hash, but that does not necessarily mean it 
has to compute it itself.
The hash should be applied to the storage format, which might be 
different from the format the client sees. So hash computation could be 
done on the server; just a thought.
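For concreteness, client-side hash-aware routing (as discussed below in this thread) can be sketched as follows. This is illustrative only: it assumes a MurmurHash3-style 32-bit hash over the key's serialized bytes and a simple modulo mapping, whereas the real client uses the consistent-hash segment layout received from the server.

```java
import java.nio.charset.StandardCharsets;

// Illustrative sketch (not Infinispan's exact algorithm): hash the key's
// serialized bytes with MurmurHash3 (32-bit x86 variant) and map the
// result to one of numSegments segments.
public class SegmentRouting {

    static int murmur3_32(byte[] data, int seed) {
        int h = seed, i = 0, len = data.length;
        while (len - i >= 4) {
            int k = (data[i] & 0xff) | ((data[i + 1] & 0xff) << 8)
                  | ((data[i + 2] & 0xff) << 16) | ((data[i + 3] & 0xff) << 24);
            k *= 0xcc9e2d51; k = Integer.rotateLeft(k, 15); k *= 0x1b873593;
            h ^= k; h = Integer.rotateLeft(h, 13); h = h * 5 + 0xe6546b64;
            i += 4;
        }
        int k = 0;
        switch (len - i) {          // mix in the 1-3 trailing bytes
            case 3: k ^= (data[i + 2] & 0xff) << 16;
            case 2: k ^= (data[i + 1] & 0xff) << 8;
            case 1: k ^= data[i] & 0xff;
                    k *= 0xcc9e2d51; k = Integer.rotateLeft(k, 15); k *= 0x1b873593;
                    h ^= k;
        }
        h ^= len;                   // finalization mix
        h ^= h >>> 16; h *= 0x85ebca6b;
        h ^= h >>> 13; h *= 0xc2b2ae35;
        h ^= h >>> 16;
        return h;
    }

    // Simplified segment mapping; the real client maps hash ranges to
    // segment owners via the consistent hash published by the server.
    static int segmentOf(byte[] keyBytes, int numSegments) {
        return (murmur3_32(keyBytes, 0) & Integer.MAX_VALUE) % numSegments;
    }

    public static void main(String[] args) {
        byte[] key = "user:42".getBytes(StandardCharsets.UTF_8);
        int s = segmentOf(key, 256);
        System.out.println("segment = " + s);
        assert s >= 0 && s < 256;
        assert s == segmentOf(key, 256); // deterministic: same bytes, same segment
    }
}
```

Whether this runs on the client (as today with Hot Rod) or on the server (as suggested above) is exactly the open question: the hash must be applied to the storage-format bytes, which the client may never see.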

On 05/30/2018 02:47 PM, Radim Vansa wrote:
> On 05/30/2018 12:46 PM, Adrian Nistor wrote:
>> Thanks for clarifying this Galder.
>> Yes, the network layer is indeed the culprit and the purpose of this
>> experiment.
>>
>> What is the approach you envision regarding the IDL? Should we strive
>> for a pure IDL definition of the service? That could be an interesting
>> approach that would make it possible for a third party to generate
>> their own Infinispan gRPC client in any new language that we do not
>> already support, just based on the IDL. And maybe using a different
>> gRPC implementation if they do not find the one from Google suitable.
>>
>> I was not suggesting we should do type transformation or anything on
>> the client side that would require an extra layer of code on top of
>> what grpc generates for the client, so maybe a pure IDL based service
>> definition would indeed be possible, without extra helpers. No type
>> transformation, just type information. Exposing the type info that
>> comes from the server would be enough, a lot better than dumbing
>> everything down to a byte[].
> I may be wrong but key transformation on the client is necessary for correct
> hash-aware routing, isn't it? We need to get the byte array for each key and
> apply murmur hash there (IIUC even when we use protobuf as the storage
> format, the segment is based on the raw protobuf bytes, right?).
>
> Radim
>
>> Adrian
>>
>> On 05/30/2018 12:16 PM, Galder Zamarreno wrote:
>>> On Tue, May 29, 2018 at 8:57 PM Adrian Nistor <anis...@redhat.com> wrote:
>>>
>>>  Vittorio, a few remarks regarding your statement "...The
>>>  alternative to this is to develop a protostream equivalent for
>>>  each supported language and it doesn't seem really feasible to me."
>>>
>>>  No way! That's a big misunderstanding. We do not need to
>>>  re-implement the protostream library in C/C++/C# or any new
>>>  supported language.
>>>  Protostream is just for Java and it is compatible with Google's
>>>  protobuf lib we already use in the other clients. We can continue
>>>  using Google's protobuf lib for these clients, with or without gRPC.
>>>  Protostream does not handle protobuf services as gRPC does, but
>>>  we can add support for that with little effort.
>>>
>>>  The real problem here is if we want to replace our hot rod
>>>  invocation protocol with gRPC to save on the effort of
>>>  implementing and maintaining hot rod in all those clients. I
>>>  wonder why the obvious question is being avoided in this thread.
>>>
>>>
>>> ^ It is not being avoided. I stated it quite clearly when I replied
>>> but maybe not with enough detail. So, I said:
>>>
>>>>   The biggest problem I see in our client/server architecture is the
>>> ability to quickly deliver features/APIs across multiple language
>>> clients. Both Vittorio and I have seen how long it takes to implement
>>> all the different features available in Java client and port them to
>>> Node.js, C/C++/C#...etc. This effort led by Vittorio is trying to
>>> improve on that by having some of that work done for us. Granted, not
>>> all of it will be done, but it should give us some good foundations
>>> on which to build.
>>>
>>> To expand on it a bit further: the reason it takes us longer to get
>>> different features in is because each client implements its own
>>> network layer, parses the protocol and does type transformations
>>> (between byte[] and whatever the client expects).
>>>
>>> IMO, the most costly things there are getting the network layer right
>>> (from experience with Node.js, it has taken a while to do so) and
>>> parsing work (not only parsing itself, but doing it in an efficient
>>> way). Network layer also includes load balancing, failover, cluster
>>> failover...etc.
>>>
>>>  From past experience, transforming from byte[] to what the client
>>> expects has never really been very problematic for me. What's been
>>> difficult here is coming up with the encoding architecture that Gustavo
>>> led, whose aim was to improve on the i

Re: [infinispan-dev] Infinispan client/server architecture based on gRPC

2018-05-30 Thread Adrian Nistor

Thanks for clarifying this Galder.
Yes, the network layer is indeed the culprit and the purpose of this 
experiment.


What is the approach you envision regarding the IDL? Should we strive 
for a pure IDL definition of the service? That could be an interesting 
approach that would make it possible for a third party to generate their 
own Infinispan gRPC client in any new language that we do not already 
support, just based on the IDL. And maybe using a different gRPC 
implementation if they do not find the one from Google suitable.


I was not suggesting we should do type transformation or anything on the 
client side that would require an extra layer of code on top of what 
grpc generates for the client, so maybe a pure IDL based service 
definition would indeed be possible, without extra helpers. No type 
transformation, just type information. Exposing the type info that comes 
from the server would be enough, a lot better than dumbing everything 
down to a byte[].
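A pure-IDL, fully typed service of the kind described above might look like the sketch below. Everything in it (service and message names, the use of google.protobuf.Any to carry type information in its type_url) is hypothetical illustration, not an agreed design:

```proto
syntax = "proto3";

package infinispan.sketch;

import "google/protobuf/any.proto";

// Hypothetical: a generic cache service whose IDL references no user
// types. Any's type_url exposes the type info coming from the server.
service CacheService {
  rpc Get (GetRequest) returns (GetResponse);
  rpc Put (PutRequest) returns (PutResponse);
}

message GetRequest {
  string cache_name = 1;
  google.protobuf.Any key = 2;
}

message GetResponse {
  google.protobuf.Any value = 1;  // unset when the key is missing
}

message PutRequest {
  string cache_name = 1;
  google.protobuf.Any key = 2;
  google.protobuf.Any value = 3;
}

message PutResponse {
  google.protobuf.Any previous_value = 1;
}
```

A third party holding only this .proto could run protoc against it in any gRPC-supported language, with no Infinispan-specific helpers.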


Adrian

On 05/30/2018 12:16 PM, Galder Zamarreno wrote:
On Tue, May 29, 2018 at 8:57 PM Adrian Nistor <anis...@redhat.com> wrote:


Vittorio, a few remarks regarding your statement "...The
alternative to this is to develop a protostream equivalent for
each supported language and it doesn't seem really feasible to me."

No way! That's a big misunderstanding. We do not need to
re-implement the protostream library in C/C++/C# or any new
supported language.
Protostream is just for Java and it is compatible with Google's
protobuf lib we already use in the other clients. We can continue
using Google's protobuf lib for these clients, with or without gRPC.
Protostream does not handle protobuf services as gRPC does, but we
can add support for that with little effort.

The real problem here is if we want to replace our hot rod
invocation protocol with gRPC to save on the effort of
implementing and maintaining hot rod in all those clients. I
wonder why the obvious question is being avoided in this thread.


^ It is not being avoided. I stated it quite clearly when I replied 
but maybe not with enough detail. So, I said:


> The biggest problem I see in our client/server architecture is the 
ability to quickly deliver features/APIs across multiple language 
clients. Both Vittorio and I have seen how long it takes to implement 
all the different features available in Java client and port them to 
Node.js, C/C++/C#...etc. This effort led by Vittorio is trying to 
improve on that by having some of that work done for us. Granted, not 
all of it will be done, but it should give us some good foundations on 
which to build.


To expand on it a bit further: the reason it takes us longer to get 
different features in is because each client implements its own 
network layer, parses the protocol and does type transformations 
(between byte[] and whatever the client expects).


IMO, the most costly things there are getting the network layer right 
(from experience with Node.js, it has taken a while to do so) and 
parsing work (not only parsing itself, but doing it in an efficient 
way). Network layer also includes load balancing, failover, cluster 
failover...etc.


From past experience, transforming from byte[] to what the client 
expects has never really been very problematic for me. What's been 
difficult here is coming up with the encoding architecture that Gustavo 
led, whose aim was to improve on the initial compatibility mode. But, 
with that now clear, understood and proven to solve our issues, the 
rest in this area should be fairly straightforward IMO.


Type transformation, once done, is a constant. As we add more Hot Rod 
operations, it's mostly the parsing that starts to become more work. 
Network can also become more work if instead of RPC commands you start 
supporting stream-based commands.


gRPC solves the network (FYI: with key as HTTP header and 
SubchannelPicker you can do hash-aware routing) and parsing for us. I 
don't see the need for it to solve our type transformations for us. If 
it does it, great, but does it support our compatibility requirements? 
(I had already told Vittorio to check Gustavo on this). Type 
transformation is a lower prio for me, network and parsing are more 
important.
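The SubchannelPicker idea mentioned above could be sketched roughly as below. This assumes grpc-java, a CallOptions key invented here (CACHE_KEY) for attaching the serialized key bytes to a call, and a placeholder hash; it is a sketch of the routing hook only, not a working load balancer (a real one must also implement the surrounding LoadBalancer and track cluster topology):

```java
import io.grpc.CallOptions;
import io.grpc.LoadBalancer.PickResult;
import io.grpc.LoadBalancer.PickSubchannelArgs;
import io.grpc.LoadBalancer.Subchannel;
import io.grpc.LoadBalancer.SubchannelPicker;
import java.util.List;

// Hypothetical hash-aware picker: route each call to the subchannel that
// owns the segment of the key attached to the call.
final class HashAwarePicker extends SubchannelPicker {

  // Invented for this sketch: the client attaches the serialized key
  // bytes to each call via callOptions.withOption(CACHE_KEY, bytes).
  static final CallOptions.Key<byte[]> CACHE_KEY =
      CallOptions.Key.createWithDefault("ispn-key", null);

  private final List<Subchannel> segmentOwners; // index i = owner of segment i

  HashAwarePicker(List<Subchannel> segmentOwners) {
    this.segmentOwners = segmentOwners;
  }

  @Override
  public PickResult pickSubchannel(PickSubchannelArgs args) {
    byte[] key = args.getCallOptions().getOption(CACHE_KEY);
    if (key == null || segmentOwners.isEmpty()) {
      return PickResult.withNoResult(); // no key: let gRPC buffer the pick
    }
    int segment = (hash(key) & Integer.MAX_VALUE) % segmentOwners.size();
    return PickResult.withSubchannel(segmentOwners.get(segment));
  }

  // Placeholder; a real client must use the same hash as the server
  // (e.g. MurmurHash3 over the key bytes).
  private static int hash(byte[] bytes) {
    int h = 0;
    for (byte b : bytes) h = 31 * h + b;
    return h;
  }
}
```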


Hope this clarifies better my POV.

Cheers



Adrian


On 05/29/2018 03:45 PM, Vittorio Rigamonti wrote:

Thanks Adrian,

of course there's marshalling work under the cover, and that is
reflected in the generated code (especially the accessor methods
generated from the oneof clause).

My opinion is that on the client side this could be accepted, as
long as the APIs are well defined and documented: application
developers can build an ad-hoc decorator on top if needed. The
alternative to this is to develop a protostream equivalent for
each supported language and it doesn't seem really feasible to me.

On the 

Re: [infinispan-dev] Infinispan client/server architecture based on gRPC

2018-05-30 Thread Adrian Nistor
Fair point. That's why protobuf's Any has a type url inside, exactly for 
such flexibility : 
https://github.com/google/protobuf/blob/master/src/google/protobuf/any.proto#L150

Well, it's not a MIME type as per Infinispan, but close enough.

On 05/30/2018 01:22 PM, Gustavo Fernandes wrote:


On Wed, May 30, 2018 at 10:56 AM, Adrian Nistor <anis...@redhat.com> wrote:


The oneof and WrappedMessage solve the same problem but in a
different way.
Oneof has the nasty effect that it ties the service model to the
user data model.



The user data model is only "static" at storage level (guided by 
configuration), and the user data can travel on the wire in any format 
the user wants [1]


[1] 
https://github.com/infinispan/infinispan/blob/master/client/hotrod-client/src/test/java/org/infinispan/client/hotrod/transcoding/DataFormatTest.java#L109


So better not to assume it will be marshalled and unmarshalled in a 
specific way.


Even if it seems like just one more line of code to add when a new
user type is introduced, it is one line of code in the wrong place
because you'll have to re-generate the service, i.e. users run protoc
again on OUR IDLs. Should a user do that? This coupling between
Infinispan's service model and the user's data model bothers me.

WrappedMessage is just a wrapper around an array of bytes +
information regarding what message type or what scalar type is in
there. Something very similar to a VARIANT [1]. The reason it is
needed is explained here [2].

You are correct, this is not a gRPC limitation, it is a by-design
protobuf protocol limitation, that was very thoughtfully
introduced to reduce wire-level bandwidth for the common case
where types are static. Unfortunately it leaves generic/dynamic
types in mid-air. But it is fairly easy to solve, as you can see
with WrappedMessage. At the time I introduced WrappedMessage we
were using protobuf 2.

protobuf 3 introduces type Any, which solves the issue in a
similar way to WrappedMessage. The difference is Any seems to
have been created to wrap either a plain byte[] or a message type
that has been marshalled to a byte[]. No support for scalars in
sight. Can we solve that? Sure, put a WrappedMessage inside that
byte[]. That is the reason I did not jump immediately to
using Any and stayed with WrappedMessage.

Can a 150-line PoC be a proposal for the ISPN object model? No,
but we need to explore the pain points of gRPC and protobuf that
are relevant to our usage, and this thing with generically typed
services is one of them.
I think we already have a good solution in sight, before giving up
and going with byte[] for key and value as it was suggested
earlier here. I can make a PR to the grpc PoC to show it by the
end of the week.

Adrian

[1] https://en.wikipedia.org/wiki/Variant_type
<https://en.wikipedia.org/wiki/Variant_type>
[2]
https://developers.google.com/protocol-buffers/docs/techniques#streaming
<https://developers.google.com/protocol-buffers/docs/techniques#streaming>



On 05/30/2018 11:34 AM, Vittorio Rigamonti wrote:



On Tue, May 29, 2018 at 8:59 PM, Adrian Nistor <anis...@redhat.com> wrote:

So you assume the two are separate, Emmanuel. So do I.

But in the current PoC the user data model is directly
referenced by the service model interface (KeyMsg and
ValueMsg are oneofs listing all possible user application
types???). I was assuming this hard dependency was there just
to make things simple for the scope of the PoC. But let's not
make this too simple because it will stop being useful. My
expectation is to see a generic yet fully typed 'cache
service' interface that does not depend on the key and value
types that come from userland, using maybe
'google.protobuf.Any' or our own 'WrappedMessage' type
instead. I'm not sure what to believe now because discussing
my hopes and assumptions on the gRPC topic on zulip I think I
understood the opposite is desired. Vittorio, please comment
on this.


Yep that was my design choice. Well my first goal was to keep the
framework language independent: to reach that I tried to define
as much as possible in grpc/protobuf (that's why I didn't use the
Any clause). Then I realized that with very little effort I could
design a framework that works only with user data from the user
side to the cache storage and I'd like to investigate this,
mainly for two reasons:

- from the user point of view I like the idea that I can find my
object types in the cache
- the embeddedCache is transparently exposed

but this is my 150 lines of code grpc server prototype, not a
proposal for the ISPN object model. Howe

Re: [infinispan-dev] Infinispan client/server architecture based on gRPC

2018-05-30 Thread Adrian Nistor

The oneof and WrappedMessage solve the same problem but in a different way.
Oneof has the nasty effect that it ties the service model to the user 
data model. Even if it seems like just one more line of code to add when 
a new user type is introduced, it is one line of code in the wrong place 
because you'll have to re-generate the service, i.e. users run protoc again 
on OUR IDLs. Should a user do that? This coupling between 
Infinispan's service model and the user's data model bothers me.
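The coupling is visible in the IDL itself. In a oneof-based design (names below are hypothetical, echoing the KeyMsg/ValueMsg of the PoC), every new user type adds a branch to a message that lives in Infinispan's own .proto:

```proto
syntax = "proto3";

// Hypothetical sketch of the problem: the service-side key message
// enumerates user application types.
message KeyMsg {
  oneof key {
    string string_key = 1;
    int64 long_key = 2;
    // Each new user type means editing THIS file and re-running protoc
    // on it, e.g.:
    // com.example.UserOrderKey order_key = 3;
  }
}
```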


WrappedMessage is just a wrapper around an array of bytes + information 
regarding what message type or what scalar type is in there. Something 
very similar to a VARIANT [1]. The reason it is needed is explained here 
[2].


You are correct, this is not a gRPC limitation, it is a by-design 
protobuf protocol limitation, that was very thoughtfully introduced to 
reduce wire-level bandwidth for the common case where types are static. 
Unfortunately it leaves generic/dynamic types in mid-air. But it is 
fairly easy to solve, as you can see with WrappedMessage. At the time I 
introduced WrappedMessage we were using protobuf 2.


protobuf 3 introduces type Any, which solves the issue in a similar way 
to WrappedMessage. The difference is Any seems to have been created to 
wrap either a plain byte[] or a message type that has been marshalled to 
a byte[]. No support for scalars in sight. Can we solve that? Sure, put 
a WrappedMessage inside that byte[]. That is the reason I did not 
jump immediately to using Any and stayed with WrappedMessage.
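As a rough illustration of the difference: a WrappedMessage-style variant can carry scalars directly, while Any only carries a serialized message plus its type URL. Field names and numbers below are illustrative, not protostream's actual WrappedMessage definition:

```proto
syntax = "proto3";

// Illustrative variant type: exactly one wrapped_* field is set.
message WrappedValue {
  oneof value {
    bool wrapped_bool = 1;
    int64 wrapped_int64 = 2;
    double wrapped_double = 3;
    string wrapped_string = 4;
    bytes wrapped_message = 5;     // a marshalled user message...
  }
  string wrapped_type_name = 6;    // ...plus its type, when field 5 is used
}
```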


Can a 150-line PoC be a proposal for the ISPN object model? No, but we 
need to explore the pain points of gRPC and protobuf that are relevant 
to our usage, and this thing with generically typed services is one of them.
I think we already have a good solution in sight, before giving up and 
going with byte[] for key and value as it was suggested earlier here. I 
can make a PR to the grpc PoC to show it by the end of the week.


Adrian

[1] https://en.wikipedia.org/wiki/Variant_type
[2] https://developers.google.com/protocol-buffers/docs/techniques#streaming

On 05/30/2018 11:34 AM, Vittorio Rigamonti wrote:



On Tue, May 29, 2018 at 8:59 PM, Adrian Nistor <anis...@redhat.com> wrote:


So you assume the two are separate, Emmanuel. So do I.

But in the current PoC the user data model is directly referenced
by the service model interface (KeyMsg and ValueMsg are oneofs
listing all possible user application types???). I was assuming
this hard dependency was there just to make things simple for the
scope of the PoC. But let's not make this too simple because it
will stop being useful. My expectation is to see a generic yet
fully typed 'cache service' interface that does not depend on the
key and value types that come from userland, using maybe
'google.protobuf.Any' or our own 'WrappedMessage' type instead.
I'm not sure what to believe now because discussing my hopes and
assumptions on the gRPC topic on zulip I think I understood the
opposite is desired.  Vittorio, please comment on this.


Yep that was my design choice. Well my first goal was to keep the 
framework language independent: to reach that I tried to define 
as much as possible in grpc/protobuf (that's why I didn't use the Any 
clause). Then I realized that with very little effort I could design a 
framework that works only with user data from the user side to the 
cache storage and I'd like to investigate this, mainly for two reasons:


- from the user point of view I like the idea that I can find my 
object types in the cache

- the embeddedCache is transparently exposed

but this is my 150 lines of code grpc server prototype, not a proposal 
for the ISPN object model. However it's ok to use it as starting point 
for a wider discussion



I'm still hoping we want to keep the service interface generic and
separated from the user model. And if we do it, would you expect
to be able to marshall the service call using gRPC lib and at the
same time be able to marshall the user model using whatever other
library? Would be nice but that seems to be a no-no with gRPC, or
I did not search deep enough. I only looked at the java
implementation anyway. It seems to be forcing you to go with
protoc generated code and protobuf-java.jar all the way, for
marshalling both the service and its arguments. And this goes
infinitely deeper. If a service argument of type A has a nested
field of type B and the marshaller for A is generated with
protobuf-java then so is B. Using oneofs or type 'Any' still do
not save you from this. The only escape is to pretend the user
payload is of type 'bytes'. At that point you are left to do your
marshaling to and from bytes yourself. And you are also left with
the question, what the heck is the contents of that byte array
next time you unmarshall it, which is 

Re: [infinispan-dev] Infinispan client/server architecture based on gRPC

2018-05-29 Thread Adrian Nistor

So you assume the two are separate, Emmanuel. So do I.

But in the current PoC the user data model is directly referenced by the 
service model interface (KeyMsg and ValueMsg are oneofs listing all 
possible user application types???). I was assuming this hard dependency 
was there just to make things simple for the scope of the PoC. But let's 
not make this too simple because it will stop being useful. My 
expectation is to see a generic yet fully typed 'cache service' 
interface that does not depend on the key and value types that come from 
userland, using maybe 'google.protobuf.Any' or our own 'WrappedMessage' 
type instead. I'm not sure what to believe now because discussing my 
hopes and assumptions on the gRPC topic on zulip I think I understood 
the opposite is desired.  Vittorio, please comment on this.


I'm still hoping we want to keep the service interface generic and 
separated from the user model. And if we do it, would you expect to be 
able to marshall the service call using gRPC lib and at the same time be 
able to marshall the user model using whatever other library? Would be 
nice but that seems to be a no-no with gRPC, or I did not search deep 
enough. I only looked at the java implementation anyway. It seems to be 
forcing you to go with protoc generated code and protobuf-java.jar all 
the way, for marshalling both the service and its arguments. And this 
goes infinitely deeper. If a service argument of type A has a nested 
field of type B and the marshaller for A is generated with protobuf-java 
then so is B. Using oneofs or type 'Any' still do not save you from 
this.  The only escape is to pretend the user payload is of type 
'bytes'. At that point you are left to do your marshaling to and from 
bytes yourself. And you are also left with the question, what the heck 
is the contents of that byte array next time you unmarshall it, which is 
currently answered by WrappedMessage.


So the more I look at gRPC it seems elegant for most purposes but 
lacking for ours. And again, as with protocol buffers, the wire protocol 
and the IDL are really nice. It is the implementation that is lacking, IMHO.


I think to be really on the same page we should first make a clear 
statement of what we intend to achieve here in a bit more detail. Also, 
since this is not a clean slate effort, we should think right from the 
start what are the expected interactions with existing code base, like 
what are we willing to sacrifice. Somebody mention hot rod please!


Adrian


On 05/29/2018 07:20 PM, Emmanuel Bernard wrote:
Right. Here we are talking about a gRPC representation of the client 
server interactions. Not the data schema stored in ISPN. In that 
model, the API is compiled by us and handed over as a package.


On 29 May 2018, at 15:51, Sanne Grinovero <sa...@infinispan.org> wrote:





On 29 May 2018 at 13:45, Vittorio Rigamonti <vriga...@redhat.com> wrote:


Thanks Adrian,

of course there's marshalling work under the cover, and that is
reflected in the generated code (especially the accessor methods
generated from the oneof clause).

My opinion is that on the client side this could be accepted, as
long as the APIs are well defined and documented: application
developers can build an ad-hoc decorator on top if needed. The
alternative to this is to develop a protostream equivalent for
each supported language and it doesn't seem really feasible to me.


​This might indeed be reasonable for some developers, some languages.

Just please make sure it's not the only option, as many other 
developers will not expect to need a compiler at hand in various 
stages of the application lifecycle.


For example when deploying a JPA model into an appserver, or just 
booting Hibernate in JavaSE as well, there is a strong expectation 
that we'll be able - at runtime - to inspect the listed Java POJOs 
via reflection and automatically generate whatever Infinispan will need.


Perhaps a key differentiator is between invoking Infinispan APIs 
(RPC) vs defining the object models and related CODECs for keys, 
values, streams and query results? It might get a bit more fuzzy to 
differentiate them for custom functions but I guess we can draw a 
line somewhere.


Thanks,
Sanne


On the server side (java only) the situation is different:
protobuf is optimized for streaming not for storing so probably a
Protostream layer is needed.

On Mon, May 28, 2018 at 4:47 PM, Adrian Nistor <anis...@redhat.com> wrote:

Hi Vittorio,
thanks for exploring gRPC. It seems like a very elegant
solution for exposing services. I'll have a look at your PoC
soon.

I feel there are some remarks that need to be made regarding
gRPC. gRPC is just some nice cheesy topping on top of
protobuf. Google's implementation of protobuf, to be more
precise.
It does not need

Re: [infinispan-dev] Infinispan client/server architecture based on gRPC

2018-05-29 Thread Adrian Nistor
Vittorio, a few remarks regarding your statement "...The alternative to 
this is to develop a protostream equivalent for each supported language 
and it doesn't seem really feasible to me."


No way! That's a big misunderstanding. We do not need to re-implement 
the protostream library in C/C++/C# or any new supported language.
Protostream is just for Java and it is compatible with Google's protobuf 
lib we already use in the other clients. We can continue using Google's 
protobuf lib for these clients, with or without gRPC.
Protostream does not handle protobuf services as gRPC does, but we can 
add support for that with little effort.


The real problem here is if we want to replace our hot rod invocation 
protocol with gRPC to save on the effort of implementing and maintaining 
hot rod in all those clients. I wonder why the obvious question is being 
avoided in this thread.


Adrian

On 05/29/2018 03:45 PM, Vittorio Rigamonti wrote:

Thanks Adrian,

of course there's marshalling work under the cover, and that is 
reflected in the generated code (especially the accessor methods 
generated from the oneof clause).


My opinion is that on the client side this could be accepted, as long 
as the APIs are well defined and documented: application developers can 
build an ad-hoc decorator on top if needed. The alternative to this 
is to develop a protostream equivalent for each supported language and 
it doesn't seem really feasible to me.


On the server side (java only) the situation is different: protobuf is 
optimized for streaming not for storing so probably a Protostream 
layer is needed.


On Mon, May 28, 2018 at 4:47 PM, Adrian Nistor <anis...@redhat.com> wrote:


Hi Vittorio,
thanks for exploring gRPC. It seems like a very elegant solution
for exposing services. I'll have a look at your PoC soon.

I feel there are some remarks that need to be made regarding gRPC.
gRPC is just some nice cheesy topping on top of protobuf. Google's
implementation of protobuf, to be more precise.
It does not need handwritten marshallers, but the 'No need for
marshaller' does not accurately describe it. Marshallers are
needed and are generated under the cover by the library and so are
the data objects and you are unfortunately forced to use them.
That's both the good news and the bad news:) The whole thing looks
very promising and friendly for many use cases, especially for
demos and PoCs :))). Nobody wants to write those marshallers. But
it starts to become a nuisance if you want to use your own data
objects.
There is also the ugliness and excessive memory footprint of the
generated code, which is the reason Infinispan did not adopt the
protobuf-java library although it did adopt protobuf as an
encoding format.
The Protostream library was created as an alternative
implementation to solve the aforementioned problems with the
generated code. It solves this by letting the user provide their
own data objects. And for the marshallers it gives you two
options: a) write the marshaller yourself (hated), b) annotate
your data objects and the marshaller gets generated (loved).
Protostream does not currently support service definitions,
but this is something I started to investigate recently after
Galder asked me if I think it's doable. I think I'll only find out
after I do it :)

Adrian


On 05/28/2018 04:15 PM, Vittorio Rigamonti wrote:

Hi Infinispan developers,

I'm working on a solution for developers who need to access
Infinispan services  through different programming languages.

The focus is not on developing a full featured client, but rather
discover the value and the limits of this approach.

- is it possible to automatically generate useful clients in
different languages?
- can those clients interoperate on the same cache with the same
data types?

I came out with a small prototype that I would like to submit to
you and on which I would like to gather your impressions.

You can find the project here [1]: it is a gRPC-based
client/server architecture for Infinispan based on an
EmbeddedCache, with very few features exposed atm.

Currently the project is nothing more than a poc with the
following interesting features:

- clients can be generated in all the gRPC-supported languages:
Java, Go, and C++ examples are provided;
- the interface is fully typed. No need for a marshaller, and clients
built in different languages can cooperate on the same cache;

The second item is my preferred one because it frees the
developer from data marshalling.

What do you think about?
Sounds interesting?
Can you see any flaw?

There's also a list of issues for the future [2], basically I
would like to investigate these questions:
How far can this architecture go?
Topology, events, 

Re: [infinispan-dev] Infinispan client/server architecture based on gRPC

2018-05-28 Thread Adrian Nistor

Hi Vittorio,
thanks for exploring gRPC. It seems like a very elegant solution for 
exposing services. I'll have a look at your PoC soon.


I feel there are some remarks that need to be made regarding gRPC. gRPC 
is just some nice cheesy topping on top of protobuf. Google's 
implementation of protobuf, to be more precise.
It does not need handwritten marshallers, but the 'No need for 
marshaller' does not accurately describe it. Marshallers are needed and 
are generated under the cover by the library and so are the data objects 
and you are unfortunately forced to use them. That's both the good news 
and the bad news:) The whole thing looks very promising and friendly for 
many use cases, especially for demos and PoCs :))). Nobody wants to 
write those marshallers. But it starts to become a nuisance if you want 
to use your own data objects.
There is also the ugliness and excessive memory footprint of the 
generated code, which is the reason Infinispan did not adopt the 
protobuf-java library although it did adopt protobuf as an encoding format.
The Protostream library was created as an alternative implementation to 
solve the aforementioned problems with the generated code. It solves 
this by letting the user provide their own data objects. And for the 
marshallers it gives you two options: a) write the marshaller yourself 
(hated), b) annotate your data objects and the marshaller gets 
generated (loved). Protostream does not currently support service 
definitions, but this is something I started to investigate 
recently after Galder asked me if I think it's doable. I think I'll only 
find out after I do it :)
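Option (b) above can be sketched roughly as follows. This is a hedged illustration of protostream's annotation-driven approach (class and field choices are invented; exact annotation names and attributes depend on the protostream version in use), and it requires the protostream library on the classpath:

```java
import org.infinispan.protostream.annotations.ProtoFactory;
import org.infinispan.protostream.annotations.ProtoField;

// Hypothetical user data object: the user keeps their own class and
// annotates it; the marshaller is generated from these annotations.
public class Book {

    @ProtoField(number = 1)
    final String title;

    @ProtoField(number = 2, defaultValue = "0")
    final int year;

    @ProtoFactory
    Book(String title, int year) {
        this.title = title;
        this.year = year;
    }
}
```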


Adrian

On 05/28/2018 04:15 PM, Vittorio Rigamonti wrote:

Hi Infinispan developers,

I'm working on a solution for developers who need to access Infinispan 
services  through different programming languages.


The focus is not on developing a full featured client, but rather 
discover the value and the limits of this approach.


- is it possible to automatically generate useful clients in different 
languages?
- can those clients interoperate on the same cache with the same data 
types?


I came out with a small prototype that I would like to submit to you 
and on which I would like to gather your impressions.


You can find the project here [1]: it is a gRPC-based client/server 
architecture for Infinispan based on an EmbeddedCache, with very few 
features exposed atm.


Currently the project is nothing more than a poc with the following 
interesting features:


- clients can be generated in all the gRPC-supported languages: Java, 
Go, and C++ examples are provided;
- the interface is fully typed. No need for a marshaller, and clients 
built in different languages can cooperate on the same cache;


The second item is my preferred one because it frees the developer 
from data marshalling.


What do you think about?
Sounds interesting?
Can you see any flaw?

There's also a list of issues for the future [2]; basically I would 
like to investigate these questions:

How far can this architecture go?
Topology, events, queries... how many of the Infinispan features can 
fit into a gRPC architecture?


Thank you
Vittorio

[1] https://github.com/rigazilla/ispn-grpc 

[2] https://github.com/rigazilla/ispn-grpc/issues 



--

Vittorio Rigamonti

Senior Software Engineer

Red Hat



Milan, Italy

vriga...@redhat.com 

irc: rigazilla




___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev




Re: [infinispan-dev] Search keys by query

2018-04-23 Thread Adrian Nistor

Hi Sergey,
keys are just keys. Lookup by key works only if you know the key in 
advance, be it a simple or complex key.
Keys are not indexed. So no, searching for keys does not work and there 
is no plan to support that. It's one of the many things Infinispan 
cannot do because it is not a relational database and we do not plan to 
become one :).
But there are ways to overcome this limitation. You already de-normalize 
your data when placing it in the grid, because Infinispan does not 
manage relations. During this process you should copy relevant 
properties of the key into the value itself if you intend to search by 
those properties.
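A minimal sketch of that de-normalization, using hypothetical EventKey/Event classes (not actual Infinispan API): the searchable property is copied from the key into the value at construction time, so a query over values can filter on it even though keys themselves are never indexed.

```java
import java.util.Objects;

public class DenormalizeSketch {

    // Hypothetical complex key with a "type" field, as in Sergey's example.
    static final class EventKey {
        final String type;
        final long id;
        EventKey(String type, long id) { this.type = type; this.id = id; }
        @Override public boolean equals(Object o) {
            if (!(o instanceof EventKey)) return false;
            EventKey k = (EventKey) o;
            return k.id == id && Objects.equals(k.type, type);
        }
        @Override public int hashCode() { return Objects.hash(type, id); }
    }

    // The value duplicates the key's searchable property, so a (remote)
    // query over values can do: FROM Event WHERE type = 'alarm'.
    static final class Event {
        final String type;    // copied from the key when the entry is built
        final String payload;
        Event(EventKey key, String payload) {
            this.type = key.type;
            this.payload = payload;
        }
    }

    public static void main(String[] args) {
        EventKey key = new EventKey("alarm", 42L);
        Event value = new Event(key, "disk full");
        System.out.println(value.type);   // the queryable property
    }
}
```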


Adrian

On 04/23/2018 11:26 AM, Sergey Chernolyas wrote:

Hi!
I want to ask about searching by keys. For example, I have a complex key, 
and the complex key (POJO) has a field “type”. It would be logical if I 
could find all keys with the required type via a query. Currently, a 
query over complex keys does not work: the method “list()” returns an 
empty list. Is the feature implementable?


--
-

With best regards, Sergey Chernolyas






Re: [infinispan-dev] Passing client listener parameters programmatically

2018-04-16 Thread Adrian Nistor

+1 for both points.

And I absolutely have to add that I never liked the annotation based 
listeners, both the embedded and the remote ones.


On 04/16/2018 10:48 AM, Dan Berindei wrote:
+1 to not require annotations, but -100 to ignore the annotations if 
present, we should throw an exception instead.


Dan

On Fri, Apr 13, 2018 at 9:57 PM, William Burns > wrote:


I personally have never been a fan of the whole annotation thing
to configure your listener, unfortunately it just has been this way.

If you are just proposing to add a new addClientListener method
that takes those arguments, I don't have a problem with it.

void addClientListener(Object listener, String filterFactoryName,
Object[] filterFactoryParams, String converterFactoryName,
Object[] converterFactoryParams);

I would think we would use these values only and ignore any
defined on the annotation.
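Combining Will's proposed signature with Dan's "throw an exception instead" suggestion, a rough sketch of the overload could look like the following. This is hypothetical, not the actual Hot Rod client API; the @ClientListener annotation here is a local stand-in so the snippet is self-contained.

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

public class ListenerRegistrationSketch {

    // Stand-in for org.infinispan.client.hotrod.annotation.ClientListener,
    // so this compiles without the Hot Rod client on the classpath.
    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.TYPE)
    @interface ClientListener {
        String filterFactoryName() default "";
        String converterFactoryName() default "";
    }

    // Proposed overload: factory names are passed programmatically, and a
    // clash with annotation-supplied names fails fast rather than being
    // silently ignored.
    public static void addClientListener(Object listener,
                                         String filterFactoryName,
                                         Object[] filterFactoryParams,
                                         String converterFactoryName,
                                         Object[] converterFactoryParams) {
        ClientListener ann = listener.getClass().getAnnotation(ClientListener.class);
        if (ann != null && (!ann.filterFactoryName().isEmpty()
                || !ann.converterFactoryName().isEmpty())) {
            throw new IllegalArgumentException(
                    "Factory names supplied both via @ClientListener and programmatically");
        }
        // ... the real client would register the listener here, using the
        // given factory names and parameters ...
    }

    @ClientListener(filterFactoryName = "my-filter-factory")
    static class AnnotatedListener {
    }

    public static void main(String[] args) {
        try {
            addClientListener(new AnnotatedListener(), "other-filter-factory",
                    new Object[0], "", new Object[0]);
        } catch (IllegalArgumentException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```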


Also similar to this but I have some API ideas I would love to
explore for ISPN 10 surrounding events and the consumption of them.

 - Will

On Fri, Apr 13, 2018 at 11:12 AM Galder Zamarreno
> wrote:

Hi,

We're working with the OpenWhisk team to create a generic Feed
that allows Infinispan remote events to be exposed in an
OpenWhisk way.

So, you'd pass in Hot Rod endpoint information, name of cache
and other details and you'd establish a feed of data from that
cache for create/updated/removed data.

However, making this generic is tricky when you want to pass
in filter/converter factory names since these are defined at
the annotation level.

Ideally we should have a way to pass in filter/converter
factory names programmatically. To avoid limiting ourselves,
you could potentially pass in an instance of the annotation in
an overloaded method or as optional parameter [1].

Thoughts?

Cheers,
Galder

[1]

https://stackoverflow.com/questions/16299717/how-to-create-an-instance-of-an-annotation














Re: [infinispan-dev] Protobuf metadata cache and x-site

2018-04-12 Thread Adrian Nistor
Backing up caches with protobuf payload to a remote site will not work 
if they are indexed, unless the remote site already has the schema for 
the types in question, or else indexing will fail. If the cache is not 
indexed it matters less.

So the replication of the protobuf metadata cache has to be arranged 
somehow before any other data is replicated. Manual replication is 
indeed a PITA.

I remember that in a very early version of remote query the protobuf 
metadata cache configuration was created programmatically on startup, 
unless a manually defined configuration with that name, already 
provided by the user, was found. In that case the user's config was 
used. This approach had the benefit of allowing the user to gain 
control if needed, but it can also lead to gloom and doom. Would it be 
too bad to do that again :)))?

Adrian

On 04/12/2018 10:27 PM, Tristan Tarrant wrote:
> It is definitely an internal cache. Because of this, automatically
> backing it up to a remote site might not be such a good idea.
>
> Backups are enabled per-cache, and therefore just blindly replicating
> the schema cache to the other site is not a good idea.
>
> I think that we need a cache-manager-level backup setting that does the
> right thing.
>
> Tristan
>
> On 4/12/18 7:01 PM, Pedro Ruivo wrote:
>> Wouldn't it be better to assume the protobuf cache doesn't fit the internal
>> cache use case? :)
>>
>> On 12-04-2018 17:21, Galder Zamarreno wrote:
>>> Ok, we do need to find a better way to deal with this.
>>>
>>> JIRA: https://issues.jboss.org/browse/ISPN-9074
>>>
>>> On Thu, Apr 12, 2018 at 5:56 PM Pedro Ruivo >> > wrote:
>>>
>>>
>>>
>>>   On 12-04-2018 15:49, Galder Zamarreno wrote:
>>>> Hi,
>>>>
>>>> We have an issue with protobuf metadata cache.
>>>>
>>>> If you run in a multi-site scenario, protobuf metadata
>>>   information does
>>>> not travel across sites by default.
>>>>
>>>> Being an internal cache, is it possible to somehow
>>>   override/reconfigure
>>>> it so that cross-site configuration can be added in standalone.xml?
>>>
>>>   No :( since it is an internal cache, its configuration can't be 
>>> changed.
>>>
>>>>
>>>> We're currently running a periodic job that checks if the
>>>   metadata is
>>>> present and if not present add it. So, we have a workaround for
>>>   it, but
>>>> it'd be not very user friendly for end users.
>>>>
>>>> Thoughts?
>>>
>>>   Unfortunately none... it is the first time an internal cache needs
>>>   to do
>>>   some x-site.
>>>
>>>>
>>>> Cheers,
>>>> Galder
>>>>
>>>>
>>>> ___
>>>> infinispan-dev mailing list
>>>> infinispan-dev@lists.jboss.org
>>>   
>>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev
>>>>
>>>   ___
>>>   infinispan-dev mailing list
>>>   infinispan-dev@lists.jboss.org 
>>>   https://lists.jboss.org/mailman/listinfo/infinispan-dev
>>>
>>>
>>>
>>> ___
>>> infinispan-dev mailing list
>>> infinispan-dev@lists.jboss.org
>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev
>>>
>> ___
>> infinispan-dev mailing list
>> infinispan-dev@lists.jboss.org
>> https://lists.jboss.org/mailman/listinfo/infinispan-dev
>>



Re: [infinispan-dev] Ordering of includeCurrentState events

2018-02-22 Thread Adrian Nistor
Hi Radim,

From the continuous query point of view it does not matter if 
'existing-state-events' are queued for a while, as long as they are 
delivered _before_ the 'online' events. For CQ we do not care to make 
them distinguishable, but we do want this order!

Other use cases might have different needs (probably more relaxed), but 
this is the minimal for CQ.

Adrian

On 02/22/2018 11:09 AM, Radim Vansa wrote:
> Currently remote events caused by includeCurrentState=true are not
> guaranteed to be delivered before the operation completes; these are
> only queued on the server to be sent but not actually sent over wire.
>
> Do we want any such guarantee? Do we want to add to make events from
> current state somehow distinguishable from the 'online' ones?
>
> Given all the non-reliability with listeners failover I don't think this
> is needed, but I'll rather check in the crowd.
>
> Radim
>



[infinispan-dev] Infinispan 9.2.0.Beta1 and 9.1.3.Final have been released

2017-11-14 Thread Adrian Nistor
Hello everyone,

Our first beta release of the Infinispan 9.2 stream is available, as 
well as a new release of our stable branch (9.1).

I welcome you to read all about it on our team blog at https://goo.gl/asvaS3

Cheers,
Adrian



Re: [infinispan-dev] Prepending internal cache names with org.infinispan instead of triple underscore

2017-11-06 Thread Adrian Nistor
Different internal caches have different needs regarding consistency, 
tx, persistence, etc...
The first incarnation of ClusterRegistry used a single cache and was 
implemented exactly as you suggested, but it had major shortcomings in 
satisfying the needs of several unrelated users, so we decided to split it.

On 11/03/2017 10:42 AM, Radim Vansa wrote:
> Because you would have to duplicate entire Map on each update, unless
> you used not-100%-so-far functional commands. We've used the ScopedKey
> that would make this Cache<ScopedKey<PURPOSE, Object>, Object>. This
> approach was abandoned with ISPN-5932 [1], Adrian and Tristan can
> elaborate why.
>
> Radim
>
> [1] https://issues.jboss.org/browse/ISPN-5932
>
> On 11/03/2017 09:05 AM, Sebastian Laskawiec wrote:
>> I'm pretty sure it's a silly question, but I need to ask it :)
>>
>> Why can't we store all our internal information in a single,
>> replicated cache (of a type <PURPOSE, Map<Object, Object>). PURPOSE
>> could be an enum or a string identifying whether it's scripting cache,
>> transaction cache or anything else. The value (Map<Object, Object>)
>> would store whatever you need.
>>
>> On Fri, Nov 3, 2017 at 2:24 AM Sanne Grinovero <sa...@infinispan.org
>> <mailto:sa...@infinispan.org>> wrote:
>>
>>  On 2 November 2017 at 22:20, Adrian Nistor <anis...@redhat.com
>>  <mailto:anis...@redhat.com>> wrote:
>>  > I like this proposal.
>>
>>  +1
>>
>>  > On 11/02/2017 03:18 PM, Galder Zamarreño wrote:
>>  >> Hi all,
>>  >>
>>  >> I'm currently going through the JCache 1.1 proposed changes,
>>  and one that made me think is [1]. In particular:
>>  >>
>>  >>> Caches do not use forward slashes (/) or colons (:) as part of
>>  their names. Additionally it is
>>  >>> recommended that cache names starting with java. or
>>  javax.should not be used.
>>  >> I'm wondering whether in the future we should move away from
>>  the triple underscore trick we use for internal cache names, and
>>  instead just prepend them with `org.infinispan`, which is our
>>  group id. I think it'd be cleaner.
>>  >>
>>  >> Thoughts?
>>  >>
>>  >> [1] https://github.com/jsr107/jsr107spec/issues/350
>>  >> --
>>  >> Galder Zamarreño
>>  >> Infinispan, Red Hat
>>  >>
>>  >>
>>  >> ___
>>  >> infinispan-dev mailing list
>>  >> infinispan-dev@lists.jboss.org
>>  <mailto:infinispan-dev@lists.jboss.org>
>>  >> https://lists.jboss.org/mailman/listinfo/infinispan-dev
>>  >
>>  >
>>  > ___
>>  > infinispan-dev mailing list
>>  > infinispan-dev@lists.jboss.org
>>  <mailto:infinispan-dev@lists.jboss.org>
>>  > https://lists.jboss.org/mailman/listinfo/infinispan-dev
>>
>>  ___
>>  infinispan-dev mailing list
>>  infinispan-dev@lists.jboss.org <mailto:infinispan-dev@lists.jboss.org>
>>  https://lists.jboss.org/mailman/listinfo/infinispan-dev
>>
>>
>>
>> ___
>> infinispan-dev mailing list
>> infinispan-dev@lists.jboss.org
>> https://lists.jboss.org/mailman/listinfo/infinispan-dev
>


Re: [infinispan-dev] Prepending internal cache names with org.infinispan instead of triple underscore

2017-11-02 Thread Adrian Nistor
I like this proposal.

On 11/02/2017 03:18 PM, Galder Zamarreño wrote:
> Hi all,
>
> I'm currently going through the JCache 1.1 proposed changes, and one that 
> made me think is [1]. In particular:
>
>> Caches do not use forward slashes (/) or colons (:) as part of their names. 
>> Additionally it is
>> recommended that cache names starting with java. or javax.should not be used.
> I'm wondering whether in the future we should move away from the triple 
> underscore trick we use for internal cache names, and instead just prepend 
> them with `org.infinispan`, which is our group id. I think it'd be cleaner.
>
> Thoughts?
>
> [1] https://github.com/jsr107/jsr107spec/issues/350
> --
> Galder Zamarreño
> Infinispan, Red Hat
>
>



Re: [infinispan-dev] Transactional consistency of query

2017-07-31 Thread Adrian Nistor
Yup, I also meant 'eventually consistent' when saying such 
inconsistencies should be acceptable. At some point in time after 
transactions have been committed and topology changes have been handled 
(state transfer completed) and we have a steady state we should see a 
consistent index when querying.


On 07/31/2017 11:41 AM, Gustavo Fernandes wrote:
IMO, indexing should be eventually consistent, as this offers the best 
performance.


On tx-caches, although Lucene has hooks to be enlisted in a 
transaction [1], some backends (Elasticsearch) don't expose this, and 
Hibernate Search by design doesn't make use of it. So currently we must 
deal with inconsistencies after the fact: checking for nulls, mismatched 
types and so on.

[1] 
https://lucene.apache.org/core/6_0_1/core/org/apache/lucene/index/TwoPhaseCommit.html



On Fri, Jul 28, 2017 at 1:59 PM, Adrian Nistor <anis...@redhat.com 
<mailto:anis...@redhat.com>> wrote:


My feeling regarding this was to accept such inconsistencies, but maybe
I'm wrong. I've always regarded indexing as being async in general, even
though it did behave as if being sync in some not so rare circumstances,
which probably made people believe it is expected to be sync in general.
I'm curious what Sanne and Gustavo have in mind.

Please note that updating the index synchronously during tx commit was
always regarded as a performance bottleneck, so it was out of the
question.

And that would not always work anyway; it all depends on the
underlying indexing technology. For example, when using HS with
Elasticsearch you have to accept that elastic indexing is always async.

And there might not be an index at all. It's very possible that the
query runs unindexed. In that case it will use distributed streams
which have their own transaction issues.

In the past we had some bugs where a matching entry was deleted/evicted
right before the search results were returned to the user, so loading of
those values failed in a silent way. Those queries mistakenly returned
some unexpected nulls among other valid results. The fix was to just
filter out those nulls. We could enhance that to double-check that the
returned entry is indeed of the requested type, to also cover the issue
that you encountered.

Adrian

On 07/28/2017 01:38 PM, Radim Vansa wrote:
> Hi,
>
> while working on ISPN-7806 I am wondering how should queries
work with
> transactions. Right now it seems that updates to index are done
during
> either regular command execution (on originator [A]) or prepare
command
> on remote nodes [B]. Both of these cause rolled-back
transactions to be
> seen, so these must be treated as bugs [C].
>
> If we index the data after committing the transaction, there
would be a
> time window when we could see the updated entries but the index
would
> not reflect that. That might be acceptable limitation if a
> query-matching misses some entity, but it's also possible that we
> retrieve the query result key-set and then (after retrieving full
> entities) we return something that does not match the query. One
of the
> reproducers for ISPN-7806 I've written [1] triggers a situation
where
> listing all Persons could return Animal (different entity type),
so I
> think that there's no validity post-check (though these reproducers
> don't use transactions).
>
> Therefore, I wonder if the index should contain only the key; maybe we
> should store a unique version and invalidate the query if some of the
> entries have changed.
>
> If we index the data before committing the transaction, similar
> situation could happen: the index will return keys for entities that
> will match in the future but the actually returned list will contain
> stale entities.
>
> What's the overall plan? Do we just accept inconsistencies? In that
> case, please add a verbose statement in docs and point me to that.
>
> And if I've misinterpreted something and raised the red flag in
error,
> please let me know.
>
> Radim
>
> [A] This seems to be a regression after moving towards async
> interceptors - our impl of
> org.hibernate.search.backend.TransactionContext is incorrectly
bound to
> TransactionManager. Then we seem to be running out of
transaction and
> are happy to index it right away. The thread that executes the
> interceptor handler is also dependent on ownership (due to remote
> LockCommand execution), so I think that it does not fail the
local-mode
> tests.
>
> [B] ... and it does so twice as a regression after ISPN-7840 but
  

Re: [infinispan-dev] Transactional consistency of query

2017-07-28 Thread Adrian Nistor
My feeling regarding this was to accept such inconsistencies, but maybe 
I'm wrong. I've always regarded indexing as being async in general, even 
though it did behave as if being sync in some not so rare circumstances, 
which probably made people believe it is expected to be sync in general. 
I'm curious what Sanne and Gustavo have in mind.

Please note that updating the index synchronously during tx commit was 
always regarded as a performance bottleneck, so it was out of the 
question. And that would not always work anyway, it all depends on the 
underlying indexing technology. For example when using HS with elastic 
search you have to accept that elastic indexing is always async.

And there might not be an index at all. It's very possible that the 
query runs unindexed. In that case it will use distributed streams which 
have their own transaction issues.

In the past we had some bugs where a matching entry was deleted/evicted 
right before the search results were returned to the user, so loading of 
those values failed in a silent way. Those queries mistakenly returned 
some unexpected nulls among other valid results. The fix was to just 
filter out those nulls. We could enhance that to double check that the 
returned entry is indeed of the requested type, to also cover the issue 
that you encountered.

Adrian

On 07/28/2017 01:38 PM, Radim Vansa wrote:
> Hi,
>
> while working on ISPN-7806 I am wondering how should queries work with
> transactions. Right now it seems that updates to index are done during
> either regular command execution (on originator [A]) or prepare command
> on remote nodes [B]. Both of these cause rolled-back transactions to be
> seen, so these must be treated as bugs [C].
>
> If we index the data after committing the transaction, there would be a
> time window when we could see the updated entries but the index would
> not reflect that. That might be acceptable limitation if a
> query-matching misses some entity, but it's also possible that we
> retrieve the query result key-set and then (after retrieving full
> entities) we return something that does not match the query. One of the
> reproducers for ISPN-7806 I've written [1] triggers a situation where
> listing all Persons could return Animal (different entity type), so I
> think that there's no validity post-check (though these reproducers
> don't use transactions).
>
> Therefore, I wonder if the index should contain only the key; maybe we
> should store a unique version and invalidate the query if some of the
> entries have changed.
>
> If we index the data before committing the transaction, similar
> situation could happen: the index will return keys for entities that
> will match in the future but the actually returned list will contain
> stale entities.
>
> What's the overall plan? Do we just accept inconsistencies? In that
> case, please add a verbose statement in docs and point me to that.
>
> And if I've misinterpreted something and raised the red flag in error,
> please let me know.
>
> Radim
>
> [A] This seems to be a regression after moving towards async
> interceptors - our impl of
> org.hibernate.search.backend.TransactionContext is incorrectly bound to
> TransactionManager. Then we seem to be running out of transaction and
> are happy to index it right away. The thread that executes the
> interceptor handler is also dependent on ownership (due to remote
> LockCommand execution), so I think that it does not fail the local-mode
> tests.
>
> [B] ... and it does so twice as a regression after ISPN-7840 but that's
> easy to fix.
>
> [C] Indexing in prepare command was OK before ISPN-7840 with pessimistic
> locking which does not send the CommitCommand, but now that the QI has
> been moved below EWI it means that we're indexing before storing the
> actual values. Optimistic locking was not correct, though.
>
> [1]
> https://github.com/rvansa/infinispan/commit/1d62c9b84888c7ac21a9811213b5657aa44ff546
>
>
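Radim's key-plus-version idea above can be sketched roughly as follows. This is plain Java with hypothetical names, not actual Infinispan or Lucene API: the index stores only the key and the version of the entry at indexing time, and when the result set is materialized any hit whose current version no longer matches is dropped, invalidating stale matches.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class VersionedQuerySketch {

    // What the index would store per match: key + version at indexing time.
    static final class IndexHit {
        final String key;
        final long version;
        IndexHit(String key, long version) { this.key = key; this.version = version; }
    }

    // A cache entry carrying a monotonically increasing version.
    static final class VersionedValue {
        final String value;
        final long version;
        VersionedValue(String value, long version) { this.value = value; this.version = version; }
    }

    // Drop hits whose entry changed (or vanished) since indexing, so the
    // query never returns values that may no longer match it.
    static List<String> materialize(List<IndexHit> hits, Map<String, VersionedValue> cache) {
        List<String> result = new ArrayList<>();
        for (IndexHit hit : hits) {
            VersionedValue v = cache.get(hit.key);
            if (v != null && v.version == hit.version) {
                result.add(v.value);
            }
        }
        return result;
    }

    public static void main(String[] args) {
        Map<String, VersionedValue> cache = new HashMap<>();
        cache.put("a", new VersionedValue("Person{A}", 1));
        cache.put("b", new VersionedValue("Animal{B}", 2)); // changed after indexing

        List<IndexHit> hits = Arrays.asList(new IndexHit("a", 1), new IndexHit("b", 1));
        System.out.println(materialize(hits, cache)); // only the still-valid hit
    }
}
```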



[infinispan-dev] Weekly IRC Meeting Logs 2017-07-24

2017-07-24 Thread Adrian Nistor
Hi all,

The weekly meeting logs are attached. jbott is missing in action again. 
RIP jbott!

Cheers,
Adrian

-

(17:03:54) anistor: dberindei: gustavonalle: jsenko: karesti: pruivo: 
rigazilla: rvansa: ttarrant: vblagoje:   everybody ready for the meeting?
(17:04:01) vblagoje: +1
(17:04:03) dberindei: sure anistor
(17:04:12) karesti: :o)
(17:04:38) rvansa: braced
(17:04:44) jsenko: anistor: skip me please:/
(17:04:53) jsenko: no updates
(17:04:56) anistor: ok.
(17:04:58) anistor: I believe ttarrant, gustavonalle, ryan and sebastian 
are in another meeting right now, so we can skip them too :)
(17:05:32) anistor: #startmeeting
(17:05:39) anistor: #topic anistor
(17:06:20) anistor: it seems jbott died july 14th, and was not 
resurrected :)
(17:06:53) pruivo: lol
(17:07:37) anistor: last week I helped a bit with reviewing a PR from 
gustavo, who implemented json to protobuf conversion in Protostream. 
great work! it is now in, but we'll need to release protostream 4.2 to 
make it into ISPN
(17:08:56) anistor: while on the occasion of fiddling with protostream I 
restarted my previous work of migrating to protobuf 3.x schema support, 
which might take forever.
(17:10:21) anistor: I also started an article and short tutorial on 
using remote infinispan with plain serialization/jboss marshalling with 
query et al. no protobuf involved
(17:10:46) anistor: this deserves a blog post too. coming soon
(17:11:14) anistor: that's about all
(17:12:15) anistor: this week I plan to fix whatever is not working in 
the deployment of lucene analyzers in wildfly. I hit a wall that requires 
debugging wildfly and I'm a bit stuck. adding more logging did not help ...
(17:12:43) anistor: #topic dberindei
(17:13:34) dberindei: I haven't attended the last meeting as I was on 
PTO on Monday
(17:14:02) dberindei: I finally opened a (preview) PR for ISPN-7919
(17:14:03) jbossbot: jira [ISPN-7919] Expose ResponseCollector in the 
RpcManager interface [Pull Request Sent (Unresolved) Task, Major, Core, 
Dan Berindei] https://issues.jboss.org/browse/ISPN-7919
(17:14:25) wsiqueir-brb is now known as wsiqueir
(17:15:20) dberindei: in some ways it's a lot less than I wanted to 
change, because I'm still using a MapResponseCollector for most RPCs
(17:15:33) dberindei: in other ways maybe I changed too much, because I 
had too many test failures to fix :)
(17:16:01) dberindei: I also wrote the usual random PR comments
(17:16:29) dberindei: and I made the xsite tests run in parallel with 
ISPN-5476
(17:16:30) jbossbot: jira [ISPN-5476] Cross-site tests should run in 
parallel [Pull Request Sent (Unresolved) Task, Major, Core/Cross-Site 
Replication/Test Suite - Core, Dan Berindei] 
https://issues.jboss.org/browse/ISPN-5476
(17:16:48) dberindei: now waiting for another run in CI
(17:17:37) dberindei: I'm now trying to figure out what's still wrong 
with my ISPN-7997 PR, because it seems to break ScatteredStreamIteratorTest
(17:17:38) jbossbot: jira [ISPN-7997] 
DistributedStreamIteratorTest.testLocallyForcedStream random failure 
[Pull Request Sent (Unresolved) Bug, Critical, Test Suite - Core, Dan 
Berindei] https://issues.jboss.org/browse/ISPN-7997
(17:18:03) dberindei: that's it for me, karesti next?
(17:18:21) karesti: yes, thank you dberindei
(17:18:28) karesti: #topic karesti
(17:18:44) ttarrant: 654523
(17:18:46) ttarrant: 279432
(17:19:28) ttarrant: 862914
(17:19:50) ttarrant: 611346
(17:19:52) ttarrant: 190773
(17:19:54) ttarrant: 048715
(17:20:05) anistor: ttarrant: lucky or unlucky numbers?
(17:21:19) karesti: last week I was stuck making hotrod multimap work, I 
managed to unblock and I will probably open a PR soon. Meanwhile rvansa 
came back and thank you for your reviews etc ! so we merged ISPN-7752
(17:21:20) jbossbot: jira [ISPN-7752] Merge [Pull Request Sent 
(Unresolved) Sub-task, Major, Katia Aresti] 
https://issues.jboss.org/browse/ISPN-7752
(17:21:23) ttarrant: anistor, :)
(17:21:27) vblagoje: it is his yubikey
(17:22:36) karesti: I reported rvansa and other's comments on 
https://github.com/infinispan/infinispan/pull/5271
(17:22:37) jbossbot: git pull req [infinispan] (open) Katia Aresti 
ISPN-7993 Encoding support on functional maps 
https://github.com/infinispan/infinispan/pull/5271
(17:22:38) jbossbot: jira [ISPN-7993] Functional commands don't support 
Data convertions [Pull Request Sent (Unresolved) Feature Request, Major, 
Core, Katia Aresti] https://issues.jboss.org/browse/ISPN-7993
(17:24:32) karesti: and 
https://github.com/infinispan/infinispan/pull/5193 can be reviewed and 
merged just after 7993. I would like to do it this week. Embedded 
multimap is experimental and a separate building block, so even if it's 
not perfect yet, this can be easily changed and improved
(17:25:09) rvansa: karesti: isn't 7993 blocked by Gustavo?
(17:25:23) karesti: rvansa I don't know if this can be merged 

Re: [infinispan-dev] Write-only commands

2017-06-30 Thread Adrian Nistor


Dan, drawing the conversation into absurdity is not useful.



On 06/29/2017 03:36 PM, Dan Berindei wrote:
> On Thu, Jun 29, 2017 at 2:19 PM, Radim Vansa <rva...@redhat.com> wrote:
>> On 06/29/2017 11:16 AM, Dan Berindei wrote:
>>> On Thu, Jun 29, 2017 at 11:53 AM, Radim Vansa <rva...@redhat.com> wrote:
>>>> On 06/28/2017 04:20 PM, Dan Berindei wrote:
>>>>> On Wed, Jun 28, 2017 at 2:17 PM, Radim Vansa <rva...@redhat.com> wrote:
>>>>>> On 06/28/2017 10:40 AM, Dan Berindei wrote:
>>>>>>> On Wed, Jun 28, 2017 at 10:17 AM, Radim Vansa <rva...@redhat.com> wrote:
>>>>>>>> On 06/27/2017 03:54 PM, Dan Berindei wrote:
>>>>>>>>> On Tue, Jun 27, 2017 at 2:43 PM, Adrian Nistor <anis...@redhat.com> 
>>>>>>>>> wrote:
>>>>>>>>>> I've said this in a previous thread on this same issue, I will 
>>>>>>>>>> repeat myself
>>>>>>>>>> as many times as needed.
>>>>>>>>>>
>>>>>>>>>> Continuous queries require the previous value itself, not just 
>>>>>>>>>> knowledge of
>>>>>>>>>> the type of the previous value. Strongly typed caches solve no 
>>>>>>>>>> problem here.
>>>>>>>>>>
>>>>>>>>>> So if we half-fix query but leave CQ broken I will be half-happy 
>>>>>>>>>> (ie. very
>>>>>>>>>> depressed) :)
>>>>>>>>>>
>>>>>>>>>> I'd remove these commands completely or possibly remove them just 
>>>>>>>>>> from
>>>>>>>>>> public API and keep them internal.
>>>>>>>>>>
>>>>>>>>> +1 to remove the flags from the public API. Most of them are not safe
>>>>>>>>> for applications to use, and ignoring them when they can lead to
>>>>>>>>> inconsistencies would make them useless.
>>>>>>>>>
>>>>>>>>> E.g. the whole point of SKIP_INDEX_CLEANUP is that the cache doesn't
>>>>>>>>> know when it is safe to skip the delete statement, and it relies on
>>>>>>>>> the application making a (possibly wrong) choice.
>>>>>>>>>
>>>>>>>>> IGNORE_RETURN_VALUES should be safe to use, and we actually recommend
>>>>>>>>> that applications use it right now. If query or listeners need the
>>>>>>>>> previous value, then we should load it internally, but hide it from
>>>>>>>>> the user.
>>>>>>>>>
>>>>>>>>> But removing it opens another discussion: should we replace it in the
>>>>>>>>> public API with a new method AdvancedCache.ignoreReturnValues(), or
>>>>>>>>> should we make it the default and add a method
>>>>>>>>> AdvancedCache.forceReturnPreviousValues()?
>>>>>>>> Please don't derail the thread.
>>>>>>>>
>>>>>>> I don't think I'm derailing the thread: IGNORE_PREVIOUS_VALUES also
>>>>>>> breaks the previous value for listeners, even if the QueryInterceptor
>>>>>>> removes it from write commands. And it is public (+recommended) API,
>>>>>>> in fact most if not all of our performance tests use it.
>>>>>> That's just a flawed implementation. IPV is documented to be a 'safe'
>>>>>> flag that should affect mostly primary -> origin replication, all the
>>>>>> other is implementation. And we can fix that. Users should *not* expect
>>>>>> that it e.g. skips loading from a cache store. We have already removed
>>>>>> the modes that would be broken-by-design.
>>>>>>
>>>>> I think you're confusing IGNORE_RETURN_VALUES with SKIP_REMOTE_LOOKUP
>>>>> here. The IRV javadoc doesn't say anything about remote lookups, only
>>>>> SRL does.
>>>> No, I am not; While IRV does not mention the replication, it's said to
>>>> be 'safe'. So omitting the primary -> origin replication is basically
>>>> all it can do when listeners are in place. You're right that I have
>>>> missed the second part in SRL talking about

Re: [infinispan-dev] Feedback for PR 5233 needed

2017-06-29 Thread Adrian Nistor
People, don't be shy! The PR is in now, but things can still change 
based on your feedback. We still have two weeks until we release the Final.

On 06/29/2017 03:45 PM, Adrian Nistor wrote:
> This PR [1] adds a new approach for defining the compat marshaller class
> and the indexed entity classes (in server), and the same approach could
> be used in the future for deployment of encoders, lucene analyzers and
> possibly other code bits that a user would want to add to a server in
> order to implement an extension point that we support.
>
> Your feedback is welcome!
>
> [1] https://github.com/infinispan/infinispan/pull/5233
>




[infinispan-dev] Feedback for PR 5233 needed

2017-06-29 Thread Adrian Nistor
This PR [1] adds a new approach for defining the compat marshaller class 
and the indexed entity classes (in server), and the same approach could 
be used in the future for deployment of encoders, lucene analyzers and 
possibly other code bits that a user would want to add to a server in 
order to implement an extension point that we support.

Your feedback is welcome!

[1] https://github.com/infinispan/infinispan/pull/5233

___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev


Re: [infinispan-dev] Write-only commands

2017-06-27 Thread Adrian Nistor
The needs of continuous query are stronger than, and include, the needs 
of normal query.
So by fixing what is needed for continuous query we can all live a happy 
life without needing to provide "*yet another* form of previous-version 
loading". The reverse is not true.
You'll have to convince me this extra mechanism's added complexity is 
worth the effort :)


On 06/28/2017 12:00 AM, Sanne Grinovero wrote:
> On 27 June 2017 at 13:43, Adrian Nistor <anis...@redhat.com> wrote:
>> I've said this in a previous thread on this same issue, I will repeat myself
>> as many times as needed.
>>
>> Continuous queries require the previous value itself, not just knowledge of
>> the type of the previous value. Strongly typed caches solve no problem here.
> Why do you need to repeat this, Radim already captured this
> requirement for CQ in the premise of this very thread?
>
>> So if we half-fix query but leave CQ broken I will be half-happy (i.e. very
>> depressed) :)
> There must be a misunderstanding. I merely highlighted the needs for
> indexed queries, and pointed out that it might be useful to have *yet
> another* form of previous-version loading as this specific
> circumstance would just need some metadata. I'd never suggest to
> replace or remove the capability to load the "full" previous version,
> definitely not meaning to suggest to break CQ and many other essential
> use cases.
>
>> I'd remove these commands completely or possibly remove them just from
>> public API and keep them internal.
>>
>> Adrian
>>
>>
>>
>> On 06/27/2017 01:28 PM, Sanne Grinovero wrote:
>>
>>
>>
>> On 27 Jun 2017 10:13, "Radim Vansa" <rva...@redhat.com> wrote:
>>
>> Hi,
>>
>> I am working on entry version history (again). In Como we've discussed
>> that previous values are needed for (continuous) query and reliable
>> listeners,
>>
>>
>> Index based queries also require the previous value on a write - unless we
>> can get "strongly typed caches" giving guarantees about the class to
>> represent the content of a cache to be unique.
>>
>> Essentially we only need to know the type of the previous object. It might
>> be worth having a way to load the type metadata of the previous value only.
>>
>> so I wonder what should we do with functional write-only
>> commands. These are different to commands with flags, because flags
>> (other than ignore return value) are expected to break something.
>>
>>
>> Sorry, I hope not to derail the thread, but let's remember that we hope to
>> evolve beyond "flags are expected to break stuff"; we never got to it, but
>> search the mailing list.
>>
>> Since flags are exposed to the user I would rather they're not allowed to
>> break things.
>> Could they be treated as hints? Ignore the flag (and warn?) if the used
>> configuration/integrations veto them.
>>
>> Alternatively, let's remove them from API. Remember "The Jokre" POC was
>> intentionally designed to explore pushing the limits on performance w/o end
>> users having to solve puzzles, such as learning details about these flags
>> and their possible side effects.
>>
>> So assuming they become either "safe" or internal, maybe you can take
>> advantage of them?
>>
>> I see
>> the available options as:
>>
>> 1) run write-only commands 'optimized', ignoring any querying and such
>> (warn user that he will break it)
>>
>> 2) run write-only without any optimization, rendering them useless
>>
>> 3) detect when querying is set up (ignoring listeners and maybe other
>> stuff that could get broken)
>>
>>
>> Might be useful for making a POC work, but I believe query is very
>> likely to be enabled in practice.
>> Having an either / or switch for different features in Infinispan will make
>> it harder to use and understand, so I'd rather see work on the right design
>> as taking temporary shortcuts risks baking into stone features which we
>> later struggle to fix or maintain.
>>
>>
>> 4) remove write-only commands completely (and probably functional
>> listeners as well because these will lose their purpose)
>>
>>
>> +1 to remove "unconditional writes", at least an entry version check should
>> be applied.
>> I believe we had already pointed out this would eventually happen, pretty
>> much for the reasons you're hitting now.
>>
>>
>> Right now I am inclined towards 4). There could be some internal use
>> (e.g. multimaps) tha
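Option 3) above, detecting when components that need the previous value are set up, can be sketched as a write path that silently degrades from write-only to read-write. This is a self-contained toy (all names are illustrative, not Infinispan API), assuming the only consumer of the old value is an indexer:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.BiConsumer;

// Toy sketch of option 3): a write-only fast path that falls back to a full
// read-write when query indexing (which needs the previous value) is enabled.
public class WriteOnlySketch {
    private final Map<String, String> store = new HashMap<>();
    private final boolean queryEnabled;
    private final BiConsumer<String, String> indexer; // (oldValue, newValue)

    WriteOnlySketch(boolean queryEnabled, BiConsumer<String, String> indexer) {
        this.queryEnabled = queryEnabled;
        this.indexer = indexer;
    }

    void writeOnly(String key, String value) {
        if (queryEnabled) {
            // degraded path: must surface the previous value for the index
            String old = store.put(key, value);
            indexer.accept(old, value);
        } else {
            // true write-only path: never observes the previous value
            store.put(key, value);
        }
    }
}
```

The objection raised in the thread applies directly to this sketch: since query is very likely to be enabled, the fast branch would rarely run, which is why option 4) ends up looking more attractive.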

Re: [infinispan-dev] Write-only commands

2017-06-27 Thread Adrian Nistor
I've said this in a previous thread on this same issue, I will repeat 
myself as many times as needed.


Continuous queries require the previous value itself, not just knowledge 
of the type of the previous value. Strongly typed caches solve no 
problem here.


So if we half-fix query but leave CQ broken I will be half-happy (i.e. 
very depressed) :)


I'd remove these commands completely or possibly remove them just from 
public API and keep them internal.


Adrian


On 06/27/2017 01:28 PM, Sanne Grinovero wrote:



On 27 Jun 2017 10:13, "Radim Vansa" > wrote:


Hi,

I am working on entry version history (again). In Como we've discussed
that previous values are needed for (continuous) query and reliable
listeners, 



Index based queries also require the previous value on a write - 
unless we can get "strongly typed caches" giving guarantees about the 
class to represent the content of a cache to be unique.


Essentially we only need to know the type of the previous object. It 
might be worth having a way to load the type metadata of the previous 
value only.


so I wonder what should we do with functional write-only
commands. These are different to commands with flags, because flags
(other than ignore return value) are expected to break something.


Sorry, I hope not to derail the thread, but let's remember that we hope to 
evolve beyond "flags are expected to break stuff"; we never got to it, 
but search the mailing list.


Since flags are exposed to the user I would rather they're not allowed 
to break things.
Could they be treated as hints? Ignore the flag (and warn?) if the 
used configuration/integrations veto them.


Alternatively, let's remove them from API. Remember "The Jokre" POC 
was intentionally designed to explore pushing the limits on 
performance w/o end users having to solve puzzles, such as learning 
details about these flags and their possible side effects.


So assuming they become either "safe" or internal, maybe you can take 
advantage of them?


I see
the available options as:

1) run write-only commands 'optimized', ignoring any querying and such
(warn user that he will break it)

2) run write-only without any optimization, rendering them useless

3) detect when querying is set up (ignoring listeners and maybe other
stuff that could get broken)


Might be useful for making a POC work, but I believe query is very 
likely to be enabled in practice.
Having an either / or switch for different features in Infinispan will 
make it harder to use and understand, so I'd rather see work on the 
right design as taking temporary shortcuts risks baking into stone 
features which we later struggle to fix or maintain.



4) remove write-only commands completely (and probably functional
listeners as well because these will lose their purpose)


+1 to remove "unconditional writes", at least an entry version check 
should be applied.
I believe we had already pointed out this would eventually happen, 
pretty much for the reasons you're hitting now.



Right now I am inclined towards 4). There could be some internal use
(e.g. multimaps) that could use 1) which is run without a fancy setup,
though, but it's asking for trouble.


I agree!

Thanks


WDYT?

Radim

--

Radim Vansa >
JBoss Performance Team

___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org 
https://lists.jboss.org/mailman/listinfo/infinispan-dev





___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev



___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev
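Adrian's point that continuous query needs the previous *value* itself, not just its type, follows from how CQ events are computed: whether an update is a JOINING or LEAVING event requires evaluating the predicate against both the old and the new value. A minimal self-contained model (illustrative only, not the Infinispan continuous-query implementation):

```java
import java.util.function.Predicate;

// Why CQ needs the old value: the event kind is a function of
// match(oldValue) and match(newValue) together.
public class CqSketch {
    enum Event { JOINING, LEAVING, UPDATED, NONE }

    static Event onWrite(Predicate<Integer> query, Integer oldValue, Integer newValue) {
        boolean was = oldValue != null && query.test(oldValue);
        boolean is = newValue != null && query.test(newValue);
        if (!was && is) return Event.JOINING;
        if (was && !is) return Event.LEAVING;
        return was ? Event.UPDATED : Event.NONE;
    }
}
```

Knowing only the old value's type cannot distinguish LEAVING from UPDATED here, which is why strongly typed caches solve the indexed-query case but not the CQ case.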

Re: [infinispan-dev] Important feedback for transcoding work - Re: Quick fix for ISPN-7710

2017-06-15 Thread Adrian Nistor
Galder, I've seen AddProtobufTask in March or April when you mentioned 
this issue on the devlist; that approach can work for protostream 
marshallers or any other code bits that the Cache does not depend on 
during startup, and which can be deployed anytime later. In this 
category we currently have: filters and converters. These are currently 
deployed with the help of a DeploymentUnitProcessor, but we could have 
done it using a ServerTask as well. Now that we took the route of DUP, I 
think we should continue in a consistent manner and use it for other 
'deployables' we identify from now on, i.e. the protobuf entity marshallers.

As for encoders, lucene analyzers, compatibility marshaller, event 
marshaller - these are all needed during cache startup. We need to come 
up with something for these, so I propose to look them up using the 
"moduleId:slot:className" convention.


On 06/15/2017 03:40 PM, Galder Zamarreño wrote:
> @Gustavo, some important info for your transcoding work below:
>
> --
> Galder Zamarreño
> Infinispan, Red Hat
>
>> On 15 Jun 2017, at 11:05, Adrian Nistor <anis...@redhat.com> wrote:
>>
>> Hi Galder,
>>
>> this fix is acceptable for now as it quickly enables users to use 
>> CompatibilityProtoStreamMarshaller (provided by Infinispan), but in the long 
>> run we would want users to be able to specify a custom marshaller class that 
>> comes from a user supplied module or even a deployment - the general case.
>>
>> With the introduction of encoders and deprecation of compat mode we still 
>> have the same class loading issue in the general case. So I propose to 
>> refine a bit our approach and instead of specifying just a class name we 
>> should use a naming schema like "moduleId:slot:className", giving users the 
>> ability to specify a class that comes from a different module or deployment. 
>> I'm currently experimenting with this. I'll come back with results soon.
>>
>> There are also other code bits that need to be deployed in the server ASAP: 
>> protostream entity marshallers, lucene analyzers. I'm thinking these could 
>> all benefit from the same solution.
> I was able to easily get around the issue of deploying protostream entity 
> marshallers by simply adding a server tasks that did that:
>
> https://github.com/infinispan-demos/swiss-transport-datagrid/blob/master/analytics/analytics-domain/src/main/java/delays/java/stream/proto/AddProtobufTask.java
>
> In fact, that server task serves as a way to add domain POJOs to the 
> system... So when the server starts receiving data, it can deserialize it 
> without problems.
>
> However, there's a potential problem here you might want to consider in your 
> work: If I deploy the add protobuf task, write data, then redeploy the add 
> protobuf task, then retrieve some data, the system blows up because the 
> classloader of the domain POJOs has changed. So you'd start seeing 
> ClassCastException errors...
>
> That's why I think that even though in the past we'd store objects in 
> deserialized form, this could be problematic because you're committing to 
> domain objects with a given classloader...
>
> The more I think about it, the more I think we should keep data only in 
> binary format in the server. IOW, we should never try to keep it in 
> deserialized format. That way, no matter how many times the domain objects 
> are redeployed, assuming no binary-incompatible changes, the lazy transcoding 
> would work without problems.
>
>> Btw, what is the relation between ISPN-7814 and ISPN-7710 ?
> The relationship between them is explained here:
>
> https://github.com/infinispan-demos/swiss-transport-datagrid#infinispan-server-docker-image
>
> I would strongly recommend that you give that demo repository a try, you 
> might get new ideas on the work you're doing.
>
> Cheers,
>
>> Adrian
>>
>> On 06/14/2017 06:35 PM, Galder Zamarreño wrote:
>>> Hi all,
>>>
>>> I'm seeing more and more people trying to do stuff like I did in [1] WRT to 
>>> running server tasks in server.
>>>
>>> One of the blockers is [2]. I know we have transcoding coming up but I 
>>> wondered if we could implement the quick hack of referencing 
>>> remote-query.server module from root org.infinispan module.
>>>
>>> So, in essence, adding the following to org/infinispan/main/module.xml:
>>>
>>>
>>>
>>> Once ISPN-7710 is in place, along with ISPN-7814, users can run the demos 
>>> in [1] without a custom server build.
>>>
>>> Cheers,
>>>
>>> [1] https://github.com/infinispan-demos/swiss-transport-datagrid
>>> [2] https://issues.jboss.org/browse/ISPN-7710
>>> --
>>> Galder Zamarreño
>>> Infinispan, Red Hat
>>>
>>>
>>> ___
>>> infinispan-dev mailing list
>>> infinispan-dev@lists.jboss.org
>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev
>>

___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev
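The "moduleId:slot:className" naming schema proposed above, combined with Sanne's convention of falling back to the TCCL when the module part is missing, can be sketched as follows. This is a self-contained illustration; the actual module-loading branch would use the JBoss Modules API, which is only referenced in a comment here, and all names are assumptions, not the eventual implementation:

```java
// Sketch of parsing "moduleId:slot:className" vs a plain class name,
// resolving plain names against the thread context classloader (TCCL).
public class ClassRef {
    final String moduleId; // null => resolve via TCCL
    final String slot;
    final String className;

    ClassRef(String spec) {
        String[] parts = spec.split(":");
        if (parts.length == 3) {
            moduleId = parts[0];
            slot = parts[1];
            className = parts[2];
        } else if (parts.length == 1) {
            moduleId = null;
            slot = null;
            className = parts[0];
        } else {
            throw new IllegalArgumentException(
                "Expected className or moduleId:slot:className: " + spec);
        }
    }

    Class<?> load() throws ClassNotFoundException {
        if (moduleId == null) {
            // plain class name: look into the deployment's TCCL
            return Class.forName(className, false,
                Thread.currentThread().getContextClassLoader());
        }
        // moduleId given: resolve exclusively against that module's loader,
        // e.g. via org.jboss.modules.Module (not sketched here)
        throw new UnsupportedOperationException("module loading not sketched");
    }
}
```

The same reference format would then work for compat marshallers, encoders, lucene analyzers, and other 'deployables' mentioned in the thread.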

Re: [infinispan-dev] Quick fix for ISPN-7710

2017-06-15 Thread Adrian Nistor
Yup. That's exactly what I'm proposing. But today I stopped working on 
it because of some PR reviewing for the release. I hope to restart it 
and have a PoC by tomorrow.

On 06/15/2017 04:06 PM, Sanne Grinovero wrote:
> On 15 June 2017 at 10:05, Adrian Nistor <anis...@redhat.com> wrote:
>> Hi Galder,
>>
>> this fix is acceptable for now as it quickly enables users to use
>> CompatibilityProtoStreamMarshaller (provided by Infinispan), but in the
>> long run we would want users to be able to specify a custom marshaller
>> class that comes from a user supplied module or even a deployment - the
>> general case.
>>
>> With the introduction of encoders and deprecation of compat mode we
>> still have the same class loading issue in the general case. So I
>> propose to refine a bit our approach and instead of specifying just a
>> class name we should use a naming schema like "moduleId:slot:className",
>> giving users the ability to specify a class that comes from a different
>> module or deployment. I'm currently experimenting with this. I'll come
>> back with results soon.
> Great idea!
> Can you also figure out how to identify loading something from the
> user deployment ?
> Maybe we could apply a simple convention: when the module
> identification is missing we look into the TCCL, while when it's
> provided we look in that specific module (exclusively)?
>
> Thanks,
> Sanne
>
>> There are also other code bits that need to be deployed in the server
>> ASAP: protostream entity marshallers, lucene analyzers. I'm thinking
>> these could all benefit from the same solution.
>>
>> Btw, what is the relation between ISPN-7814 and ISPN-7710 ?
>>
>> Adrian
>>
>> On 06/14/2017 06:35 PM, Galder Zamarreño wrote:
>>> Hi all,
>>>
>>> I'm seeing more and more people trying to do stuff like I did in [1] WRT to 
>>> running server tasks in server.
>>>
>>> One of the blockers is [2]. I know we have transcoding coming up but I 
>>> wondered if we could implement the quick hack of referencing 
>>> remote-query.server module from root org.infinispan module.
>>>
>>> So, in essence, adding the following to org/infinispan/main/module.xml:
>>>
>>> 
>>>
>>> Once ISPN-7710 is in place, along with ISPN-7814, users can run the demos 
>>> in [1] without a custom server build.
>>>
>>> Cheers,
>>>
>>> [1] https://github.com/infinispan-demos/swiss-transport-datagrid
>>> [2] https://issues.jboss.org/browse/ISPN-7710
>>> --
>>> Galder Zamarreño
>>> Infinispan, Red Hat
>>>
>>>
>>> ___
>>> infinispan-dev mailing list
>>> infinispan-dev@lists.jboss.org
>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev
>>
>> ___
>> infinispan-dev mailing list
>> infinispan-dev@lists.jboss.org
>> https://lists.jboss.org/mailman/listinfo/infinispan-dev


___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev

Re: [infinispan-dev] Quick fix for ISPN-7710

2017-06-15 Thread Adrian Nistor
Hi Galder,

this fix is acceptable for now as it quickly enables users to use 
CompatibilityProtoStreamMarshaller (provided by Infinispan), but in the 
long run we would want users to be able to specify a custom marshaller 
class that comes from a user supplied module or even a deployment - the 
general case.

With the introduction of encoders and deprecation of compat mode we 
still have the same class loading issue in the general case. So I 
propose to refine a bit our approach and instead of specifying just a 
class name we should use a naming schema like "moduleId:slot:className", 
giving users the ability to specify a class that comes from a different 
module or deployment. I'm currently experimenting with this. I'll come 
back with results soon.

There are also other code bits that need to be deployed in the server 
ASAP: protostream entity marshallers, lucene analyzers. I'm thinking 
these could all benefit from the same solution.

Btw, what is the relation between ISPN-7814 and ISPN-7710 ?

Adrian

On 06/14/2017 06:35 PM, Galder Zamarreño wrote:
> Hi all,
>
> I'm seeing more and more people trying to do stuff like I did in [1] WRT to 
> running server tasks in server.
>
> One of the blockers is [2]. I know we have transcoding coming up but I 
> wondered if we could implement the quick hack of referencing 
> remote-query.server module from root org.infinispan module.
>
> So, in essence, adding the following to org/infinispan/main/module.xml:
>
>
>
> Once ISPN-7710 is in place, along with ISPN-7814, users can run the demos in 
> [1] without a custom server build.
>
> Cheers,
>
> [1] https://github.com/infinispan-demos/swiss-transport-datagrid
> [2] https://issues.jboss.org/browse/ISPN-7710
> --
> Galder Zamarreño
> Infinispan, Red Hat
>
>
> ___
> infinispan-dev mailing list
> infinispan-dev@lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/infinispan-dev


___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev

Re: [infinispan-dev] Deprecation of Index.LOCAL

2017-05-15 Thread Adrian Nistor

+1 to kill it.


On 05/15/2017 12:09 PM, Gustavo Fernandes wrote:
Hi, the Index.LOCAL setting was introduced eons ago to allow indexing 
to occur once cluster-wide;
thus it's recommended when using an IndexManager such as 
InfinispanIndexManager and ElasticsearchIndexManager that is shared 
among all nodes.


Furthermore, Index.LOCAL suits ClusteredQueries [1] where each node 
has its own "private" index and the query is broadcast to each 
individual node and aggregated in the caller before returning the 
results.


The issue with Index.LOCAL is when a command originates in a 
NON_OWNER (this happens in DIST caches), where there is no context 
available, which prevents obtaining the previous values needed by 
certain commands. This makes fixing [2] complex, as it requires 
fiddling with more than a couple of interceptors, and it'd require 
remote fetching of values. This extra fetch could be avoided if 
indexing always occurred on the owners.



tl;dr

The proposal is to deprecate Index.LOCAL, and map it internally to 
Index.PRIMARY_OWNER
Everything should work as before, except if someone is relying on finding 
a certain entry indexed in a specific local index where the put was 
issued: the ClusteredQuery test suite does that, but I don't think 
this is a realistic use case.


Any objections?

Thanks,
Gustavo


[1] 
http://infinispan.org/docs/stable/user_guide/user_guide.html#query.clustered-query-api

[2] https://issues.jboss.org/browse/ISPN-7806


___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev



___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev
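The deprecation proposal above amounts to a one-line internal mapping: old configurations using Index.LOCAL keep parsing, but behave as Index.PRIMARY_OWNER. A minimal sketch (the enum values mirror the ones discussed; this is not the actual Infinispan configuration code):

```java
// Deprecated LOCAL is mapped internally to PRIMARY_OWNER so indexing
// always happens on an owner, never on a NON_OWNER originator.
public class IndexMode {
    enum Index { NONE, LOCAL, PRIMARY_OWNER, ALL }

    static Index resolve(Index configured) {
        return configured == Index.LOCAL ? Index.PRIMARY_OWNER : configured;
    }
}
```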

Re: [infinispan-dev] Jenkins migration

2017-04-23 Thread Adrian Nistor
I also do not see much value in the current state of Blue Ocean. Better 
stick with the default UI.


On 04/21/2017 06:11 PM, Dan Berindei wrote:
Looks like the invalid "control characters from U+0000 through U+001F" 
are the ANSI escape codes used by WildFly to color its output. So we 
might be able to work around this by disabling the color output in 
WildFly in our integration tests.


OTOH I'm fine with removing the Blue Ocean plugin for now, because its 
usability is sometimes worse than the default UI's. E.g. when I click 
on the build results link in GitHub, 99.999% of the time I want to see 
the test results, but Blue Ocean thinks it's much better to show me 
some circles with question marks and exclamation points instead, and 
then keep me waiting for half a minute after I click on the tests link :)


Cheers
Dan


On Fri, Apr 21, 2017 at 4:55 PM, Sebastian Laskawiec 
> wrote:


Hey!

As you probably have heard I'm migrating our TeamCity installation
[1] into Jenkins (temporarily in [2]).

So far I've managed to migrate all Infinispan builds (with pull
requests), C++/C# clients, JGroups and JGroups Kubernetes. I
decided to use the new Pipeline [3] approach for the builds and
keep the configuration along with the code (here's an example [4]).

The configuration builds /refs/pull//head/ for Pull Requests
at the moment. I will switch it back to /refs/pull//merge/ as
soon as our PR queue size is ~20.

Current pain points are:

  * Blue Ocean UI doesn't show tests. It has been reported in [5].
The workaround is to use the old Jenkins UI.
  * Windows VM doesn't start on demand (together with Vittorio we
will be working on this)

The rough plan is:

  * Apr 24th, move other 2 agents from TeamCity to Jenkins
  * Apr 24th, redirect the ci.infinispan.org domain
  * May 4th, remove TeamCity

Please let me know if you have any questions or concerns.

Thanks,
Sebastian

[1] http://ci.infinispan.org/
[2] http://ec2-52-215-14-157.eu-west-1.compute.amazonaws.com

[3] https://jenkins.io/doc/book/pipeline/

[4]
https://github.com/infinispan/infinispan/blob/master/Jenkinsfile

[5] https://issues.jenkins-ci.org/browse/JENKINS-43751

-- 


SEBASTIAN ŁASKAWIEC

INFINISPAN DEVELOPER

Red Hat EMEA




___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org 
https://lists.jboss.org/mailman/listinfo/infinispan-dev





___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev



___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev
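If disabling color at the source proves awkward, the workaround Dan mentions can also be applied as a post-processing step: the offending control characters are ANSI CSI color sequences (ESC = U+001B), which a simple regex can strip from log lines before they reach the XML test report. A sketch, not tied to any particular Jenkins plugin:

```java
import java.util.regex.Pattern;

// Strips ANSI color sequences such as "\u001B[31m" or "\u001B[0;1m"
// from a log line, leaving plain text that is valid in XML reports.
public class AnsiStrip {
    private static final Pattern ANSI = Pattern.compile("\u001B\\[[;\\d]*m");

    static String strip(String line) {
        return ANSI.matcher(line).replaceAll("");
    }
}
```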

Re: [infinispan-dev] Infinispan Query API simplification

2017-04-20 Thread Adrian Nistor
If we are discussing how an Ickle query is defined, I insist we need 
identical APIs. I don't want to go over the long list of benefits of 
that. Let's just say it's trivial to implement because it's done already 
:). And now we also have a string DSL, which makes it even easier.

We do not have ATM a unified way of executing an Ickle query. As Tristan 
showed in the mail starting this thread, the incantations are slightly 
different. And I'd like to have that unified too.

The BasicCache/RemoteCache mishap is a textbook demonstration of a leaky 
abstraction, and is unrelated. It should not stop us from unifying query 
execution. But I would probably not add a 'query' method to the cache 
interfaces and would rather go with something similar to what Sanne 
proposed in a previous email in this thread (the so called 'Alternative 
direction').

On 04/20/2017 08:35 PM, Dan Berindei wrote:
> On Thu, Apr 20, 2017 at 5:06 PM, Tristan Tarrant  wrote:
>> On 20/04/2017 15:34, Dan Berindei wrote:
 How big is the DSL API surface (which will be brought into commons)?
>>>
>>> -1 from me to add anything in commons, I don't think allowing the
>>> users to query both embedded caches and remote caches with the same
>>> code is that important. I'd rather go the opposite way and remove the
>>> BasicCache interface completely.
>>
>> Actually, we've had requests for interchangeable APIs...
>>
>> So, according to your strategy we either have each feature implemented with
>> a divergent specific embedded or remote API, or each feature has its own
>> feature-api with two separate feature-embedded and feature-remote
>> implementations. Both plans sound terrible.
>>
> Would a divergent embedded vs remote API be that bad? If the
> functionality really is different, then I'd rather have different APIs
> than force 2 different things to use the same API.
>
> E.g. with BasicCache, IMO it would have been better to focus on the
> versioned conditional write operations, and remove all the
> non-versioned conditional write operations from RemoteCache. I'm sure
> we could have improved the versioned API a lot, but instead we worked
> mainly on the non-versioned API that we got from BasicCache.
>
>> Alternatively, we could go with an infinispan-api package (which Paul has
>> been advocating for a long time) which would contain the various interfaces.
>>
>>
>> Tristan
>>
>> --
>> Tristan Tarrant
>> Infinispan Lead
>> JBoss, a division of Red Hat
> ___
> infinispan-dev mailing list
> infinispan-dev@lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/infinispan-dev


___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev
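The 'unified query execution' Adrian asks for, without putting a query method on the cache interfaces themselves, could be as small as one functional interface over Ickle strings that both an embedded and a remote wrapper implement. A self-contained sketch with a dummy in-memory implementation standing in for a real Ickle engine; every name here is illustrative, not proposed API:

```java
import java.util.List;
import java.util.function.Predicate;
import java.util.stream.Collectors;

public class QuerySketch {
    // one entry point regardless of embedded vs remote transport
    interface Queryable<T> {
        List<T> query(String ickle);
    }

    // dummy 'embedded' implementation: pretends the Ickle string
    // compiled down to the supplied predicate
    static <T> Queryable<T> inMemory(List<T> data, Predicate<T> compiled) {
        return ickle -> data.stream().filter(compiled).collect(Collectors.toList());
    }
}
```

A third-party remote implementation would satisfy the same interface by shipping the Ickle string to the server, which is what makes client code interchangeable between the two.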


Re: [infinispan-dev] Infinispan Query API simplification

2017-04-20 Thread Adrian Nistor
+1 for interchangeable apis
+1 for infinispan-api module

On 04/20/2017 05:06 PM, Tristan Tarrant wrote:
> On 20/04/2017 15:34, Dan Berindei wrote:
>>> How big is the DSL API surface (which will be brought into commons)?
>> -1 from me to add anything in commons, I don't think allowing the
>> users to query both embedded caches and remote caches with the same
>> code is that important. I'd rather go the opposite way and remove the
>> BasicCache interface completely.
> Actually, we've had requests for interchangeable APIs...
>
> So, according to your strategy we either have each feature implemented
> with a divergent specific embedded or remote API, or each feature has
> its own feature-api with two separate feature-embedded and
> feature-remote implementations. Both plans sound terrible.
>
> Alternatively, we could go with an infinispan-api package (which Paul
> has been advocating for a long time) which would contain the various
> interfaces.
>
> Tristan
>

___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev


Re: [infinispan-dev] Proto file for indexed and non-indexed use case?

2017-04-04 Thread Adrian Nistor
yes!

On 04/04/2017 03:48 PM, Galder Zamarreño wrote:
> The cache for the second use case is already non-indexed. Is that enough to 
> make sure the annotations are ignored?
>
> Cheers,
> --
> Galder Zamarreño
> Infinispan, Red Hat
>
>> On 3 Apr 2017, at 18:58, Sanne Grinovero  wrote:
>>
>> Hi Galder,
>>
>> did you consider using a non-indexed cache for the second case?
>>
>> Thanks,
>> Sanne
>>
>>
>> On 3 April 2017 at 16:44, Galder Zamarreño  wrote:
>>> Hi Adrian,
>>>
>>> I had a question regarding proto files. I have a single domain of objects 
>>> that I want to use for two different use cases.
>>>
>>> In the first use case, I want the proto files to be indexed so I define the 
>>> comments and related @Indexed/@Field...etc annotations.
>>>
>>> In the second use case, I'm merely using proto files as a way to achieve 
>>> compatibility mode, and I don't want any indexing to be done at all (cache 
>>> is distributed with only compatibility and protostream marshaller enabled).
>>>
>>> Do I need a separate .proto file for this second use case where I remove 
>>> the commented sections that enable indexing? Or can I use the one for the 
>>> first use case? I really want to avoid any indexing happening in the second 
>>> use case since it'd slow down things for no reason.
>>>
>>> Cheers,
>>> --
>>> Galder Zamarreño
>>> Infinispan, Red Hat
>>>
>>>
>>> ___
>>> infinispan-dev mailing list
>>> infinispan-dev@lists.jboss.org
>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev
>> ___
>> infinispan-dev mailing list
>> infinispan-dev@lists.jboss.org
>> https://lists.jboss.org/mailman/listinfo/infinispan-dev
>
> ___
> infinispan-dev mailing list
> infinispan-dev@lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/infinispan-dev


___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev
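Since the @Indexed/@Field annotations mentioned above live inside protobuf comments, a non-indexed cache simply ignores them, which is why the same .proto file can serve both use cases. An illustrative schema assuming that comment-annotation convention (the message and field names are made up):

```protobuf
// book.proto -- illustrative only

/* @Indexed */
message Book {
    /* @Field */
    optional string title = 1;

    /* @Field */
    optional int32 publicationYear = 2;
}
```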

Re: [infinispan-dev] Proto file for indexed and non-indexed use case?

2017-04-03 Thread Adrian Nistor
Yup, that's how I'd do it.

On 04/03/2017 07:58 PM, Sanne Grinovero wrote:
> Hi Galder,
>
> did you consider using a non-indexed cache for the second case?
>
> Thanks,
> Sanne
>
>
> On 3 April 2017 at 16:44, Galder Zamarreño  wrote:
>> Hi Adrian,
>>
>> I had a question regarding proto files. I have a single domain of objects 
>> that I want to use for two different use cases.
>>
>> In the first use case, I want the proto files to be indexed so I define the 
>> comments and related @Indexed/@Field...etc annotations.
>>
>> In the second use case, I'm merely using proto files as a way to achieve 
>> compatibility mode, and I don't want any indexing to be done at all (cache 
>> is distributed with only compatibility and protostream marshaller enabled).
>>
>> Do I need a separate .proto file for this second use case where I remove the 
>> commented sections that enable indexing? Or can I use the one for the first 
>> use case? I really want to avoid any indexing happening in the second use 
>> case since it'd slow down things for no reason.
>>
>> Cheers,
>> --
>> Galder Zamarreño
>> Infinispan, Red Hat
>>
>>
>> ___
>> infinispan-dev mailing list
>> infinispan-dev@lists.jboss.org
>> https://lists.jboss.org/mailman/listinfo/infinispan-dev


___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev

[infinispan-dev] My update

2017-03-20 Thread Adrian Nistor
Hi all,

I've been on PTO most of last week and returned to active life only on 
Thursday. I've been hiding in the shadows exercising my writing talent 
(yuck) on documenting query and also cleaned up some jira issues:

ISPN-7300 fixing negative occurrence (-) when used with boolean expressions

ISPN-6713 RemoteQueryDslConditionsTest.testIsNullNumericWithProjection1 
no longer fails after Lucene upgrade

ISPN-7002 NPE in DelegatingQuery when deployed in Wildfly 10.1.0.Final

I also spent some time with ISPN-7580, which is a proposal from Teiid 
for modifying Protostream in order to allow dynamic entities, much like 
what OGM needs too. I like the idea and understand the need for this but 
so far the solution is a bit hacky so I'm reluctant to merge that change.

Adrian

___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev


Re: [infinispan-dev] CI failures

2017-02-01 Thread Adrian Nistor
Thanks!

On 02/01/2017 09:31 AM, Tristan Tarrant wrote:
> I have pushed a fix for this just now:
>
>
> https://github.com/infinispan/infinispan/commit/f317f8ae5173b34cfbb3205270db56da8fa5e262
>
> Tristan
>
> On 01/02/17 08:12, Adrian Nistor wrote:
>> All builds are failing now due to this exception:
>>
>> java.lang.RuntimeException: "WFLYCTL0158: Operation handler failed:
>> java.util.MissingResourceException: Can't find resource for bundle
>> java.util.PropertyResourceBundle, key
>> datagrid-infinispan.cache-container.sites-view" at
>> org.jboss.as.subsystem.test.SubsystemTestDelegate.validateDescriptionProviders(SubsystemTestDelegate.java:581)
>> at
>> org.jboss.as.subsystem.test.SubsystemTestDelegate.access$500(SubsystemTestDelegate.java:118)
>>
>> One such failure here
>> http://ci.infinispan.org/viewLog.html?buildId=48995=buildResultsDiv=Infinispan_MasterHotspotJdk8
>>
>> I suspect the culprit to be
>> https://github.com/infinispan/infinispan/pull/4769/
>>



[infinispan-dev] CI failures

2017-01-31 Thread Adrian Nistor
All builds are failing now due to this exception:

java.lang.RuntimeException: "WFLYCTL0158: Operation handler failed: 
java.util.MissingResourceException: Can't find resource for bundle 
java.util.PropertyResourceBundle, key 
datagrid-infinispan.cache-container.sites-view" at 
org.jboss.as.subsystem.test.SubsystemTestDelegate.validateDescriptionProviders(SubsystemTestDelegate.java:581)
 
at 
org.jboss.as.subsystem.test.SubsystemTestDelegate.access$500(SubsystemTestDelegate.java:118)

One such failure here 
http://ci.infinispan.org/viewLog.html?buildId=48995=buildResultsDiv=Infinispan_MasterHotspotJdk8

I suspect the culprit to be 
https://github.com/infinispan/infinispan/pull/4769/



Re: [infinispan-dev] New blog post

2016-12-13 Thread Adrian Nistor
Thanks for reminding me that. I see you tweeted it yesterday.

On 12/12/2016 06:55 PM, Galder Zamarreño wrote:
> Remember to tweet after blogging under @infinispan account.
>
> If anyone doesn't know the credentials, I can point you in the right 
> direction ;)
>
> Cheers,
> --
> Galder Zamarreño
> Infinispan, Red Hat
>
>> On 8 Dec 2016, at 16:50, Adrian Nistor <anis...@redhat.com> wrote:
>>
>> Hi all,
>>
>> I've just published a new blog post that briefly introduces Ickle, the query 
>> language of Infinispan [1]. This will be followed soon by another one on 
>> defining domain model schemas, configuring model indexing and analysis.
>>
>> Cheers,
>> Adrian
>>
>> [1]
>> http://blog.infinispan.org/2016/12/meet-ickle.html
>>
>>
>>



Re: [infinispan-dev] New blog post

2016-12-09 Thread Adrian Nistor
Hi Radim,

We already need them and almost have them. QueryFactory.create(String 
queryString) creates a Query object that can be executed multiple times 
with different params. The Query object could be considered 'prepared'. 
In theory.

In reality this does not work right now because the internals are only 
implemented halfway. Thanks for reminding me to finish it :)
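In the meantime, the intended usage pattern (create the query once, then bind parameters and execute it repeatedly) can be sketched with a toy class. `PreparedQuery`, `setParameter` and `render` below are illustrative names only, not the real `QueryFactory` API:

```java
import java.util.HashMap;
import java.util.Map;

// Toy model of a "prepared" query: created (parsed) once, then bound with
// named parameters (":name") and executed many times. Illustrative only.
final class PreparedQuery {
    private final String queryString;
    private final Map<String, String> params = new HashMap<>();

    PreparedQuery(String queryString) {
        // A real engine would parse the string here, exactly once.
        this.queryString = queryString;
    }

    PreparedQuery setParameter(String name, Object value) {
        params.put(name, String.valueOf(value));
        return this;
    }

    // Substitutes the bound parameters into the query text; a real engine
    // would instead reuse the parsed form with the new parameter values.
    String render() {
        String result = queryString;
        for (Map.Entry<String, String> e : params.entrySet()) {
            result = result.replace(":" + e.getKey(), e.getValue());
        }
        return result;
    }
}
```

The point of the "prepared" contract is that only parameter binding and execution happen per request; the parsing cost is paid once.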

Adrian

On 12/08/2016 06:20 PM, Radim Vansa wrote:
> Nice! I wonder when we'll find out that we need prepared statements, though.
>
> R.
>
> On 12/08/2016 05:11 PM, Sanne Grinovero wrote:
>> Thank you so much and congratulations Adrian! That's a huge leap forward
>>
>> -- Sanne
>>
>> On 8 December 2016 at 15:57, Adrian Nistor <anis...@redhat.com> wrote:
>>> Wrong link?
>>> Here is the correct one: http://blog.infinispan.org/2016/12/meet-ickle.html
>>>
>>>
>>> On 12/08/2016 05:50 PM, Adrian Nistor wrote:
>>>
>>> Hi all,
>>>
>>> I've just published a new blog post that briefly introduces Ickle, the query
>>> language of Infinispan [1]. This will be followed soon by another one on
>>> defining domain model schemas, configuring model indexing and analysis.
>>>
>>> Cheers,
>>> Adrian
>>>
>>> [1] http://blog.infinispan.org/2016/12/meet-ickle.html
>>>
>>>
>>>



[infinispan-dev] New blog post

2016-12-08 Thread Adrian Nistor

Hi all,

I've just published a new blog post that briefly introduces Ickle, the query 
language of Infinispan [1]. This will be followed soon by another one on 
defining domain model schemas, configuring model indexing and analysis.

Cheers,
Adrian

[1] http://blog.infinispan.org/2016/12/meet-ickle.html
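
For context, an Ickle query is a small JP-QL-like string extended with a full-text operator; the fragment below is illustrative only (the entity and field names are invented), see the blog post for real examples:

```
FROM sample.Book b WHERE b.price < 20 AND b.title : 'tutorial'
```

The ':' operator is the full-text "matches" predicate; the rest is the familiar JP-QL subset the existing query DSL already covered.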


Re: [infinispan-dev] Names, names, names...

2016-11-10 Thread Adrian Nistor
Alan, thanks for your well thought out suggestion, but for simplicity's 
sake I'll go with Ickle.
That involves far less explaining, just in case anyone asks me what the 
name means :) Occam wins again!


Wait a second.. Occam could have been a pretty good option too ...

Cheers!

On 11/03/2016 06:09 PM, Alan Kash wrote:

Hi All,

I joined this mailing list few weeks back. I am not sure about the new 
query language work, but I would like to suggest one more name option 
- *Cantor* - in the spirit of Georg Cantor 
(https://en.wikipedia.org/wiki/Georg_Cantor), who put forward the idea 
of the infinite.


Thanks,
Alan





On Thu, Nov 3, 2016 at 6:49 AM, Adrian Nistor <anis...@redhat.com 
<mailto:anis...@redhat.com>> wrote:


Hi all,

So far we have: IQL, Ickle, QuIL, YAQL, LQID, INAQL, INQL, WTF,
NSFW, KMFDM.

I do not like acronyms that are not straightforward. They should
really be the initials of the simplest description of what the
thing is. In this case the acceptable one would be: Infinispan
Query Language => IQL. Unfortunately this seems to mean different
things to different people. So let's forget about acronyms and go
with ... *Ickle* [1][2][3], proposed by Tristan.

A very respectable bearded fellow once said: "There are only two
hard things in Computer Science: cache invalidation and naming
things." So let's use this as an excuse and dare to (maybe) settle
on a less than perfect name :). At least it is a damn cute one.

Thanks Tristan for the name!
Cheers!


[1] http://www.merriam-webster.com/dictionary/ickle
<http://www.merriam-webster.com/dictionary/ickle>
[2] http://www.dictionary.com/browse/ickle
<http://www.dictionary.com/browse/ickle>
[3] http://www.urbandictionary.com/define.php?term=ickle
<http://www.urbandictionary.com/define.php?term=ickle>



On 10/28/2016 06:49 PM, Emmanuel Bernard wrote:

I like Ickle and LQID personally. And Adrian is way too reasonable on a
name thread ;)

On Mon 16-10-17  9:07, Tristan Tarrant wrote:

Hi all,

something trivial and fun for a Monday morning.

I've just issued a PR [1] to update the codename for Infinispan 9.0.

And while we're at it, let's give a name to the new query language that
Adrian (and Emmanuel) have designed. We already have a number of
suggestions (which I summarize below) but please feel free to add your
own. Please vote.

IQL (Infinispan Query Language, already used by others).
Ickle (Alternate pronunciation of above, also means "small")
LQID (Language for Querying Infinispan Datagrids)
QuIL ("Query Infinispan" Language)

Tristan

[1]https://github.com/infinispan/infinispan/pull/4617
<https://github.com/infinispan/infinispan/pull/4617>
-- 
Tristan Tarrant

Infinispan Lead
JBoss, a division of Red Hat








Re: [infinispan-dev] Names, names, names...

2016-11-03 Thread Adrian Nistor

Hi all,

So far we have: IQL, Ickle, QuIL, YAQL, LQID, INAQL, INQL, WTF, NSFW, 
KMFDM.


I do not like acronyms that are not straightforward. They should really 
be the initials of the simplest description of what the thing is. In 
this case the acceptable one would be: Infinispan Query Language => 
IQL. Unfortunately this seems to mean different things to different 
people. So let's forget about acronyms and go with ... *Ickle* 
[1][2][3], proposed by Tristan.


A very respectable bearded fellow once said: "There are only two hard 
things in Computer Science: cache invalidation and naming things." So 
let's use this as an excuse and dare to (maybe) settle on a less than 
perfect name :). At least it is a damn cute one.


Thanks Tristan for the name!
Cheers!


[1] http://www.merriam-webster.com/dictionary/ickle
[2] http://www.dictionary.com/browse/ickle
[3] http://www.urbandictionary.com/define.php?term=ickle


On 10/28/2016 06:49 PM, Emmanuel Bernard wrote:

I like Ickle and LQID personally. And Adrian is way too reasonable on a
name thread ;)

On Mon 16-10-17  9:07, Tristan Tarrant wrote:

Hi all,

something trivial and fun for a Monday morning.

I've just issued a PR [1] to update the codename for Infinispan 9.0.

And while we're at it, let's give a name to the new query language that
Adrian (and Emmanuel) have designed. We already have a number of
suggestions (which I summarize below) but please feel free to add your
own. Please vote.

IQL (Infinispan Query Language, already used by others).
Ickle (Alternate pronunciation of above, also means "small")
LQID (Language for Querying Infinispan Datagrids)
QuIL ("Query Infinispan" Language)

Tristan

[1] https://github.com/infinispan/infinispan/pull/4617
--
Tristan Tarrant
Infinispan Lead
JBoss, a division of Red Hat





Re: [infinispan-dev] Names, names, names...

2016-11-03 Thread Adrian Nistor
-1. yaql is already taken (multiple times) :)

On 10/28/2016 07:18 PM, Thomas Qvarnström Privat wrote:
> How about YAQL, Yet Another Query Language?
>
>
>> On 28 Oct 2016, at 17:49, Emmanuel Bernard  wrote:
>>
>> I like Ickle and LQID personally. And Adrian is way too reasonable on a
>> name thread ;)
>>
>> On Mon 16-10-17  9:07, Tristan Tarrant wrote:
>>> Hi all,
>>>
>>> something trivial and fun for a Monday morning.
>>>
>>> I've just issued a PR [1] to update the codename for Infinispan 9.0.
>>>
>>> And while we're at it, let's give a name to the new query language that
>>> Adrian (and Emmanuel) have designed. We already have a number of
>>> suggestions (which I summarize below) but please feel free to add your
>>> own. Please vote.
>>>
>>> IQL (Infinispan Query Language, already used by others).
>>> Ickle (Alternate pronunciation of above, also means "small")
>>> LQID (Language for Querying Infinispan Datagrids)
>>> QuIL ("Query Infinispan" Language)
>>>
>>> Tristan
>>>
>>> [1] https://github.com/infinispan/infinispan/pull/4617
>>> -- 
>>> Tristan Tarrant
>>> Infinispan Lead
>>> JBoss, a division of Red Hat




Re: [infinispan-dev] Names, names, names...

2016-10-17 Thread Adrian Nistor
Is IQL a reasonable name? I'd go for this one.

On 10/17/2016 06:48 PM, Sanne Grinovero wrote:
> On 17 October 2016 at 17:38, Valerio Schiavoni
>  wrote:
>> On Mon, Oct 17, 2016 at 5:30 PM, ugol  wrote:
>>> inql sounds weird in Italian :)
>>>
>>>
> I like that last one, but I don't think the article should be part of
> the acronym, so I'd go with INQL :)
 +1 even better indeed
>>
>> -1 for INQL, as Ugo said, the italian crowd won't be pleased ... ;-)
> HaHa! Thanks Valerio and Ugo; I should have known better too since
> Italian is my first language but for some reason when in this context
> I'm thinking in English.
>
> Ok so I'd suggest to discard INQL and I revert to my previous
> suggestion "INAQL" however as mentioned I also like IQL and LQID.
> Tristan, sorry for the mess :) I'd say we're all giving (hopefully)
> useful input, but Adrian should get to decide how to call his baby.
>
> Thanks,
> Sanne




Re: [infinispan-dev] Optional in listener events

2016-08-29 Thread Adrian Nistor
That is bad and unfortunate because it breaks continuous query and also 
makes CEP impossible to implement.
As I've said, the previous value is absolutely necessary for these use 
cases. If the notification cannot provide it then (in theory) we have 
the alternative to make the CQ or CEP engine store the previous value. 
But that is absurd because it would duplicate the entire contents of the 
cache inside the CQ/CEP engine. So no.
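The reason CQ cannot work without the previous value fits in a few lines: whether an entry is "joining" or "leaving" the result set can only be decided by evaluating the query predicate on both the old and the new value. The class and names below are an illustrative model, not the real CQ engine:

```java
import java.util.function.Predicate;

// Minimal model of continuous-query event classification. Without the
// previous value we cannot tell JOINING from UPDATED, nor emit LEAVING.
final class CqClassifier<V> {
    enum EventType { JOINING, UPDATED, LEAVING, NONE }

    private final Predicate<V> query;

    CqClassifier(Predicate<V> query) { this.query = query; }

    EventType classify(V oldValue, V newValue) {
        boolean was = oldValue != null && query.test(oldValue);
        boolean is = newValue != null && query.test(newValue);
        if (!was && is) return EventType.JOINING;
        if (was && is) return EventType.UPDATED;
        if (was && !is) return EventType.LEAVING;
        return EventType.NONE;
    }
}
```

If the event does not carry the old value, the `was` side of the computation is simply unanswerable, which is exactly why storing a copy inside the CQ/CEP engine would be the only (absurd) fallback.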

And besides the already highlighted cases where this is broken we've 
recently added new ones. Some of the functional map write operations 
also do not provide the previous value. Actually they do not notify the 
classic @org.infinispan.notifications.Listener at all, in favor of the 
new org.infinispan.commons.api.functional.Listeners.

We badly need to unify these listener interfaces and make sure we always 
provide the previous value.
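
A unified post-event could, for instance, expose the previous value explicitly, along the lines of Radim's Optional suggestion from the original thread. This is a hypothetical shape, not an existing API:

```java
import java.util.Optional;

// Hypothetical unified post-event: no pre-event, and the previous value
// is carried explicitly. Optional.empty() means "could not be determined
// reliably" (command retry, value only in a cache store, ...). Note this
// conflates "unknown" with "no previous entry" -- a real design would
// need to keep those two cases distinguishable.
final class EntryModifiedEvent<K, V> {
    private final K key;
    private final V newValue;
    private final Optional<V> previousValue;

    EntryModifiedEvent(K key, V newValue, Optional<V> previousValue) {
        this.key = key;
        this.newValue = newValue;
        this.previousValue = previousValue;
    }

    K key() { return key; }
    V newValue() { return newValue; }
    Optional<V> previousValue() { return previousValue; }
}
```

A consumer with hard requirements, like continuous query, would then have to either refuse registration against sources that may emit empty previous values, or force the source to load them.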


On 08/29/2016 02:33 PM, Radim Vansa wrote:
> On 08/24/2016 04:32 PM, Dan Berindei wrote:
>> On Wed, Aug 24, 2016 at 4:14 PM, Adrian Nistor <anis...@redhat.com> wrote:
>>> I remember there was some discussion to refactor listeners to drop the
>>> concept of pre-events and have the previous value available in the
>>> post-event (where applicable). I do not remember what decision was made
>>> about this. But if we do it, that would be already a big backwards
>>> incompatible change, so modifying some defaults regarding the behavior of
>>> the Listener annotation is just mildly disturbing. We'll have to
>>> apologize/document this anyway and also provide migration advice.
>>>
>> That discussion died out because we couldn't decide whether we really
>> need to maintain the current listeners or we could switch to
>> supporting only JCache and FunctionalMap listener APIs.
>>
>> FWIW I'm all for modernizing the core listeners, removing the pre
>> events, and allowing listeners to receive the previous value in the
>> post event. I'd also change the API to be interface-based instead of
>> annotation-based.
>>
>>> On 08/22/2016 09:15 PM, William Burns wrote:
>>>
>>> I like the idea of having a variable to set on the listener annotation.
>>> This way we can know for sure if we need to force previous values for some
>>> listeners and not for others.
>>>
>>> It seems the default should be to force the previous value to be more inline
>>> with the current behavior, but I fear no one will use the opposite in this
>>> case though.  What do you guys think?
>> Actually, the current behaviour is *not* to force the previous value.
>> If you have the entry in the data container, yes, you'll see it in the
>> listener, but if the entry is in a store, you won't. Clustered
>> listeners do get the previous value even if it's remote, but not if
>> the entry is passivated.
> So prev values are not working as doc states (silently returning wrong
> values) in case that:
> - topology changes (command retry)
> - persistence is used
>
> These are quite non-obvious "if"s for users :-/ I'd call listeners a
> non-reliable feature.
>
>>> On Mon, Aug 22, 2016 at 4:31 AM Adrian Nistor <anis...@redhat.com> wrote:
>>>> Hi Radim,
>>>>
>>>> Continuous query is built on top of these listeners. CQ _always_ needs
>>>> the previous value and it is very convenient in this case that the
>>>> command is forced to load the previous value. I imagine there may be
>>>> other use cases where we cannot live without the prev value.
>> Unfortunately, if a command is retried in a non-tx cache
>> (event.isCommandRetried() == true), the listener may receive the new
>> value as the previous value. So CQ needs to support this case, or
>> we'll have to finally fix it in core. I'd mention
>> versioning/tombstones, but I fear Sanne is going to read this and
>> derail the thread ;)
>>
>>>> I think the listener should be able to state if it needs the prev value
>>>> at registration time. Maybe add a new attribute in the Listener
>>>> annotation? Similar to how we handled Observation.
>>>>
>> Actually, with the current API, the only way to get the previous value
>> is with the pre event, so we could interpret @Listener(observation =
>> POST) as a sign that the listener doesn't need the previous value.
> But pre events are also unreliable as they just notify that something
> may or may not happen in the future.
>
>>>> Adrian
>>>>
>>>> On 08/19/2016 11:34 PM, Radim Vansa wrote:
>>>>> Hi,
>>>>>
>>>>> as I am trying to simplify current entry wrapping and distribution code,

Re: [infinispan-dev] Optional in listener events

2016-08-24 Thread Adrian Nistor
I remember there was some discussion to refactor listeners to drop the 
concept of pre-events and have the previous value available in the 
post-event (where applicable). I do not remember what decision was made 
about this. But if we do it, that would be already a big backwards 
incompatible change, so modifying some defaults regarding the behavior 
of the Listener annotation is just mildly disturbing. We'll have to 
apologize/document this anyway and also provide migration advice.


On 08/22/2016 09:15 PM, William Burns wrote:
I like the idea of having a variable to set on the listener 
annotation.  This way we can know for sure if we need to force 
previous values for some listeners and not for others.


It seems the default should be to force the previous value to be more 
inline with the current behavior, but I fear no one will use the 
opposite in this case though.  What do you guys think?


On Mon, Aug 22, 2016 at 4:31 AM Adrian Nistor <anis...@redhat.com 
<mailto:anis...@redhat.com>> wrote:


Hi Radim,

Continuous query is built on top of these listeners. CQ _always_ needs
the previous value and it is very convenient in this case that the
command is forced to load the previous value. I imagine there may be
other use cases where we cannot live without the prev value.

I think the listener should be able to state if it needs the prev
value
at registration time. Maybe add a new attribute in the Listener
annotation? Similar to how we handled Observation.

Adrian

On 08/19/2016 11:34 PM, Radim Vansa wrote:
> Hi,
>
> as I am trying to simplify current entry wrapping and
distribution code,
> I often find that listeners can get wrong previous value in the
event,
> and it sometimes forces the command to load the value even if it
is not
> needed for the command.
>
> I am wondering if we should change the previous value in events to
> Optional - we can usually at least detect that we cannot provide a
> reliable value (e.g. after retry due to topology change, or
because the
> command did not bother to load the previous value from cache
loader)
> and return empty Optional.
>
> WDYT?
>
> Radim
>








Re: [infinispan-dev] Optional in listener events

2016-08-24 Thread Adrian Nistor
No. CQ absolutely needs the previous value.


On 08/24/2016 12:54 PM, Pedro Ruivo wrote:
> If we can drop the pre-events, it would be possible to skip the
> loading/wrapping of the previous value. The method DataContainer.put()
> could return the previous value/metadata that could be used to trigger
> the post-event.
>
> However, it has a problem that I haven't solved it yet: if the value is
> not in memory it will not be loaded (cache-stores / rebalance in progress).
>
> Would it work for CQ?
>
> Cheers,
> Pedro
>
> On 22-08-2016 19:15, William Burns wrote:
>> I like the idea of having a variable to set on the listener annotation.
>> This way we can know for sure if we need to force previous values for
>> some listeners and not for others.
>>
>> It seems the default should be to force the previous value to be more
>> inline with the current behavior, but I fear no one will use the
>> opposite in this case though.  What do you guys think?
>>
>> On Mon, Aug 22, 2016 at 4:31 AM Adrian Nistor <anis...@redhat.com
>> <mailto:anis...@redhat.com>> wrote:
>>
>>  Hi Radim,
>>
>>  Continuous query is built on top of these listeners. CQ _always_ needs
>>  the previous value and it is very convenient in this case that the
>>  command is forced to load the previous value. I imagine there may be
>>  other use cases where we cannot live without the prev value.
>>
>>  I think the listener should be able to state if it needs the prev value
>>  at registration time. Maybe add a new attribute in the Listener
>>  annotation? Similar to how we handled Observation.
>>
>>  Adrian
>>
>>  On 08/19/2016 11:34 PM, Radim Vansa wrote:
>>  > Hi,
>>  >
>>  > as I am trying to simplify current entry wrapping and distribution
>>  code,
>>  > I often find that listeners can get wrong previous value in the event,
>>  > and it sometimes forces the command to load the value even if it
>>  is not
>>  > needed for the command.
>>  >
>>  > I am wondering if we should change the previous value in events to
>>  > Optional - we can usually at least detect that we cannot provide a
>>  > reliable value (e.g. after retry due to topology change, or
>>  because the
>>  > command did not bother to load the previous value from cache loader)
>>  > and return empty Optional.
>>  >
>>  > WDYT?
>>  >
>>  > Radim
>>  >
>>




Re: [infinispan-dev] Optional in listener events

2016-08-22 Thread Adrian Nistor
Hi Radim,

Continuous query is built on top of these listeners. CQ _always_ needs 
the previous value and it is very convenient in this case that the 
command is forced to load the previous value. I imagine there may be 
other use cases where we cannot live without the prev value.

I think the listener should be able to state if it needs the prev value 
at registration time. Maybe add a new attribute in the Listener 
annotation? Similar to how we handled Observation.

Adrian
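
The attribute suggested above might look roughly like this. The annotation body and attribute name are hypothetical (the real @Listener annotation lives in org.infinispan.notifications and has different members):

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// Hypothetical extension of the @Listener annotation: the listener declares
// at registration time whether events must carry the previous value, so a
// command only pays the cost of loading it when somebody actually needs it.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.TYPE)
@interface Listener {
    boolean includePreviousValue() default true; // default keeps today's behaviour
}

// A listener that only cares about new values opts out, avoiding the
// forced load from a cache store.
@Listener(includePreviousValue = false)
class NewValueOnlyListener {
}
```

This mirrors how the observation attribute was handled: the registration-time metadata tells the core which guarantees each listener actually needs.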

On 08/19/2016 11:34 PM, Radim Vansa wrote:
> Hi,
>
> as I am trying to simplify current entry wrapping and distribution code,
> I often find that listeners can get wrong previous value in the event,
> and it sometimes forces the command to load the value even if it is not
> needed for the command.
>
> I am wondering if we should change the previous value in events to
> Optional - we can usually at least detect that we cannot provide a
> reliable value (e.g. after retry due to topology change, or because the
> command did not bother to load the previous value from cache loader)
> and return empty Optional.
>
> WDYT?
>
> Radim
>



Re: [infinispan-dev] Infinispan and change data capture

2016-07-11 Thread Adrian Nistor

Hi Randall,

Infinispan supports both push and pull access models. The push model is 
supported by events (and listeners), which are cluster wide and are 
available in both library and remote mode (hotrod). The notification 
system is pretty advanced as there is a filtering mechanism available 
that can use a hand-coded filter/converter or one specified in JPQL 
(experimental at the moment). Getting a snapshot of the initial data is also 
possible. But infinispan does not produce a transaction log to be used 
for determining all changes that happened since a previous connection 
time, so you'll always have to get a new full snapshot when re-connecting.


So if Infinispan is the data store I would base the Debezium connector 
implementation on Infinispan's event notification system. Not sure about 
the other use case though.


Adrian

On 07/09/2016 04:38 PM, Randall Hauch wrote:
The Debezium project [1] is working on building change data capture 
connectors for a variety of databases. MySQL is available now, MongoDB 
will be soon, and PostgreSQL and Oracle are next on our roadmap.


One way in which Debezium and Infinispan can be used together is when 
Infinispan is being used as a cache for data stored in a database. In 
this case, Debezium can capture the changes to the database and 
produce a stream of events; a separate process can consume these 
change and evict entries from an Infinispan cache.


If Infinispan is to be used as a data store, then it would be useful 
for Debezium to be able to capture those changes so other 
apps/services can consume the changes. First of all, does this make 
sense? Secondly, if it does, then Debezium would need an Infinispan 
connector, and it’s not clear to me how that connector might capture 
the changes from Infinispan.


Debezium typically monitors the log of transactions/changes that are 
committed to a database. Of course how this works varies for each type 
of database. For example, MySQL internally produces a transaction log 
that contains information about every committed row change, and MySQL 
ensures that every committed change is included and that non-committed 
changes are excluded. The MySQL mechanism is actually part of the 
replication mechanism, so slaves update their internal state by 
reading the master’s log. The Debezium MySQL connector [2] simply 
reads the same log.


Infinispan has several mechanisms that may be useful:

  * Interceptors - See [3]. This seems pretty straightforward and IIUC
provides access to all internal operations. However, it’s not
clear to me whether a single interceptor will see all the changes
in a cluster (perhaps in local and replicated modes) or only those
changes that happen on that particular node (in distributed mode).
It’s also not clear whether this interceptor is called within the
context of the cache’s transaction, so if a failure happens just
at the wrong time whether a change might be made to the cache but
is not seen by the interceptor (or vice versa).
  * Cross-site replication - See [4][5]. A potential advantage of this
mechanism appears to be that it is defined (more) globally, and it
appears to function if the remote backup comes back online after
being offline for a period of time.
  * State transfer - is it possible to participate as a non-active
member of the cluster, and to effectively read all state transfer
activities that occur within the cluster?
  * Cache store - tie into the cache store mechanism, perhaps by
wrapping an existing cache store and sitting between the cache and
the cache store
  * Monitor the cache store - don’t monitor Infinispan at all, and
instead monitor the store in which Infinispan is storing entries.
(This is probably the least attractive, since some stores can’t be
monitored, or because the store is persisting an opaque binary value.)
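
Of the mechanisms listed, the cache-store wrapping option is easy to picture as a decorator. The `SimpleStore` interface below is a stand-in for illustration; Infinispan's real persistence SPI is richer:

```java
import java.util.ArrayList;
import java.util.List;

// Stand-in for the store SPI; Infinispan's real interface is richer.
interface SimpleStore<K, V> {
    void write(K key, V value);
    void delete(K key);
}

// Decorator that forwards to the real store and records every change,
// which a CDC connector could then read as an ordered change stream.
final class CapturingStore<K, V> implements SimpleStore<K, V> {
    private final SimpleStore<K, V> delegate;
    private final List<String> changeLog = new ArrayList<>();

    CapturingStore(SimpleStore<K, V> delegate) { this.delegate = delegate; }

    @Override public void write(K key, V value) {
        delegate.write(key, value);
        changeLog.add("WRITE " + key);
    }

    @Override public void delete(K key) {
        delegate.delete(key);
        changeLog.add("DELETE " + key);
    }

    List<String> changeLog() { return changeLog; }
}
```

The caveats raised elsewhere in this thread still apply: such a wrapper only observes writes that pass through that store on that node, and it gives no cluster-wide ordering or transactional guarantee on its own.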


Are there other mechanisms that might be used?

There are a couple of important requirements for change data capture 
to be able to work correctly:


 1. Upon initial connection, the CDC connector must be able to obtain
a snapshot of all existing data, followed by seeing all changes to
data that may have occurred since the snapshot was started. If the
connector is stopped/fails, upon restart it needs to be able to
reconnect and either see all changes that occurred since it last
was capturing changes, or perform a snapshot. (Performing a
snapshot upon restart is very inefficient and undesirable.) This
works as follows: the CDC connector only records the “offset” in
the source’s sequence of events; what this “offset” entails
depends on the source. Upon restart, the connector can use this
offset information to coordinate with the source where it wants to
start reading. (In MySQL and PostgreSQL, every event includes the
filename of the log and position in that file. MongoDB includes in
each event the monotonically increasing 

Re: [infinispan-dev] Removal of auto-detection of indexed entities

2016-03-11 Thread Adrian Nistor
BTW, these changes were added a bit late in the Infinispan 8.2 
development so I had to keep them at the minimum, otherwise I would have 
added lazy search factory initialization already.
But since we already have some strange random mystery deadlocks at 
server start (which do not seem related, until I'm proven wrong :)) I 
was reluctant to do it.

On 03/11/2016 04:02 PM, Adrian Nistor wrote:
> The laziness could be tried first in Infinispan, then if it does not
> work we might need some help from hibernate-search. That might save
> hibernate-search from adding another hack for infinispan.
>
> Thanks!
>
> On 03/11/2016 03:13 PM, Sanne Grinovero wrote:
>> On 11 March 2016 at 13:00, Adrian Nistor <anis...@redhat.com> wrote:
>>> Hi Sanne,
>>>
>>> yes, but there is a small unsolved issue ATM and the implementation
>>> needs more polishing before we get to Ispn 9.0; infinispan and probably
>>> search too will need a bit more tweaking.
>>>
>>> Current approach is that the search factory is first created with 0
>>> classes and then after initialization of the cache and the index manager
>>> has completed we reconfigure the search factory again, exactly once, and
>>> all known indexed classes are added at once. We cannot add the indexed
>>> classes at the initial creation time because that would cause infinispan
>>> directory to try to create the needed caches (for locking, metadata,...)
>>> while the user's cache has not even finished initializing. This leads
>>> to deadlock in our component registry. This approach is far more
>>> efficient and less error-prone than the previous one but still not ideal.
>> Sounds like this would be solved by initializing the SearchIntegrator
>> lazily (only once) ?
>>
>> Let us know what the Search team could do, it would be very nice for
>> us to be able to
>> drop this requirement as it makes any other change we're working on more 
>> complex
>> to implement.
>>
>> Thanks!
>> Sanne
>>
>>> Adrian
>>>
>>> On 03/11/2016 02:07 PM, Sanne Grinovero wrote:
>>>> When booting Infinispan Server v. 8.2.0.Final I see this message being 
>>>> logged:
>>>>
>>>> ISPN000403: No indexable classes were defined for this indexed cache;
>>>> switching to autodetection (support for autodetection will be removed
>>>> in Infinispan 9.0).
>>>>
>>>> Does it mean we can finally remove the infamous, super-complex to
>>>> maintain and generally hated "feature" in Hibernate Search to
>>>> dynamically add new entities on the fly?
>>>> ___
>>>> infinispan-dev mailing list
>>>> infinispan-dev@lists.jboss.org
>>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev
>>> ___
>>> infinispan-dev mailing list
>>> infinispan-dev@lists.jboss.org
>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev
>> ___
>> infinispan-dev mailing list
>> infinispan-dev@lists.jboss.org
>> https://lists.jboss.org/mailman/listinfo/infinispan-dev
> ___
> infinispan-dev mailing list
> infinispan-dev@lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/infinispan-dev

___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev


Re: [infinispan-dev] Removal of auto-detection of indexed entities

2016-03-11 Thread Adrian Nistor
The laziness could be tried first in Infinispan, then if it does not 
work we might need some help from hibernate-search. That might save 
hibernate-search from adding another hack for infinispan.

Thanks!

On 03/11/2016 03:13 PM, Sanne Grinovero wrote:
> On 11 March 2016 at 13:00, Adrian Nistor <anis...@redhat.com> wrote:
>> Hi Sanne,
>>
>> yes, but there is a small unsolved issue ATM and the implementation
>> needs more polishing before we get to Ispn 9.0; infinispan and probably
>> search too will need a bit more tweaking.
>>
>> Current approach is that the search factory is first created with 0
>> classes and then after initialization of the cache and the index manager
>> has completed we reconfigure the search factory again, exactly once, and
>> all known indexed classes are added at once. We cannot add the indexed
>> classes at the initial creation time because that would cause infinispan
>> directory to try to create the needed caches (for locking, metadata,...)
>> while the user's cache has not even finished initializing. This leads
>> to deadlocks in our component registry. This approach is far more
>> efficient and less error-prone than the previous one, but still not ideal.
> Sounds like this would be solved by initializing the SearchIntegrator
> lazily (only once) ?
>
> Let us know what the Search team could do, it would be very nice for
> us to be able to
> drop this requirement as it makes any other change we're working on more 
> complex
> to implement.
>
> Thanks!
> Sanne
>
>> Adrian
>>
>> On 03/11/2016 02:07 PM, Sanne Grinovero wrote:
>>> When booting Infinispan Server v. 8.2.0.Final I see this message being 
>>> logged:
>>>
>>> ISPN000403: No indexable classes were defined for this indexed cache;
>>> switching to autodetection (support for autodetection will be removed
>>> in Infinispan 9.0).
>>>
>>> Does it mean we can finally remove the infamous, super-complex to
>>> maintain and generally hated "feature" in Hibernate Search to
>>> dynamically add new entities on the fly?
>>> ___
>>> infinispan-dev mailing list
>>> infinispan-dev@lists.jboss.org
>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev
>> ___
>> infinispan-dev mailing list
>> infinispan-dev@lists.jboss.org
>> https://lists.jboss.org/mailman/listinfo/infinispan-dev
> ___
> infinispan-dev mailing list
> infinispan-dev@lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/infinispan-dev

___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev


Re: [infinispan-dev] Removal of auto-detection of indexed entities

2016-03-11 Thread Adrian Nistor
Hi Sanne,

yes, but there is a small unsolved issue ATM and the implementation 
needs more polishing before we get to Ispn 9.0; infinispan and probably 
search too will need a bit more tweaking.

Current approach is that the search factory is first created with 0 
classes and then after initialization of the cache and the index manager 
has completed we reconfigure the search factory again, exactly once, and 
all known indexed classes are added at once. We cannot add the indexed 
classes at the initial creation time because that would cause infinispan 
directory to try to create the needed caches (for locking, metadata,...) 
while the user's cache has not even finished initializing. This leads 
to deadlocks in our component registry. This approach is far more 
efficient and less error-prone than the previous one, but still not ideal.
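The lazy, once-only initialization Sanne suggests in his reply could be sketched as a memoized supplier. This is an illustration of the pattern only; `Lazy` and `LazyDemo` are made-up names, not Infinispan or Hibernate Search API:

```java
import java.util.function.Supplier;

// Sketch of "initialize the SearchIntegrator lazily, exactly once".
// Names are illustrative; this only shows the initialization pattern
// under discussion, not real Infinispan/Hibernate Search code.
final class Lazy<T> {
    private final Supplier<T> factory;
    private volatile T instance;

    Lazy(Supplier<T> factory) {
        this.factory = factory;
    }

    // Double-checked locking: the factory runs at most once, on the
    // first get() - i.e. only after the cache has finished starting.
    T get() {
        T result = instance;
        if (result == null) {
            synchronized (this) {
                result = instance;
                if (result == null) {
                    result = factory.get();
                    instance = result;
                }
            }
        }
        return result;
    }
}

public class LazyDemo {
    public static void main(String[] args) {
        Lazy<String> searchFactory = new Lazy<>(() -> {
            System.out.println("building search factory");
            return "searchFactory";
        });
        // The expensive build is deferred until first use, and the
        // second call reuses the cached instance.
        System.out.println(searchFactory.get());
        System.out.println(searchFactory.get());
    }
}
```

With such a holder the search factory would be built on first query rather than during cache start, sidestepping the ordering deadlock described above.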

Adrian

On 03/11/2016 02:07 PM, Sanne Grinovero wrote:
> When booting Infinispan Server v. 8.2.0.Final I see this message being logged:
>
> ISPN000403: No indexable classes were defined for this indexed cache;
> switching to autodetection (support for autodetection will be removed
> in Infinispan 9.0).
>
> Does it mean we can finally remove the infamous, super-complex to
> maintain and generally hated "feature" in Hibernate Search to
> dynamically add new entities on the fly?
> ___
> infinispan-dev mailing list
> infinispan-dev@lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/infinispan-dev

___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev


Re: [infinispan-dev] Hot Rod encoding

2016-02-03 Thread Adrian Nistor
Hi all,

Experimenting with custom marshallers is possible right now. People just 
lose query support doing so. Freedom has its cost ... (kidding)

There are a few simple things we can do right now to better support at 
least two client categories (and not lose query support) without many 
changes to the current state of the code:
1. provide json support on top of protobuf, via a canonical json <-> 
protobuf translation. We've been talking about this since ispn 7.x. 
Meanwhile protobuf3 is out and provides it out of the box, as Gustavo 
pointed out, so there is even less effort. This would be useful for both 
REST and hot rod.
2. better support for remote query for java clients using 
jbossmarshalling (in compat mode only). Would require hibernate-search 
annotated entities to be available on the server too, but would spare the user 
from touching protobuf if he's not interested in interop. The only 
reason this does not work already is because it was not among the 
initial set of requirements when designing remote query. 2.5 years later 
it is clearly a must.
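For point 1, protobuf3's canonical JSON mapping gives the json <-> protobuf translation essentially for free. As an illustration only (a hypothetical message, not an Infinispan schema):

```proto
// Hypothetical schema, purely to illustrate protobuf3's built-in
// canonical JSON mapping; not an Infinispan schema.
message Book {
  string title = 1;
  int32 year = 2;
}

// Canonical JSON form of a Book instance:
//   { "title": "Infinispan in Action", "year": 2015 }
```

The same mapping works in both directions, which is what makes it usable by both REST and Hot Rod as mentioned above.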

Infinispan 9 is not too far off, so I propose to implement 1 and 2 as a 
first step, then see what is the right time frame for the more involved 
step of decoupling the query module's source of metadata from the protobuf 
schema/marshaller. I think this refactoring deserves to be done even for 
the sake of elegance and maintainability, if not for pluggability.

Re Protostream, it is just plain Protobuf, folks! It's in no way a 
custom-modified-encoding-loosely-based-on-protobuf. The only apparent 
twist is our usage of Protobuf, which btw also follows Google's 
recommendations (see [1]). All keys and values are wrapped into a struct 
like this [2] (actually it is a union when looking closer). And for good 
reason. Adding that metadata as a separate header at hot rod protocol 
level would be another possibility, but really, having it stored in 
cache with the key/value is much more appropriate. This type metadata 
(the type name or type id) is needed for indexing too. And could also be 
used by some smart cache stores to make some sense out of that blob and 
map it to database columns or suchlike. And would also be needed by non 
HR clients, like REST. I can go on .. :)  The only downside I see is 
space. Our caches are not homogeneous, unfortunately [let's not debate 
now why], so the type info needs to be stored with each individual key 
and value instead of being stored once per cache.
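The wrapping described here can be pictured roughly as follows. This is a simplified sketch of the idea behind [2], not the actual message-wrapping.proto; field names and numbers are illustrative:

```proto
// Simplified sketch of the key/value wrapper idea from [2]. The real
// schema differs; this only shows the "payload plus type metadata"
// shape being discussed.
message WrappedValue {
  oneof payload {
    string wrapped_string = 1;
    int64 wrapped_int64 = 2;
    double wrapped_double = 3;
    bytes wrapped_message_bytes = 4; // serialized protobuf message
  }
  // Type name or numeric type id, needed for indexing and usable by
  // non-Hot Rod clients such as REST.
  oneof type_info {
    string type_name = 5;
    int32 type_id = 6;
  }
}
```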

Adrian

[1] 
https://developers.google.com/protocol-buffers/docs/techniques#self-description
[2] 
https://raw.githubusercontent.com/anistor/protostream/master/core/src/main/resources/org/infinispan/protostream/message-wrapping.proto

On 02/03/2016 01:34 PM, Sanne Grinovero wrote:
> It sounds like a good idea, almost like a natural evolution, but to
> play devil's advocate I'll try to find some drawbacks for such a
> decision.
>
> One negative argument is overall complexity: there are many points in
> code in which one needs to consider that the encoding might be
> "something else". This isn't a new problem as we already support two
> modes, but we've seen how much more difficult it makes things, and this
> is making things even a bit more complex.
>
> Another point which I don't like much is that people will have to
> reconfigure the server depending on specific needs of the clients; if
> we go this way there should be a way in the Hot Rod protocol to
> "negotiate encoding", so to avoid such configuration tasks for end
> users (at least when there's no need to actually deploy dependencies
> on the server to handle some custom encoder..).
>
> The other problem I see is that this severely complicates
> interoperability between different clients. Suppose two applications
> are being developed to use the Infinispan Server and decide to use two
> different encoders, I suspect that at some point they'll regret it
> when needing one to access some data from the other... not suggesting
> that this should be a good practice, but still it would be very
> inconvenient.
>
> Finally, tooling. We'll eventually need to work on better tooling and
> supporting multiple encoders would spread efforts thinly.
>
> That said, I'm not against the idea of toying with such options and
> make other encoders as an *experimental* feature: protobuf is the most
> suited choice but forever is a long time and we should not hinder
> research on better alternatives.
>
> Thanks,
> Sanne
>
>
>
>
> On 3 February 2016 at 10:38, Gustavo Fernandes  wrote:
>>
>> On Mon, Jan 25, 2016 at 2:03 PM, Galder Zamarreño  wrote:
>>> Hi all,
>>>
>>> As I write the Javascript client for Hot Rod, and Vittorio writes the C++
>>> client, the question of how to encode the byte arrays has popped up.
>>>
>>> The reason why encoding matters is mainly because of compatibility mode.
>>> How does a Hot Rod client know how it should transform something a REST
>>> client set?
>>>
>>> To be able to answer this 

Re: [infinispan-dev] Weekly IRC meeting logs 2016-01-25

2016-01-26 Thread Adrian Nistor

Hi All,

Here's my update. Last week I worked on query related docs; first PR is 
here [1]. This one is just about aggregations but there is more coming 
on continuous query and DSL-based event filters. I felt blogging about 
it would be inappropriate at this stage since the docs are very scarce.


Another thing I'm working on is fixing some small design errors in the 
query DSL (org.infinispan.query.dsl.*) to have better type safety and 
ensure some aspects of query correctness right from construction time, 
PR coming soon. These need to be fixed in 8.2, one of them is [2], 
please have a look.


This week I'm planning to fill all the blanks in the query user guide 
and then write a short blog about the new additions.


Cheers!

[1] https://github.com/infinispan/infinispan/pull/3954
[2] https://github.com/infinispan/infinispan/pull/3936

On 01/25/2016 05:52 PM, Tristan Tarrant wrote:

Hi all,

get the meeting logs from here:

http://transcripts.jboss.org/meeting/irc.freenode.org/infinispan/2016/infinispan.2016-01-25-15.02.log.html

Cheers

Tristan


___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev

[infinispan-dev] Infinispan 8.1.0.Final is released!

2015-12-08 Thread Adrian Nistor
Dear community,

I'm pleased to announce Infinispan 8.1.0.Final was finally released. 
Read more about it on our blog here: http://goo.gl/NSNNF5

Cheers,
Adrian
___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev


[infinispan-dev] Infinispan 8.0.0.Beta2 released

2015-07-24 Thread Adrian Nistor

Dear community

Infinispan 8.0.0.Beta2 is now available! Further details in the blog post:
http://blog.infinispan.org/2015/07/infinispan-800beta2.html

Cheers
Adrian

___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev

[infinispan-dev] Infinispan 7.2.0.Alpha1 released

2015-02-25 Thread Adrian Nistor
Dear Infinispan community,

Infinispan 7.2.0.Alpha1 is now available!

Read more at: 
http://blog.infinispan.org/2015/02/infinispan-720alpha1-released.html

___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev


Re: [infinispan-dev] Experiment: Affinity Tagging

2015-01-20 Thread Adrian Nistor
None of the existing Hash implementations can, but this new one will be 
special. It could have access to the config (and CH) of the user's cache 
so it will know the number of segments. The index cache will have to use 
the same type of CH as the data cache in order to keep ownership in sync 
and the Hash implementation will be the special delegating Hash.


There is a twist, though: the above only works with SyncConsistentHash. 
Because when two caches with identical topology use 
DefaultConsistentHash they could still not be in sync in terms of key 
ownership. Only SyncConsistentHash ensures that.


Knowledge of how CH currently maps hashcodes to segments is assumed 
already. I've spotted at least 3 places in code where it happens, so it 
is time to document it or move this responsibility to the Hash interface 
as you suggest to make it really pluggable.


Adrian

On 01/20/2015 03:32 PM, Dan Berindei wrote:
Adrian, I don't think that will work. The Hash doesn't know the number 
of segments so it can't tell where a particular key will land - even 
assuming knowledge about how the ConsistentHash will map hash codes to 
segments.


However, I'm all for replacing the current Hash interface with another 
interface that maps keys directly to segments.


Cheers
Dan


On Tue, Jan 20, 2015 at 4:08 AM, Adrian Nistor <anis...@redhat.com> wrote:


Hi Sanne,

An alternative approach would be to implement an
org.infinispan.commons.hash.Hash which delegates to the stock
implementation for all keys except those that need to be assigned to a
specific segment. It should return the desired segment for those.

Adrian


On 01/20/2015 02:48 AM, Sanne Grinovero wrote:
 Hi all,

 I'm playing with an idea for some internal components to be able to
 tag the key for an entry to be stored into Infinispan in a very
 specific segment of the CH.

 Conceptually the plan is easy to understand by looking at this
patch:



https://github.com/Sanne/infinispan/commit/45a3d9e62318d5f5f950a60b5bb174d23037335f

 Hacking the change into ReplicatedConsistentHash is quite barbaric,
 please bear with me as I couldn't figure a better way to be able to
 experiment with this. I'll probably want to extend this class, but
 then I'm not sure how to plug it in?


You would need to create your own ConsistentHashFactory, possibly 
extending ReplicatedConsistentHashFactory. You can then plug the 
factory in with


configurationBuilder.clustering().hash().consistentHashFactory(yourFactory)

However, this isn't a really good idea, because then you need a 
different implementation for distributed mode, and then another 
implementation for topology-aware clusters (with rack/machine/site 
ids). And your users would also need to select the proper factory for 
each cache.



 What would you all think of such a tagging mechanism?

 # Why I didn't use the KeyAffinityService
 - I need to use my own keys, not the meaningless stuff produced
by the service
 - the extensive usage of Random in there doesn't seem suited for a
 performance critical path


You can plug in your own KeyGenerator to generate keys, and maybe 
replace the Random with a static/thread-local counter.



 # Why I didn't use the Grouping API
 - I need to pick the specific storage segment, not just
co-locate with
 a different key



This is actually a drawback of the KeyAffinityService more than 
Grouping. With grouping, you can actually follow the 
KeyAffinityService strategy and generate random strings until you get 
one in the proper segment, and then tag all your keys with that exact 
string.
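The generate-until-it-lands strategy described above can be sketched in self-contained Java. Note that `segmentOf` below is a simplified stand-in for Infinispan's real hash-code-to-segment mapping (which applies MurmurHash3 first), and the class names are made up for the example:

```java
import java.util.Random;

// Sketch of the KeyAffinityService-style strategy described above:
// keep generating random strings until one maps to the desired
// segment, then use that string as the group for all related keys.
public class GroupForSegment {

    // Simplified stand-in for the real hash-to-segment mapping.
    static int segmentOf(String group, int numSegments) {
        return Math.floorMod(group.hashCode(), numSegments);
    }

    static String findGroupForSegment(int targetSegment, int numSegments, Random random) {
        while (true) {
            String candidate = "g" + random.nextLong();
            if (segmentOf(candidate, numSegments) == targetSegment) {
                // Tag all keys of the index with this exact string.
                return candidate;
            }
        }
    }

    public static void main(String[] args) {
        String group = findGroupForSegment(3, 256, new Random(42));
        System.out.println(group + " -> segment " + segmentOf(group, 256));
    }
}
```

With 256 segments each attempt lands in the target segment with probability 1/256, so the loop terminates quickly in practice; the cost is paid once per index, not per key.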



 The general goal is to make it possible to tag all entries of an
 index, and have an independent index for each segment of the CH. So
 the resulting effect would be, that when a primary owner for any
key K
 is making an update, and this triggers an index update, that
update is
   A) going to happen on the same node - no need to forwarding to a
 master indexing node
   B) each such writes on the index happen on the same node which is
 primary owner for all the written entries of the index.

 There are two additional nice consequences:
   - there would be no need to perform a reliable master election:
 ownership singleton is already guaranteed by Infinispan's essential
 logic, so it would reuse that
   - the propagation of writes on the index from the primary owner
 (which is the local node by definition) to backup owners could use
 REPL_ASYNC for most practical use cases.

 So net result is that the overhead for indexing is reduced to 0
(ZERO)
 blocking RPCs if the async repl is acceptable, or to only one
blocking
 roundtrip if very strict consistency is required.


Sounds very interesting, but I think there may be a problem with your

Re: [infinispan-dev] Experiment: Affinity Tagging

2015-01-19 Thread Adrian Nistor
Hi Sanne,

An alternative approach would be to implement an 
org.infinispan.commons.hash.Hash which delegates to the stock 
implementation for all keys except those that need to be assigned to a 
specific segment. It should return the desired segment for those.
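A minimal sketch of this delegating Hash follows. The `Hash` interface is mirrored locally (reduced to one method) so the example compiles standalone; the real interface is `org.infinispan.commons.hash.Hash`, and the forced hash codes are assumed to be precomputed so that the ConsistentHash maps them to the desired segment:

```java
import java.util.Map;

// Local stand-in for org.infinispan.commons.hash.Hash, reduced to the
// one method this sketch needs.
interface Hash {
    int hash(Object key);
}

// Delegates to the stock Hash for ordinary keys, and returns a
// precomputed hash code for specially tagged keys. The forced codes
// are assumed to have been chosen so the ConsistentHash maps them to
// the segment we want.
final class DelegatingAffinityHash implements Hash {
    private final Hash delegate;
    private final Map<Object, Integer> forcedHashCodes;

    DelegatingAffinityHash(Hash delegate, Map<Object, Integer> forcedHashCodes) {
        this.delegate = delegate;
        this.forcedHashCodes = forcedHashCodes;
    }

    @Override
    public int hash(Object key) {
        Integer forced = forcedHashCodes.get(key);
        return forced != null ? forced : delegate.hash(key);
    }
}

public class AffinityHashDemo {
    public static void main(String[] args) {
        Hash stock = Object::hashCode;
        Hash tagged = new DelegatingAffinityHash(stock, Map.of("index-root", 42));
        System.out.println(tagged.hash("index-root")); // the forced code, 42
        System.out.println(tagged.hash("ordinary") == "ordinary".hashCode()); // true
    }
}
```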

Adrian


On 01/20/2015 02:48 AM, Sanne Grinovero wrote:
 Hi all,

 I'm playing with an idea for some internal components to be able to
 tag the key for an entry to be stored into Infinispan in a very
 specific segment of the CH.

 Conceptually the plan is easy to understand by looking at this patch:

 https://github.com/Sanne/infinispan/commit/45a3d9e62318d5f5f950a60b5bb174d23037335f

 Hacking the change into ReplicatedConsistentHash is quite barbaric,
 please bear with me as I couldn't figure a better way to be able to
 experiment with this. I'll probably want to extend this class, but
 then I'm not sure how to plug it in?

 What would you all think of such a tagging mechanism?

 # Why I didn't use the KeyAffinityService
 - I need to use my own keys, not the meaningless stuff produced by the service
 - the extensive usage of Random in there doesn't seem suited for a
 performance critical path

 # Why I didn't use the Grouping API
 - I need to pick the specific storage segment, not just co-locate with
 a different key


 The general goal is to make it possible to tag all entries of an
 index, and have an independent index for each segment of the CH. So
 the resulting effect would be, that when a primary owner for any key K
 is making an update, and this triggers an index update, that update is
   A) going to happen on the same node - no need to forwarding to a
 master indexing node
   B) each such writes on the index happen on the same node which is
 primary owner for all the written entries of the index.

 There are two additional nice consequences:
   - there would be no need to perform a reliable master election:
 ownership singleton is already guaranteed by Infinispan's essential
 logic, so it would reuse that
   - the propagation of writes on the index from the primary owner
 (which is the local node by definition) to backup owners could use
 REPL_ASYNC for most practical use cases.

 So net result is that the overhead for indexing is reduced to 0 (ZERO)
 blocking RPCs if the async repl is acceptable, or to only one blocking
 roundtrip if very strict consistency is required.

 Thanks,
 Sanne
 ___
 infinispan-dev mailing list
 infinispan-dev@lists.jboss.org
 https://lists.jboss.org/mailman/listinfo/infinispan-dev

___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev


Re: [infinispan-dev] Failed Hot Rod tests.. since several weeks

2015-01-09 Thread Adrian Nistor
Hi Sanne,

The failure is avoided by modifying the setup to wait until all lucene 
index related caches are started on all nodes and initial state transfer 
was performed (as in 
https://github.com/infinispan/infinispan/pull/3114/files).

But this 'fix' may indicate a problem in infinispan-lucene-directory. 
Maybe Gustavo can have a look?

Adrian


On 01/08/2015 02:35 PM, Sanne Grinovero wrote:
 Thanks!
 What about we disable the failing tests if there is no immediate solution?

 On 6 January 2015 at 19:25, Galder Zamarreño gal...@redhat.com wrote:
 According to Adrian in https://github.com/infinispan/infinispan/pull/3114, 
 WIP...

 On 06 Jan 2015, at 13:40, Sanne Grinovero sa...@infinispan.org wrote:

 Hi all,
 these tests are failing me regularly since at least November, is
 someone looking at them?
 As usual, you might have noticed I stopped sending pull requests since
 the build fails here.

 thanks,
 Sanne

 Results :

 Failed tests:
   
 MultiHotRodServerIspnDirReplQueryTestMultiHotRodServerQueryTest.testAttributeQuery:124
 expected:1 but was:0
   
 MultiHotRodServerIspnDirReplQueryTestMultiHotRodServerQueryTest.testEmbeddedAttributeQuery:137
 expected:1 but was:0
   
 MultiHotRodServerIspnDirReplQueryTestMultiHotRodServerQueryTest.testProjections:167
 expected:1 but was:0

 Tests run: 865, Failures: 3, Errors: 0, Skipped: 0

 [INFO] 
 
 [INFO] Reactor Summary:
 [INFO]
 [INFO] Infinispan BOM . SUCCESS [  
 0.091 s]
 [INFO] Infinispan Common Parent ... SUCCESS [  
 1.019 s]
 [INFO] Infinispan Checkstyle Rules  SUCCESS [  
 2.012 s]
 [INFO] Infinispan Commons . SUCCESS [  
 5.641 s]
 [INFO] Infinispan Core  SUCCESS [06:59 
 min]
 [INFO] Infinispan Extended Statistics . SUCCESS [ 
 34.332 s]
 [INFO] Parent pom for server modules .. SUCCESS [  
 0.075 s]
 [INFO] Infinispan Server - Core Components  SUCCESS [ 
 12.236 s]
 [INFO] Infinispan Query DSL API ... SUCCESS [  
 0.735 s]
 [INFO] Infinispan Object Filtering API  SUCCESS [  
 1.610 s]
 [INFO] Parent pom for cachestore modules .. SUCCESS [  
 0.123 s]
 [INFO] Infinispan JDBC CacheStore . SUCCESS [ 
 19.649 s]
 [INFO] Parent pom for the Lucene integration modules .. SUCCESS [  
 0.068 s]
 [INFO] Infinispan Lucene Directory Implementation . SUCCESS [  
 9.066 s]
 [INFO] Infinispan Query API ... SUCCESS [ 
 45.772 s]
 [INFO] Infinispan Tools ... SUCCESS [  
 1.343 s]
 [INFO] Infinispan Remote Query Client . SUCCESS [  
 0.457 s]
 [INFO] Infinispan Remote Query Server . SUCCESS [  
 7.949 s]
 [INFO] Infinispan Tree API  SUCCESS [  
 7.558 s]
 [INFO] Infinispan JPA CacheStore .. SUCCESS [ 
 16.348 s]
 [INFO] Infinispan Hot Rod Server .. SUCCESS [01:14 
 min]
 [INFO] Infinispan Hot Rod Client .. FAILURE [ 
 58.719 s]
 ___
 infinispan-dev mailing list
 infinispan-dev@lists.jboss.org
 https://lists.jboss.org/mailman/listinfo/infinispan-dev

 --
 Galder Zamarreño
 gal...@redhat.com
 twitter.com/galderz


 ___
 infinispan-dev mailing list
 infinispan-dev@lists.jboss.org
 https://lists.jboss.org/mailman/listinfo/infinispan-dev
 ___
 infinispan-dev mailing list
 infinispan-dev@lists.jboss.org
 https://lists.jboss.org/mailman/listinfo/infinispan-dev

___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev

[infinispan-dev] Infinispan 7.0.0.Beta2 is available!

2014-09-16 Thread Adrian Nistor
Dear Infinispan community,

We are proud to announce the second beta release for Infinispan 7.0.0.

More info at
http://blog.infinispan.org/2014/09/infinispan-700beta2-is-out.html

Thanks to everyone for their involvement and contributions!

___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev


Re: [infinispan-dev] Welcome to Gustavo

2014-05-15 Thread Adrian Nistor
Welcome Gustavo!

On 05/15/2014 04:29 PM, Sanne Grinovero wrote:
 Hi all,
 today we finally have Gustavo joining us as a full time engineer on 
 Infinispan.

 He worked with Tristan and myself in Italy before we came to Red Hat,
 and was already a Lucene expert back then. He then joined Red Hat as a
 consultant but that didn't last too long: he was too good and
 customers wanted him to travel an unreasonable amount.

 So he has been lost for a couple of years, but wisely spent them to
 deepen his skills in devops, more of Lucene but now in larger scale
 and distributed environments: a bit of JGroups, Infinispan and
 Hibernate Search and even some Scala, but also experience with
 MongoDB, Hadoop, Elastic Search and Solr so I'm thrilled to have this
 great blend of competences now available full time to improve the
 Search experience of Infinispan users.

 Welcome!

 He's gustavonalle on both IRC and GitHub.

 Sanne
 ___
 infinispan-dev mailing list
 infinispan-dev@lists.jboss.org
 https://lists.jboss.org/mailman/listinfo/infinispan-dev

___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev


Re: [infinispan-dev] [!] Reorganization of dependencies release process

2014-05-14 Thread Adrian Nistor
+1 for moving Infinispan lucene directory out

But why move Query components out? And which ones did you have in mind?

On 05/14/2014 12:50 AM, Sanne Grinovero wrote:
 This is a reboot of the thread previously started on both the
 infinispan-dev and the hibernate-dev mailing list as Handling of
 mutual dependency with Infinispan [1].
 We discussed further during the Hibernate fortnightly meeting [2], and
 came to the conclusion that we need Infinispan to change how some
 repositories are organised and how the release is assembled.

 # The problem

 To restate the issue, as you might painfully remember, every time
 there is a need for a Lucene update or a Search update we need to sync
 up for a complex dance of releases in both projects to accommodate for
 a small-step iterative process to handle the circular dependency.
 This problem is not too bad today because for the past year we've been releasing
 the Lucene Directory in an unusual - and very unmaintainable - temporary
 solution to be compatible with two different major versions of Apache
 Lucene; namely what Infinispan Query needs and what Hibernate Search
 needs are different modules.
 But the party is over, and I want to finally drop support for Lucene 3
 and cleanup the unusual and unmaintainable build mess targeting a
 single Lucene version only.
 As soon as we converge to building a single version however - we're
 back to the complex problem we had when we supported a single version
 which is handling of a circular dependency - just that the problem has
 worsened lately: the Lucene project has been more active and more
 inclined than it used to be to break both internal and public
 APIs.

 In short, we have a circular dependency between Hibernate Search and
 Infinispan which we've been able to handle via hacks and some luck,
 but it imposes a serious threat to development flexibility, and the
 locked-in release process is not desirable either.

 # The solution

 we think in conclusion there's a single proper way out, and it also
 happens to provide some very interesting side effects in terms of
 maintenance overhead for everyone: Infinispan Core needs to release
 independently from the non-core modules.
 This would have the Lucene Directory depend on a released tag of
 infinispan-core, and be able to be released independently.
 Minor situations with benefit:
   - we often don't make any change in the Lucene Directory, still we
 need to release it.
   - when I actually need a release of it, I'm currently begging for a
 quick release of Infinispan: very costly
 The Big Ones:
   - we can manage the Lucene Directory to provide support for different
 versions of Lucene without necessarily breaking other modules
   - we can release quickly what's needed to move Search ahead in terms
 of Lucene versions without needing to make the Infinispan Query module
 compatible at the same time (in case you haven't followed this area:
 this seems to be my main activity rather than making valuable stuff).

 The goal is of course to linearise the dependencies; it seems to also
 simplify some of our tasks which is a welcome side-effect. I expect it
 also to make the project less scary for new contributors.

 # How does it impact users

 ## Maven users
 modules will continue to be modules.. I guess nobody will notice,
 other than we might have a different versioning scheme, but we help
 people out via the Infinispan BOM.

 ## Distribution users
 There should be no difference, other than (as well) some jars might
 not be aligned in terms of version. But that's probably even less of a
 problem, as I expect distribution users to just put what they get on
 their classpath.

 # How it impacts us

 1) I'll move the Lucene Directory project to an different repository;
 same for the Query related components.
 I think you should/could consider the same for other components, based
 on ad-hoc considerations of the trade offs, but I'd expect ultimately
 to see a more frequent and core only release.

 2) We'll have different kinds of releases: the core only and the
 full releases.
 I think we'll also see components being released independently, but
 these are either Maven-only or meant for preparation of other
 components, or preparation for a full release.

 3) Tests (!)
 Such a move should in no way relax the regression-safety of
 infinispan-core: we need to still consider it unacceptable for a core
 change to break one of the modules moving out of the main tree.
 Personally I think I've pushed many tests about problems found in the
 query modules as unit tests in core, so that should be relatively
 safe, but it also happened that someone would tune these.
 I realise it's not practical to expect people to run tests of
 downstream modules, so we'll have to automate most of these tasks in
 CI.
 Careful on perception: if today there are three levels of defence
 against a regression (the author, the reviewer and CI all running the
 suite for each change), in such an organisation you have only 

Re: [infinispan-dev] Infinispan Test language level to Java 8?

2014-04-30 Thread Adrian Nistor
  Another potential problem, as rightly pointed out by Will on IRC, is 
that it would also cause issues for anyone trying to run our testsuite 
with JDK7 or earlier, if anyone is doing such a thing.

Galder, we may be doing such a thing :) The test suite is meant to 
verify correctness of our libraries when executed against a concrete set 
of external dependencies, with clearly specified supported versions or 
version intervals - the jdk being the most important of them.

Since we'll no longer be able to run on JDK 7, we can no longer support 
JDK 7. Even if animal-sniffer cheerfully reports we've not broken binary 
compat, that still does not mean much when it comes to JDK-version 
specific issues, or JDK-vendor specific issues (remember the IBM JDK 
oddities).

Maven-wise, I think it is not possible to have a different compiler 
language level for module sources vs. test sources, and Eclipse and 
IntelliJ also cannot cope with two source levels per module, so this 
would introduce some unnecessary development discomfort. I would vote 
no for this.
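
For context, such a split would be expressed through the compiler plugin's separate `testSource`/`testTarget` parameters; a sketch of the configuration Galder's proposal implies (language levels are illustrative, and the IDE limitations mentioned above would remain either way):

```xml
<!-- Sketch only: main sources stay at language level 7, test sources move to 8. -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-compiler-plugin</artifactId>
  <configuration>
    <source>1.7</source>
    <target>1.7</target>
    <!-- applies only to src/test/java -->
    <testSource>1.8</testSource>
    <testTarget>1.8</testTarget>
  </configuration>
</plugin>
```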

Adrian

On 04/30/2014 02:55 PM, Galder Zamarreño wrote:
 On 30 Apr 2014, at 13:36, Galder Zamarreño gal...@redhat.com wrote:

 Hi all,

 Just thinking out loud: what about we start using JDK8+ for all the test 
 code in Infinispan?

 The production code would still have language level 6/7 (whatever is 
 required…).

 This way we start getting ourselves familiar with JDK8 in a safe environment 
 and we reduce some of the boiler plate code currently existing in the tests.

 This would only be problematic for anyone consuming our test jars. They'd need 
 to move up to JDK8+ along with us.
 Another potential problem, as rightly pointed out by Will on IRC, is that it 
 would also cause issues for anyone trying to run our testsuite with JDK7 or 
 earlier, if anyone is doing such a thing.

 Thoughts?

 p.s. Recently I found https://leanpub.com/whatsnewinjava8/read which 
 provides a great overview on what's new in JDK8 along with small code 
 samples.
 --
 Galder Zamarreño
 gal...@redhat.com
 twitter.com/galderz


 ___
 infinispan-dev mailing list
 infinispan-dev@lists.jboss.org
 https://lists.jboss.org/mailman/listinfo/infinispan-dev

 --
 Galder Zamarreño
 gal...@redhat.com
 twitter.com/galderz




Re: [infinispan-dev] Infinispan Test language level to Java 8?

2014-04-30 Thread Adrian Nistor
I don't see those concerns as lightly as you put them.

Animal-sniffer is just a quick fail-fast check for binary compatibility 
of type hierarchy and method signatures, but that's not everything. The 
only sure way to test against a particular jdk version is to actually 
run the test suite with it. Seeing is believing. That's why we have 
separate CI jobs running against older supported jdks.  I fail to see 
how this works when our unit tests are written using jdk 8 features.

On 04/30/2014 05:31 PM, Sanne Grinovero wrote:
 Valid concerns, but I think we should split those in two very
 different categories:
   1- we provide testing utilities which are quite useful to other people too
   2- we run unit tests on our own code to prevent regressions

 If we split the utilities into a properly delivered package - built
 with Java7, having a very own Maven identity and maybe even a user
 guide - that would be even more useful to consumers. For example I use
 some of the utilities in both Hibernate Search and Hibernate OGM,
 depending on the testing classifier of infinispan-core. I'd prefer
 to depend on a proper module with a somewhat stable API, and this
 would be a great improvement for our users who start playing with
 Infinispan. I often refer to our testsuite to explain how to set up
 things.

 For the second use case - our own test execution - I see great
 advantages from using Java8. First off, to verify that the APIs we're
 developing today will make sense in a lambda enabled world: we might
 not baseline on it today but it's very hard to do forward-compatible
 thinking without actually experimenting with the API in TDD before
 this is cast in stone. Remember TDD is a design methodology, not a QA
 approach.

 But I agree with Adrian on not wanting to fully trust animal-sniffer
 with this task, nor do I like how little flexibility IDEs give us for
 mixing source levels within a single module.
 For the record, Hibernate has long been keeping the test
 infrastructure in a different module; we could explore an alternative
 code organization. While it's important to have some core tests
 closely coupled with the module they're meant to test, I don't see why we
 couldn't have additional tests in a different module?

 +1 to have at least one module using (requiring) Java8. Yes,
 contributors will need to have it around. I don't see a problem; any
 potentially good contributor should have it around by now.

 Sanne



 On 30 April 2014 13:12, Adrian Nistor anis...@redhat.com wrote:
Another potential problem, as rightly pointed out by Will on IRC, is
 that it would also cause issues for anyone trying to run our testsuite
 with JDK7 or earlier, if anyone is doing such a thing.

 Galder, we may be doing such a thing :) The test suite is meant to
 verify correctness of our libraries when executed against a concrete set
 of external dependencies, with clearly specified supported versions or
 version intervals - the jdk being the most important of them.

 Since we'll no longer be able to run on JDK 7 we can no longer support
 JDK 7. Even if animal-sniffer cheerfully reports we've not broken binary
 compat, that still does not mean much when it comes to JDK version
 specific issues, or JDK vendor specific issues (remember the IBM JDK
 oddities).

 Maven-wise, I think it is not possible to have a different compiler
 language level for module sources vs. test sources, and Eclipse and
 IntelliJ also cannot cope with two source levels per module, so this
 would introduce some unnecessary development discomfort.  I would vote
 no for this.

 Adrian

 On 04/30/2014 02:55 PM, Galder Zamarreño wrote:
 On 30 Apr 2014, at 13:36, Galder Zamarreño gal...@redhat.com wrote:

 Hi all,

 Just thinking out loud: what about we start using JDK8+ for all the test 
 code in Infinispan?

 The production code would still have language level 6/7 (whatever is 
 required…).

 This way we start getting ourselves familiar with JDK8 in a safe 
 environment and we reduce some of the boiler plate code currently existing 
 in the tests.

 This would only be problematic for anyone consuming our test jars. They'd 
 need to move up to JDK8+ along with us.
 Another potential problem, as rightly pointed out by Will on IRC, is that 
 it would also cause issues for anyone trying to run our testsuite with JDK7 
 or earlier, if anyone is doing such a thing.

 Thoughts?

 p.s. Recently I found https://leanpub.com/whatsnewinjava8/read which 
 provides a great overview on what's new in JDK8 along with small code 
 samples.
 --
 Galder Zamarreño
 gal...@redhat.com
 twitter.com/galderz


 ___
 infinispan-dev mailing list
 infinispan-dev@lists.jboss.org
 https://lists.jboss.org/mailman/listinfo/infinispan-dev
 --
 Galder Zamarreño
 gal...@redhat.com
 twitter.com/galderz



Re: [infinispan-dev] Cerealization protocols

2014-03-20 Thread Adrian Nistor
Think JSON, except binary. Or think Protocol Buffers 
http://protobuf.googlecode.com, except faster. In fact, in benchmarks, 
Cap'n Proto is INFINITY TIMES faster than Protocol Buffers. L.O.L.


On 03/20/2014 12:56 PM, Sanne Grinovero wrote:

I just heard about Cerealization. Looks tasty:

http://kentonv.github.io/capnproto/
___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev



Re: [infinispan-dev] Infinispan Query API module

2014-03-17 Thread Adrian Nistor

Hi,

hibernate-hql-parser and hibernate-hql-lucene do not have a Final 
release yet, so at this moment it is not possible to avoid the alpha 
dependency.


Cheers

On 03/17/2014 01:57 AM, Bilgin Ibryam wrote:

Hi all,

I was working on extending camel-infinispan component with remote 
query capability and just realized that 
org.infinispan/infinispan-query/6.0.1.Final depends on 
hibernate-hql-parser and hibernate-hql-lucene which are still in Alpha.


Am I missing something or is there a way to not depend on alpha versions 
of these artifacts from a final version artifact?


Thanks,

--
Bilgin Ibryam

Apache Camel & Apache OFBiz committer
Blog: ofbizian.com http://ofbizian.com
Twitter: @bibryam https://twitter.com/bibryam

Author of Instant Apache Camel Message Routing
http://www.amazon.com/dp/1783283475


___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev



Re: [infinispan-dev] Never push with --force

2014-03-12 Thread Adrian Nistor
Sanne, is it possible that you forgot to push his changes upstream when 
closing his PR?

This is what the github news feed of infinispan repo tells me:
1. Dan opened the PR #2433 yesterday (about 20 hrs ago)
2. you closed his PR after a few hours (15 hrs ago) and commented 
'Merged', but I cannot see a 'pushed to master' entry in newsfeed around 
this time
3. You pushed upstream this morning (7 hrs ago)

I would never expect --force to be used on the upstream repo. Whoever 
does it should ask permission first and have a good reason, not just 
warn the list about it.

On 03/12/2014 11:27 AM, Sanne Grinovero wrote:
 Yesterday I pushed a fix from Dan upstream, and this morning the fix
 wasn't there anymore. Some unrelated fix was merged in the meantime.

 I only realized this because I was updating my personal origin and git
 wouldn't allow me to push the non-fast-forward branch, so in a sense I
 could detect it because of how our workflow works (good).

 I have no idea of how it happened, but I guess it won't hurt to remind
 that we should never push with --force, at least not without warning
 the whole list.

 I now cherry-picked and fixed master by re-pushing the missing patch,
 so nothing bad happened :-)

 Sanne
 ___
 infinispan-dev mailing list
 infinispan-dev@lists.jboss.org
 https://lists.jboss.org/mailman/listinfo/infinispan-dev



Re: [infinispan-dev] Query.getResultSize() to be available on the simplified DSL?

2014-03-11 Thread Adrian Nistor
I think that technical trickiness is also required for implementing 
pagination, and would probably also suffer from the same 
limitations/approximations.
Should we remove pagination too from the api?

On 03/11/2014 10:47 AM, Sanne Grinovero wrote:
 On 10 March 2014 17:09, Mircea Markus mmar...@redhat.com wrote:
 On Mar 10, 2014, at 15:12, Sanne Grinovero sa...@infinispan.org wrote:

 Ok you make some good points, and I've no doubts of it being useful.

 My only concern is that this could slow us down significantly in
 providing other features which might be even more useful or pressing.
 You have to pick your battles and be wise on where to spend energy
 first.

 Considering that it's easier to add methods than to remove them, what
 would you think of marking this as experimental for now?
 I'd prefer to see the non-indexed query engine delivered first; this
 sounds like being a stone on the critical path so it might be wise to
 have the option to drop the requirement from a first implementation.
 Definitely you're right that we should then implement some COUNT
 strategy, I'm just not comfortable in committing on this one yet.
 I can imagine a lot of users emulating this by simply iterating over the 
 entries in the result set. Even if we do just that and document it as slow, 
 I think it's still worth exposing this somewhere.
 I'm not questioning it to be useful. But the implementation is tricky,
 for example simply iterating would require a global (and distributed)
 lock to be accurate.
 Otherwise we can only document it as an approximation, and worse, we
 can't even estimate an error margin: under steady load it would
 probably be a reasonable estimation, but there are corner cases in
 which you can get off by several orders of magnitude.
 Among others, your assumptions would need to include:
   - no nodes failing (no state transfers happening)
   - no write spikes (probably one of the best reasons to deploy
 infinispan is to be able to absorb spikes)
   - no wild churning across datastore/cachestores
   - expirations happening in a homogeneous pattern (this entirely
 depends on the use case)

 Also there is no way to make this work in the context of a
 transaction, as it would pretty much violate any promise of repeatable
 read properties.

 So the question is, if a user would still consider it useful after
 (hopefully) understanding all the attached strings.

 In other contexts we discussed the need for Infinispan to provide
 something like a snapshot capability based on TOA. If we had that, we
 could implement a count operation on top of it.

 Sanne
 ___
 infinispan-dev mailing list
 infinispan-dev@lists.jboss.org
 https://lists.jboss.org/mailman/listinfo/infinispan-dev



Re: [infinispan-dev] Query.getResultSize() to be available on the simplified DSL?

2014-03-11 Thread Adrian Nistor
To be more precise, by pagination I'm referring to methods 
QueryBuilder.startOffset/maxResults. Since our remote protocols are 
stateless we would need to re-execute the query to fetch the next page. 
That leads to the same problem of approximation.
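
Since each page re-runs the query against live data, writes that land between page requests can shift offsets, duplicating or skipping entries. A minimal self-contained sketch of the effect, with a plain sorted list standing in for the index (no Infinispan APIs involved):

```java
import java.util.*;
import java.util.stream.*;

public class PaginationDrift {
    // Stateless pagination: each page re-executes the "query"
    // (a sorted scan) from scratch, then applies offset/limit.
    static List<String> page(List<String> data, int startOffset, int maxResults) {
        return data.stream().sorted()
                   .skip(startOffset).limit(maxResults)
                   .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<String> data = new ArrayList<>(List.of("a", "c", "e", "g"));
        System.out.println(page(data, 0, 2)); // page 1: [a, c]
        // A concurrent write lands between the two page requests...
        data.add("b");
        // ...so offsets shift and "c" shows up again on page 2.
        System.out.println(page(data, 2, 2)); // page 2: [c, e]
    }
}
```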

On 03/11/2014 03:11 PM, Adrian Nistor wrote:
 I think that technical trickiness is also required for implementing
 pagination, and would probably also suffer from the same
 limitations/approximations.
 Should we remove pagination too from the api?

 On 03/11/2014 10:47 AM, Sanne Grinovero wrote:
 On 10 March 2014 17:09, Mircea Markus mmar...@redhat.com wrote:
 On Mar 10, 2014, at 15:12, Sanne Grinovero sa...@infinispan.org wrote:

 Ok you make some good points, and I've no doubts of it being useful.

 My only concern is that this could slow us down significantly in
 providing other features which might be even more useful or pressing.
 You have to pick your battles and be wise on where to spend energy
 first.

 Considering that it's easier to add methods than to remove them, what
 would you think of marking this as experimental for now?
 I'd prefer to see the non-indexed query engine delivered first; this
 sounds like being a stone on the critical path so it might be wise to
 have the option to drop the requirement from a first implementation.
 Definitely you're right that we should then implement some COUNT
 strategy, I'm just not comfortable in committing on this one yet.
 I can imagine a lot of users emulating this by simply iterating over the 
 entries in the result set. Even if we do just that and document it as slow, 
 I think it's still worth exposing this somewhere.
 I'm not questioning it to be useful. But the implementation is tricky,
 for example simply iterating would require a global (and distributed)
 lock to be accurate.
 Otherwise we can only document it as an approximation, and worse, we
 can't even estimate an error margin: under steady load it would
 probably be a reasonable estimation, but there are corner cases in
 which you can get off by several orders of magnitude.
 Among others, your assumptions would need to include:
- no nodes failing (no state transfers happening)
- no write spikes (probably one of the best reasons to deploy
 infinispan is to be able to absorb spikes)
- no wild churning across datastore/cachestores
 - expirations happening in a homogeneous pattern (this entirely
 depends on the use case)

 Also there is no way to make this work in the context of a
 transaction, as it would pretty much violate any promise of repeatable
 read properties.

 So the question is, if a user would still consider it useful after
 (hopefully) understanding all the attached strings.

 In other contexts we discussed the need for Infinispan to provide
 something like a snapshot capability based on TOA. If we had that, we
 could implement a count operation on top of it.

 Sanne
 ___
 infinispan-dev mailing list
 infinispan-dev@lists.jboss.org
 https://lists.jboss.org/mailman/listinfo/infinispan-dev


Re: [infinispan-dev] Design change in Infinispan Query

2014-02-26 Thread Adrian Nistor
On 02/26/2014 04:20 PM, Mircea Markus wrote:
 On Feb 26, 2014, at 2:13 PM, Dan Berindei dan.berin...@gmail.com wrote:



 On Wed, Feb 26, 2014 at 3:12 PM, Mircea Markus mmar...@redhat.com wrote:

 On Feb 25, 2014, at 5:08 PM, Sanne Grinovero sa...@infinispan.org wrote:

 There also is the opposite problem to be considered, as Emmanuel
 suggested on 11/04/2012:
 you can't forbid the user to store the same object (same type and same
 id) in two different caches, where each Cache might be using different
 indexing options.

 If the search service is a global concept, and you run a query which
 matches object X, we'll return it to the user but he won't be able to
 figure out from which cache it's being sourced: is that ok?
 Can't the user figure that out based on the way the query is built?
 I mean the problem is similar with the databases: if address is both a table 
 and a column in the USER table, then it's the query (select) that 
 determines where from the address is returned.

 You mean the user should specify the cache name(s) when building the query?
 yes
Let's say multiple caches are specified when building the query. How can 
I tell (with the current result API) where the matching entity comes 
from? I still think we should extend the result API in order to provide: 
1. the key of the entity, 2. the name of the originating cache. The old 
result API that just gives you an Iterator<Object> over the matches 
should continue to exist because it's more efficient for the cases when 
the user does not need #1 and #2.
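
A hypothetical sketch of what such an extended result API could look like. All names here are invented for illustration; no such interface exists in the codebase:

```java
import java.util.List;

// Hypothetical shape for an extended query result API; the names
// (SearchHit, Hit) are invented and not actual Infinispan types.
public class ResultApiSketch {
    interface SearchHit<E> {
        E entity();
        Object key();        // #1: the key of the matching entity
        String cacheName();  // #2: the name of the originating cache
    }

    // A record satisfies the interface via its generated accessors.
    record Hit<E>(E entity, Object key, String cacheName) implements SearchHit<E> {}

    public static void main(String[] args) {
        List<SearchHit<String>> results =
                List.of(new Hit<>("adrian", 42, "usersCache"));
        for (SearchHit<String> hit : results) {
            System.out.println(hit.cacheName() + " -> " + hit.key());
        }
    }
}
```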


 With a database you have to go a bit out of your way to select from more 
 than one table at a time, normally you have just one primary table that you 
 select from and the others are just to help you filter and transform that 
 table. You also have to add some information about the source table yourself 
 if you need it, otherwise the DB won't tell you what table the results are 
 coming from:

 SELECT table1 as source, id FROM table1
 UNION ALL
 SELECT table2 as source, id FROM table2

 Adrian tells our current query API doesn't allow us to do projections with 
 synthetic columns. On the other hand, we need to extend the current API to 
 give us the entry key anyway, so it would be easy to extend it to give us 
 the name of the cache as well.


 Ultimately this implies a query might return the same object X in
 multiple positions in the result list of the query; for example it
 might be the top result according to some criteria but also be the 5th
 result because of how it was indexed in a different case: maybe
 someone will find good use for this capability but I see it
 primarily as a source of confusion.
 Curious if this cannot be source of data can/cannot be specified within the 
 query.

 Right, the user should be able to scope a search to a single cache, or maybe 
 to multiple caches, even if there is only one global index.

 But I think the same object can already be inserted twice in the same cache, 
 only with a different key, so returning duplicates from a query is something 
 the user already has to cope with.


 Finally, if we move the search service as a global component, there
 might be an impact in how we explain security: an ACL filter applied
 on one cache - or the index metadata produced by that cache - might
 not be applied in the same way by an entity being matched through a
 second cache.
 Not least a user's permission to access one cache (or not) will affect
 his results in a rather complex way.
 I'll let Tristan comment more on this, but is this really different from an 
 SQL database where you grant access on individual tables and run a query 
 involving multiple of them?

 The difference would be that in a DB each table will have its own index(es), 
 so they only have to check the permissions once and not for every row.

 OTOH, if we plan to support key-level permissions, that would require 
 checking the permissions on each search result anyway, so this wouldn't cost 
 us anything.
   

 I'm wondering if we need to prevent such situations.

 Sanne

 On 25 February 2014 16:24, Mircea Markus mmar...@redhat.com wrote:
 On Feb 25, 2014, at 3:46 PM, Adrian Nistor anis...@gmail.com wrote:

 They can do what they please. Either put multiple types in one basket or 
 put them in separate caches (one type per cache). But allowing / 
 recommending is one thing, mandating it is a different story.

 There's no reason to forbid _any_ of these scenarios / mandate one over 
 the other! There was previously in this thread some suggestion of 
 mandating the one type per cache usage. -1 for it
 Agreed. I actually don't see how we can force people who declare 
 Cache<Object,Object> not to put whatever they want in it. Also makes total 
 sense for smaller caches as it is easy to set up etc.
 The debate in this email, the way I understood it, was: are/should people 
 using multiple caches for storing data? If yes we should consider querying 
 functionality spreading over multiple

Re: [infinispan-dev] Design change in Infinispan Query

2014-02-18 Thread Adrian Nistor
Well, OGM and Infinispan are different species :) So, Infinispan being 
what it is today - a non-homogeneous, schema-less KV store, without 
support for entity associations (except embedding) - which simplifies 
the whole thing a lot, should we or should we not provide transparent 
cross-cacheManager search capabilities, in this exact context? Vote?


There were some points raised previously like /if you search for more 
than one cache transparently, then you probably need to CRUD for more 
than one cache transparently as well/. In the SQL world you would also 
probably CRUD against a table or set of tables and then query against a 
view - a bit like what we're doing here. I don't see any problem with 
this in principle. There is however something currently missing in the 
query result set API - it currently does not provide you the keys of the 
matching entities. People work around this by storing the key in the 
entity.  Now with the addition of the cross-cacheManager search we'll 
probably need to fix the result API and also provide a reference to the 
cache (or just the name?) where the entity is stored.


The (enforced) one entity type per cache rule is not conceptually or 
technically required for implementing this, so I won't start raving 
against it :)  Sane users should apply it however.



On 02/18/2014 12:13 AM, Emmanuel Bernard wrote:

By the way, Mircea, Sanne and I had quite a long discussion about this one and 
the idea of one cache per entity. It turns out that the right (as in easy) 
solution does involve a higher level programming model like OGM provides. You 
can simulate it yourself using the Infinispan APIs but it is just cumbersome.


On 17 Feb 2014, at 18:51, Emmanuel Bernard emman...@hibernate.org wrote:


On Mon 2014-02-17 18:43, Galder Zamarreño wrote:


On 05 Feb 2014, at 17:30, Emmanuel Bernard emman...@hibernate.org wrote:


On Wed 2014-02-05 15:53, Mircea Markus wrote:


On Feb 3, 2014, at 9:32 AM, Emmanuel Bernard emman...@hibernate.org wrote:

Sure searching for any cache is useful. What I was advocating is that if you 
search for more than one cache transparently, then you probably need to CRUD 
for more than one cache transparently as well. And this is not being discussed.

Not sure what you mean by CRUD over multiple caches? ATM one can run a TX over 
multiple caches, but I think there's something else you have in mind :-)


//some unified query giving me entries pointing by fk copy to bar and
//buz objects. So I need to manually load these references.

//happy emmanuel
Cache unifiedCache = cacheManager.getMotherOfAllCaches();
Bar bar = unifiedCache.get("foo");
Buz buz = unifiedCache.get("baz");

//not so happy emmanuel
Cache fooCache = cacheManager.getCache("foo");
Bar bar = fooCache.get("foo");
Cache bazCache = cacheManager.getCache("baz");
Buz buz = bazCache.get("baz");

Would something like what Paul suggests in 
https://issues.jboss.org/browse/ISPN-3640 help you better? IOW, have a single 
cache, and then have a filtered view for Bar or Buz types? Not sure I 
understand the differences in your code changes in terms of what makes you 
happy vs not.

Not really.
What makes me unhappy is to have to keep in my app all the
references to these specific cache store instances. The filtering
approach only moves the problem.
___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev


Re: [infinispan-dev] Remote Query improvements

2014-02-10 Thread Adrian Nistor
The idea of auto-generating protobuf schemas based on the marshaller 
code was briefly mentioned last time we met in Palma. I would not 
qualify it as impossible to implement, but it would certainly be hacky 
and lead to more trouble than it's worth.

A lot of info is missing from the marshaller code (API calls) precisely 
because it is not normally needed, being provided by the schema already. 
Now trying to go backwards means we'll have to 'invent' that metadata 
using some common sense (examples: which field is required vs optional, 
which field is indexable, indexing options, etc). Too many options. I 
bet the notion of 'common sense' would quickly need to be configured 
somehow, for uncommon use cases :). But that's why we have protobuf 
schemas for. Plus, to run a marshaller for inferring the schema you'll 
first need a prototypical instance of your entity. Where from? So no, 
-1, now I have serious concerns about this, even though I initially 
nodded in approval.

And that would work only for Java anyway, because the marshaller and the 
schema-inferring process needs to run on the server side.


On 02/10/2014 07:34 PM, Mircea Markus wrote:
 On Feb 10, 2014, at 4:54 PM, Tristan Tarrant ttarr...@redhat.com wrote:

 - since remote query is already imbued with JPA in some form, an
 interesting project would be to implement a JPA annotation processor
 which can produce a set of ProtoBuf schemas from JPA-annotated classes.
 - on top of the above, a ProtoBuf marshaller/unmarshaller which can use
 the JPA entities directly.
 I think it would be even more useful to infer the protobuf schema from the 
 protostream marshaller: the marshaller is required in order to serialize 
 object into the proto format and has the advantage that it works even without 
 Java.

 Cheers,

___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev


Re: [infinispan-dev] Remote Query improvements

2014-02-10 Thread Adrian Nistor
Most of this is in jira already, so it would be good to comment there.

#1 = ISPN-3747  ISPN-3926
#2 = ISPN-3480 (wording is not the same, but it's the same issue)
#3 = ISPN-3718
#4 = 
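
Regarding the doclet-style indexing comments (#3), a sketch of roughly what is being proposed. The annotation names here are hypothetical; the exact vocabulary was still to be defined:

```protobuf
// Hypothetical doclet-style comments selecting fields for indexing.

/* @Indexed */
message User {
   /* @IndexedField */
   optional string name = 1;

   /* not annotated, so not indexed */
   optional string passwordHash = 2;
}
```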

On 02/10/2014 06:54 PM, Tristan Tarrant wrote:
 Hi everybody,

 last week I developed a simple application using Remote Query, and ran
 into a few issues. Some of them are just technical hurdles, while others
 have to do with the complexity of the developer experience. Here they
 are for open discussion:

 - the schemas registry should be persistent. Alternatively being able to
 either specify the ProtoBuf schema from the indexing / configuration
 in the server subsystem or use server's deployment processor to deploy
 schemas.
 - the server should store the single protobuf source schemas to allow
 for easy inspection/update of each using our management tools. The
 server itself should then compile the protobuf schemas into the binary
 representation when any of the source schemas changes. This would
 require a Java implementation of the ProtoBuf schema compiler, which
 wouldn't probably be too hard to do with Antlr.
 - we need to be able to annotate single protobuf fields for indexing
 (probably by using specially-formatted comments, a la doclets) to avoid
 indexing all of the fields
 - since remote query is already imbued with JPA in some form, an
 interesting project would be to implement a JPA annotation processor
 which can produce a set of ProtoBuf schemas from JPA-annotated classes.
 - on top of the above, a ProtoBuf marshaller/unmarshaller which can use
 the JPA entities directly.

 Tristan

 ___
 infinispan-dev mailing list
 infinispan-dev@lists.jboss.org
 https://lists.jboss.org/mailman/listinfo/infinispan-dev



Re: [infinispan-dev] reusing infinispan's marshalling

2014-01-31 Thread Adrian Nistor
Thanks Vladimir! It's a really fun and interesting discussion going on 
there :)


On 01/31/2014 06:29 PM, Vladimir Blagojevic wrote:
 Not 100% related to what you are asking about but have a look at this
 post and the discussion that erupted:

 http://gridgain.blogspot.ca/2012/12/java-serialization-good-fast-and-faster.html

 Vladimir
 On 1/30/2014, 7:13 AM, Adrian Nistor wrote:
 Hi list!

 I've been pondering about re-using the marshalling machinery of
 Infinispan in another project, specifically in ProtoStream, where I'm
 planning to add it as a test scoped dependency so I can create a
benchmark to compare marshalling performance. I'm basically interested 
 in comparing ProtoStream and Infinispan's JBoss Marshalling based
 mechanism. Comparing against plain JBMAR, without using the
 ExternalizerTable and Externalizers introduced by Infinispan is not
 going to get me accurate results.

But how? I see the marshalling is spread across infinispan-commons and 
 infinispan-core modules.

 Thanks!
 Adrian
 ___
 infinispan-dev mailing list
 infinispan-dev@lists.jboss.org
 https://lists.jboss.org/mailman/listinfo/infinispan-dev


Re: [infinispan-dev] Kryo performance (Was: reusing infinispan's marshalling)

2014-01-31 Thread Adrian Nistor
Indeed, I'm not looking for a JBMAR replacement, just trying to create a 
comparative benchmark between it and Protobuf/ProtoStream.

I'm trying to make an apples-to-apples comparison by marshalling the same 
domain model with both libraries. My (incipient) test currently 
indicates a 2x better write perf and 3x better read perf with 
ProtoStream but I'm sure it is flawed because I do not have custom JBMAR 
externalizers for my entities so I suspect it is basically resorting to 
plain old serialization. Was hoping to reuse this part from infinispan 
but it seems to be very tied to core.

Need to dig deeper into that awesome jbmar user guide :)
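
The suspicion above (no custom externalizers means falling back to default serialization, which drags per-field metadata into the stream) can be illustrated with the JDK alone: `java.io.Externalizable` plays the same role here that Infinispan's custom externalizers do, letting the class write a compact wire format itself. A self-contained sketch comparing stream sizes:

```java
import java.io.*;

public class ExternalizerDemo {
    // Default serialization: the stream carries field descriptors.
    public static class PlainPoint implements Serializable {
        private static final long serialVersionUID = 1L;
        int x, y;
        public PlainPoint(int x, int y) { this.x = x; this.y = y; }
    }

    // Externalizable: the class controls its own compact wire format,
    // analogous in spirit to a custom Infinispan externalizer.
    public static class TersePoint implements Externalizable {
        int x, y;
        public TersePoint() {}                          // required no-arg ctor
        public TersePoint(int x, int y) { this.x = x; this.y = y; }
        public void writeExternal(ObjectOutput out) throws IOException {
            out.writeInt(x); out.writeInt(y);
        }
        public void readExternal(ObjectInput in) throws IOException {
            x = in.readInt(); y = in.readInt();
        }
    }

    // Serialize an object and return the resulting stream size in bytes.
    static int size(Object o) {
        try {
            ByteArrayOutputStream bos = new ByteArrayOutputStream();
            try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
                oos.writeObject(o);
            }
            return bos.size();
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println("default serialization: " + size(new PlainPoint(1, 2)) + " bytes");
        System.out.println("externalizable:        " + size(new TersePoint(1, 2)) + " bytes");
    }
}
```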

On 01/31/2014 06:59 PM, Sanne Grinovero wrote:
 Changing the subject, as Adrian will need a reply to his (more
 important) question.

 I don't think we should go shopping for different marshaller
 implementations, especially given other priorities.

 I've been keeping an eye on Kryo for a while and it looks very good
 indeed, but JBMarshaller is serving us pretty well and I'm loving its
 reliability.

 If we need more speed in this area, I'd rather see us perform some
 very accurate benchmark development and try to understand why Kryo is
 faster than JBM (if it really is), and potentially improve JBM.
 For example, as I've already suggested, it's using an internal
 identity map to detect graphs, and often we might not need that. It
 would also be nice to refactor it to write to an existing byte
 stream rather than having it allocate internal buffers, and finally we
 might want a stateless edition so we can get rid of the need to pool
 JBMar instances.
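The "very accurate benchmark development" Sanne asks for could start from a harness like this minimal sketch, which times round-trips of one domain object using plain JDK serialization as the baseline leg; the JBMAR and ProtoStream legs would plug into the same write/read seam. All names here are illustrative, and a real comparison should use a proper harness (warm-up, multiple forks, e.g. JMH) rather than this naive loop.

```java
import java.io.*;

// Naive timing skeleton: serialize the same object N times, report ns/op
// and payload size. Swapping the write()/read() bodies for JBMAR or
// ProtoStream calls keeps the comparison apples-to-apples.
public class MarshallingBench {
    public static class Book implements Serializable {
        private static final long serialVersionUID = 1L;
        public String title, author;
        public Book(String title, String author) { this.title = title; this.author = author; }
    }

    public static byte[] write(Object o) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
            oos.writeObject(o);
        }
        return bos.toByteArray();
    }

    public static Object read(byte[] bytes) throws IOException, ClassNotFoundException {
        try (ObjectInputStream ois = new ObjectInputStream(new ByteArrayInputStream(bytes))) {
            return ois.readObject();
        }
    }

    public static void main(String[] args) throws Exception {
        Book book = new Book("Infinispan Data Grid", "Somebody");
        int iterations = 10_000;
        long start = System.nanoTime();
        byte[] bytes = null;
        for (int i = 0; i < iterations; i++) {
            bytes = write(book);              // the leg under test goes here
        }
        long elapsed = System.nanoTime() - start;
        Book copy = (Book) read(bytes);       // sanity-check the round-trip
        System.out.println("payload=" + bytes.length + " bytes, "
                + (elapsed / iterations) + " ns/op, roundtrip title=" + copy.title);
    }
}
```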

   -- Sanne



 On 31 January 2014 16:29, Vladimir Blagojevic vblag...@redhat.com wrote:
 Not 100% related to what you are asking about but have a look at this
 post and the discussion that erupted:

 http://gridgain.blogspot.ca/2012/12/java-serialization-good-fast-and-faster.html

 Vladimir
 On 1/30/2014, 7:13 AM, Adrian Nistor wrote:
 Hi list!

 I've been pondering about re-using the marshalling machinery of
 Infinispan in another project, specifically in ProtoStream, where I'm
 planning to add it as a test scoped dependency so I can create a
 benchmark to compare marshalling performance. I'm basically interested
 in comparing ProtoStream and Infinispan's JBoss Marshalling based
 mechanism. Comparing against plain JBMAR, without using the
 ExternalizerTable and Externalizers introduced by Infinispan is not
 going to get me accurate results.

 But how? I see the marshaling is spread across infinispan-commons and
 infinispan-core modules.

 Thanks!
 Adrian
 ___
 infinispan-dev mailing list
 infinispan-dev@lists.jboss.org
 https://lists.jboss.org/mailman/listinfo/infinispan-dev

___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev


[infinispan-dev] reusing infinispan's marshalling

2014-01-30 Thread Adrian Nistor
Hi list!

I've been pondering about re-using the marshalling machinery of 
Infinispan in another project, specifically in ProtoStream, where I'm 
planning to add it as a test scoped dependency so I can create a 
benchmark to compare marshalling performance. I'm basically interested 
in comparing ProtoStream and Infinispan's JBoss Marshalling based 
mechanism. Comparing against plain JBMAR, without using the 
ExternalizerTable and Externalizers introduced by Infinispan is not 
going to get me accurate results.

But how? I see the marshaling is spread across infinispan-commons and 
infinispan-core modules.

Thanks!
Adrian
___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev


[infinispan-dev] Remote queries over Hot Rod quick start guide

2014-01-13 Thread Adrian Nistor
Just in case you missed the tweet in December, I've posted this on the 
Infinispan blog too: 
http://blog.infinispan.org/2014/01/a-new-quick-start-guide-for-remote.html


___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev


Re: [infinispan-dev] manual eviction and indexing

2013-12-11 Thread Adrian Nistor
I agree. If indexing + (automatic) eviction + no cache store is 
currently allowed, then we should add a JIRA for this config validation.

But what about manual eviction? Would it make sense to handle it the way 
I did?

On 12/11/2013 02:32 PM, Sanne Grinovero wrote:
 Hi Adrian,
 +1 good catch.

 but what's a realistic use case for {indexing + eviction + no cachestore} ?
 I guess some use cases might exist but I don't think it's critical,
 would you agree?

 and what about automatic eviction?

 I think the guiding principle should be that if an entry can be
 retrieved by key it should be searchable, and vice-versa, if I can
 find it by running a query I should be able to load the result.
 So expiry and other forms of eviction should also be considered, but
 if there is no practical use case we can consider making this an
 illegal configuration or simply log a warning about the particular
 configuration.

 Sanne

 - Original Message -
 Hi Sanne,

 I found that manual eviction does not update the index. I think manual
 eviction should behave like a remove, if there are no cache stores
 configured.

 Here's a test and a 'fix' :)
 https://github.com/anistor/infinispan/tree/t_manual_evict_and_indexing

 Let's discuss this when you have time.

 There is also the more complex situation of in-DataContainer eviction ...
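Sanne's guiding principle — an entry retrievable by key should be searchable, and vice versa — can be modelled with a toy map-plus-inverted-index, where (absent a cache store) evict must update the index exactly like remove. This is plain JDK code with invented names, not Infinispan's implementation.

```java
import java.util.*;

// Toy model of the invariant discussed above: if evict only dropped the
// entry from `data`, queries would keep returning keys whose values are
// gone. Treating evict as remove keeps data and index consistent.
public class EvictIndexModel {
    private final Map<String, String> data = new HashMap<>();
    // inverted index: token -> keys of entries containing that token
    private final Map<String, Set<String>> index = new HashMap<>();

    public void put(String key, String value) {
        remove(key);                          // drop any stale index entries first
        data.put(key, value);
        for (String token : value.split("\\s+")) {
            index.computeIfAbsent(token, t -> new HashSet<>()).add(key);
        }
    }

    public void remove(String key) {
        String old = data.remove(key);
        if (old != null) {
            for (String token : old.split("\\s+")) {
                Set<String> keys = index.get(token);
                if (keys != null) {
                    keys.remove(key);
                    if (keys.isEmpty()) index.remove(token);
                }
            }
        }
    }

    // with no cache store, eviction behaves like a remove, index included
    public void evict(String key) { remove(key); }

    public Set<String> query(String token) {
        return index.getOrDefault(token, Collections.emptySet());
    }

    public String get(String key) { return data.get(key); }

    public static void main(String[] args) {
        EvictIndexModel cache = new EvictIndexModel();
        cache.put("isbn-1", "great IT book");
        System.out.println("before evict: " + cache.query("book"));
        cache.evict("isbn-1");
        System.out.println("after evict: " + cache.query("book"));
    }
}
```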


___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev


Re: [infinispan-dev] Infinispan 6.0.0.Final is out!

2013-11-20 Thread Adrian Nistor

Thanks for pointing that out. I've fixed the url in the blog post now.

On 11/20/2013 02:56 PM, Dan Berindei wrote:
Yes, it was implemented in Beta1: 
http://blog.infinispan.org/2013/09/heterogenous-clusters-with-infinispan.html 



Cheers
Dan


On Wed, Nov 20, 2013 at 11:37 AM, Radim Vansa rva...@redhat.com 
mailto:rva...@redhat.com wrote:


Hi, the heterogeneous clusters link does not work. I also don't see
any related JIRA in the release notes - is it really implemented?

Radim


On 11/19/2013 09:08 PM, Adrian Nistor wrote:

Dear Infinispan community,

We're pleased to announce the final release of Infinispan 6.0
Infinium. As announced
(http://infinispan.blogspot.co.uk/2013/05/infinispan-to-adopt-apache-software.html),
this is the first Infinispan stable version to be released under
the terms of the Apache License v2.0
(http://www.apache.org/licenses/LICENSE-2.0.html).

This release brings some highly demanded features besides many
stability enhancements and bug fixes:

 *
Support for remote query

http://blog.infinispan.org/2013/09/embedded-and-remote-query-in-infinispan.html.
It is now possible for the HotRod clients to query an
Infinispan grid using a new expressive query DSL. This
querying functionality is built on top of Apache Lucene and
Google Protobuf and lays the foundation for storing
information and querying an Infinispan server in a language
neutral manner. The Java HotRod client has already been
enhanced to support this, the soon-to-be announced C++ HotRod
client will also contain this functionality (initially for
write/read, then full blown querying).
 *
C++ HotRod client.  Allows C++ applications to read and write
information from an Infinispan server. This is a fully
fledged HotRod client that is topology (level 2) and
consistent hash aware (level 3) and will be released in the
following days. Some features (such as Remote Query and SSL
support) will be developed during the next iteration so that
it maintains feature parity with its Java counterpart.
 *
Better persistence integration. We've revisited the entire
cache loader API and we're quite pleased with the result: the
new Persistence API
(http://blog.infinispan.org/2013/09/new-persistence-api-in-infinispan.html)
brought by Infinispan 6.0 supports parallel iteration of the stored
entries, reduces the overall serialization overhead and also
is aligned with the JSR-107 specification
(http://jcp.org/en/jsr/detail?id=107), which
makes implementations more portable.

 *
A more efficient FileCacheStore implementation
(http://blog.infinispan.org/2013/07/faster-file-cache-store-no-extra.html).
This file store is built with efficiency in mind: it
outperforms the existing file store by up to two orders of
magnitude. This comes at a cost though, as keys need to be
kept in memory. Thanks to Karsten Blees
(https://github.com/kblees) for contributing this!
 *
Support for heterogeneous clusters

http://blog.infinispan.org/2013/09/heterogenous-clusters-with-infinispan.html.
Up to this release, every member of the cluster owns an equal
share of the cluster's data. This doesn't work well if one
machine is more powerful than the other cluster participants.
This functionality allows specifying the amount of data,
compared with the average, held by a particular machine.
 *
A new set of usage and performance statistics
(https://issues.jboss.org/browse/ISPN-2861) developed within
the scope of the CloudTM project
(https://issues.jboss.org/browse/ISPN-3234).
 *
JCache (JSR-107) implementation upgrade
(https://issues.jboss.org/browse/ISPN-3234). First released in
Infinispan 5.3.0, the standard caching support is now upgraded
to version 1.0.0-PFD.



For a complete list of features included in this release please
refer to the release notes
(https://issues.jboss.org/secure/ReleaseNote.jspa?projectId=12310799&version=12322480).
The user documentation for this release has been revamped and
migrated to the new website
(http://infinispan.org/documentation/) - we think it looks much
better and hope you'll like it too!
This release has spread over a period of 5 months: a sustained
effort from the core development team, QE team and our growing
community - a BIG thanks to everybody involved! Please visit our
downloads section (http://infinispan.org/download/) to find the
latest release. Also if you have any questions please check our
forums (http://infinispan.org/community/), our mailing lists
(https://lists.jboss.org/mailman/listinfo/infinispan-dev) or ping
us directly on IRC.

Cheers

Re: [infinispan-dev] Infinispan 6.0.0.Final is out!

2013-11-20 Thread Adrian Nistor

Thanks Manik! Website rendering and twitter are fixed now.

On 11/20/2013 12:49 AM, Manik Surtani wrote:

Congrats.

Looks like a few things went missing though:

* Nothing on Twitter?
* C++ docs fail - http://infinispan.org/docs/hotrod-clients/cpp/
* The fact that there is a stable but no unstable causes some 
weirdness in the website rendering.  See the Java client - 
http://infinispan.org/hotrod-clients/
* Some cache stores still undocumented - e.g., REST 
http://infinispan.org/docs/cachestores/rest/


- M


On 19 November 2013 12:08, Adrian Nistor anis...@redhat.com 
mailto:anis...@redhat.com wrote:



[infinispan-dev] Infinispan 6.0.0.Final is out!

2013-11-19 Thread Adrian Nistor

Dear Infinispan community,

We're pleased to announce the final release of Infinispan 6.0 
Infinium. As announced 
(http://infinispan.blogspot.co.uk/2013/05/infinispan-to-adopt-apache-software.html), 
this is the first Infinispan stable version to be released under the 
terms of the Apache License v2.0 
(http://www.apache.org/licenses/LICENSE-2.0.html).


This release brings some highly demanded features besides many stability 
enhancements and bug fixes:


 *
   Support for remote query
   
http://blog.infinispan.org/2013/09/embedded-and-remote-query-in-infinispan.html.
   It is now possible for the HotRod clients to query an Infinispan
   grid using a new expressive query DSL. This querying functionality
   is built on top of Apache Lucene and Google Protobuf and lays the
   foundation for storing information and querying an Infinispan server
   in a language neutral manner. The Java HotRod client has already
   been enhanced to support this, the soon-to-be announced C++ HotRod
   client will also contain this functionality (initially for
   write/read, then full blown querying).
 *
   C++ HotRod client.  Allows C++ applications to read and write
   information from an Infinispan server. This is a fully fledged
   HotRod client that is topology (level 2) and consistent hash aware
   (level 3) and will be released in the following days. Some features
   (such as Remote Query and SSL support) will be developed during the
   next iteration so that it maintains feature parity with its Java
   counterpart.
 *
   Better persistence integration. We've revisited the entire cache
   loader API and we're quite pleased with the result: the new
   Persistence API
   (http://blog.infinispan.org/2013/09/new-persistence-api-in-infinispan.html)
   brought by Infinispan 6.0 supports parallel iteration of the stored entries,
   reduces the overall serialization overhead and also is aligned with
   the JSR-107 specification (http://jcp.org/en/jsr/detail?id=107),
   which makes implementations more portable.

 *
   A more efficient FileCacheStore implementation
   (http://blog.infinispan.org/2013/07/faster-file-cache-store-no-extra.html).
   This file store is built with efficiency in mind: it outperforms the
   existing file store by up to two orders of magnitude. This comes at
   a cost though, as keys need to be kept in memory. Thanks to Karsten
   Blees (https://github.com/kblees) for contributing this!
 *
   Support for heterogeneous clusters
   
http://blog.infinispan.org/2013/09/heterogenous-clusters-with-infinispan.html.
   Up to this release, every member of the cluster owns an equal share
   of the cluster's data. This doesn't work well if one machine is more
   powerful than the other cluster participants. This functionality
   allows specifying the amount of data, compared with the average,
   held by a particular machine.
 *
   A new set of usage and performance statistics
   (https://issues.jboss.org/browse/ISPN-2861) developed within the
   scope of the CloudTM project (https://issues.jboss.org/browse/ISPN-3234).
 *
   JCache (JSR-107) implementation upgrade
   (https://issues.jboss.org/browse/ISPN-3234). First released in
   Infinispan 5.3.0, the standard caching support is now upgraded to
   version 1.0.0-PFD.



For a complete list of features included in this release please refer to 
the release notes 
(https://issues.jboss.org/secure/ReleaseNote.jspa?projectId=12310799&version=12322480).
The user documentation for this release has been revamped and migrated 
to the new website (http://infinispan.org/documentation/) - we think it 
looks much better and hope you'll like it too!
This release has spread over a period of 5 months: a sustained effort 
from the core development team, QE team and our growing community - a 
BIG thanks to everybody involved! Please visit our downloads section 
(http://infinispan.org/download/) to find the latest release. 
Also if you have any questions please check our forums 
(http://infinispan.org/community/), our mailing lists 
(https://lists.jboss.org/mailman/listinfo/infinispan-dev) or ping us 
directly on IRC (irc://irc.freenode.org/infinispan).


Cheers,
Adrian
___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev

Re: [infinispan-dev] Which QueryBuilder ?

2013-10-09 Thread Adrian Nistor
Hi Sanne and Martin,

these come from different products. Would they really appear in the same 
documentation and javadoc? They are usually used in different contexts too, 
although there might be users wanting to use both the HS QB and the ISPN QB in 
the same context, so to ease their pain I suggest using DslQueryBuilder 
in ISPN. Would that fit?

But if we start on this route should we also do something about 
org.infinispan.query.dsl.Query?

I wish we could do something about 
org.infinispan.configuration.cache.ConfigurationBuilder / 
org.infinispan.client.hotrod.configuration.ConfigurationBuilder and 
several other examples of colliding names but I guess it's too late for 
them.

Adrian

On 10/08/2013 06:32 PM, Martin Gencur wrote:
 Right Sanne,
 it's already starting to be a documentation problem for the product.
 It's really confusing unless you developed one of these APIs and
 precisely know the differences :) Any time there's a code snippet, it
 must contain the package (usually this is not needed), but even then
 it's easy to get confused.

 Martin


 On 3.10.2013 16:48, Sanne Grinovero wrote:
 On 3 October 2013 14:10, Adrian Nistor anis...@redhat.com wrote:
 I know, was just joking. Anyway, I don't see any confusion having two
 classes with the same name.
 It's going to be hard enough to explain to people why we are providing
 two different approaches; if we can't even think of a different name
 to properly highlight the different usage then we have a problem.

 Try imagining the forum / support question I'm using QueryBuilder on
 Infinispan 7.3 and this happens... 

 The javadoc index will have it listed twice - annoying.

 Google for QueryBuilder Infinispan - annoying

 Or try figuring out the documentation:

 # Chapter 5: Queries.
 There are two approaches to run Queries in Infinispan. Either you use
 the QueryBuilder, which provides simple domain oriented properties and
 can work both in embedded and remote mode, or you use the more
 powerful QueryBuilder.

 # 5.1 QueryBuilder
 blah blah

 # 5.2 QueryBuilder
 blah blah


 If they are different, they should really have different names, even
 just to avoid confusion among ourselves when talking about them. If you
 feel they're the same, the interesting alternative is to literally
 merge them in one single interface, potentially exposing multiple
 methods.

 Sanne

 On 10/03/2013 02:29 PM, Emmanuel Bernard wrote:
 It's already productized code.

 On Thu 2013-10-03 14:16, Adrian Nistor wrote:
 I would suggest renaming the old one :))

 On 10/02/2013 11:13 PM, Sanne Grinovero wrote:
 It seems we have now 2 different interfaces both named QueryBuilder
 when using Infinispan Query.
 One is coming from Hibernate Search, and represents the classic way
 to build queries for Infinispan Query in embedded mode.

 The other one is new, and represents the simplified approach, also
 implemented for remote queries.

 Could we find an alternative name for the new API?

 It's certainly going to be confusing, even more when we'll have to
 document the differences, and which one is more suited for one use
 cases vs. another.

 Cheers,
 Sanne
 ___
 infinispan-dev mailing list
 infinispan-dev@lists.jboss.org
 https://lists.jboss.org/mailman/listinfo/infinispan-dev

___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev


Re: [infinispan-dev] Warnings vs. Fail Fast

2013-10-04 Thread Adrian Nistor
+1

On 10/04/2013 02:43 AM, Sanne Grinovero wrote:
 Currently if a cache is configured with indexing enabled, but the
 Query module isn't on classpath, you get a simple warning.

 I think this should fail with a configuration validation error; it's
 not just safer but also consistent with many other validations.

 I've created ISPN-3583 and the patch is ready... any good reason not to apply it?
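The fail-fast behaviour being argued for, as opposed to logging a warning and limping along, boils down to a startup-time check along these lines. This is a hypothetical sketch with invented names, not the actual ISPN-3583 patch.

```java
// Fail-fast configuration validation: reject an inconsistent configuration
// at startup instead of emitting a warning and running in a broken state.
public class ConfigValidator {
    public static void validate(boolean indexingEnabled, boolean queryModuleOnClasspath) {
        if (indexingEnabled && !queryModuleOnClasspath) {
            throw new IllegalStateException(
                "Indexing is enabled but the query module is not on the classpath");
        }
    }

    public static void main(String[] args) {
        validate(true, true);   // consistent configuration passes silently
        try {
            validate(true, false);
        } catch (IllegalStateException e) {
            System.out.println("rejected at startup: " + e.getMessage());
        }
    }
}
```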

 Sanne
 ___
 infinispan-dev mailing list
 infinispan-dev@lists.jboss.org
 https://lists.jboss.org/mailman/listinfo/infinispan-dev

___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev


Re: [infinispan-dev] Which QueryBuilder ?

2013-10-03 Thread Adrian Nistor
I would suggest renaming the old one :))

On 10/02/2013 11:13 PM, Sanne Grinovero wrote:
 It seems we have now 2 different interfaces both named QueryBuilder
 when using Infinispan Query.
 One is coming from Hibernate Search, and represents the classic way
 to build queries for Infinispan Query in embedded mode.

 The other one is new, and represents the simplified approach, also
 implemented for remote queries.

 Could we find an alternative name for the new API?

 It's certainly going to be confusing, even more when we'll have to
 document the differences, and which one is more suited for one use
 cases vs. another.

 Cheers,
 Sanne
 ___
 infinispan-dev mailing list
 infinispan-dev@lists.jboss.org
 https://lists.jboss.org/mailman/listinfo/infinispan-dev

___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev


Re: [infinispan-dev] Which QueryBuilder ?

2013-10-03 Thread Adrian Nistor
I know, was just joking. Anyway, I don't see any confusion having two 
classes with the same name.

On 10/03/2013 02:29 PM, Emmanuel Bernard wrote:
 It's already productized code.

 On Thu 2013-10-03 14:16, Adrian Nistor wrote:
 I would suggest renaming the old one :))

 On 10/02/2013 11:13 PM, Sanne Grinovero wrote:
 It seems we have now 2 different interfaces both named QueryBuilder
 when using Infinispan Query.
 One is coming from Hibernate Search, and represents the classic way
 to build queries for Infinispan Query in embedded mode.

 The other one is new, and represents the simplified approach, also
 implemented for remote queries.

 Could we find an alternative name for the new API?

 It's certainly going to be confusing, even more when we'll have to
 document the differences, and which one is more suited for one use
 cases vs. another.

 Cheers,
 Sanne
 ___
 infinispan-dev mailing list
 infinispan-dev@lists.jboss.org
 https://lists.jboss.org/mailman/listinfo/infinispan-dev

___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev


Re: [infinispan-dev] Hibernate HQL Parser 1.0.0.Alpha4 released

2013-09-27 Thread Adrian Nistor
Thanks a lot!

On 09/27/2013 05:10 PM, Sanne Grinovero wrote:
 We just tagged and published another alpha for the new ANTLR3 based HQL 
 parser.

 Best,
 Sanne
 ___
 infinispan-dev mailing list
 infinispan-dev@lists.jboss.org
 https://lists.jboss.org/mailman/listinfo/infinispan-dev

___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev


Re: [infinispan-dev] input JSON - convert - put into ISPN cache ready for Queries

2013-09-26 Thread Adrian Nistor
Tomas, it seems you're looking at remote query over REST (via JSON). 
This has not been implemented yet but we might have it in 6.1.

In 6.0 we already have remote query over Hot Rod implemented (using 
protobuf encoding - much more compact than JSON). Some rough details 
available here: 
https://community.jboss.org/wiki/RemoteQueryDesignInInfinispan
We did not go the bytecode generation of annotated classes route as 
you're probably trying to do. Instead we use a pretty nifty generic 
entity (ProtobufValueWrapper) and a HSearch class bridge 
(ProtobufValueWrapperFieldBridge) - all configured programmatically, no 
annotations.
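The generic-entity idea Adrian describes can be sketched, in plain JDK code with invented names (this is not the real ProtobufValueWrapper), as a wrapper that keeps the raw payload alongside fields extracted by a bridge for indexing:

```java
import java.util.*;

// Illustrative sketch: instead of generating an annotated class per JSON
// type, store the raw payload in one generic wrapper and expose the
// fields the indexing engine needs through a map filled in by a "bridge".
public class ValueWrapper {
    private final byte[] rawPayload;                 // the JSON/protobuf bytes as received
    private final Map<String, String> indexedFields; // extracted generically, no annotations

    public ValueWrapper(byte[] rawPayload, Map<String, String> indexedFields) {
        this.rawPayload = rawPayload;
        this.indexedFields = Map.copyOf(indexedFields);
    }

    public byte[] raw() { return rawPayload; }
    public String field(String name) { return indexedFields.get(name); }
    public Set<String> fieldNames() { return indexedFields.keySet(); }

    public static void main(String[] args) {
        ValueWrapper w = new ValueWrapper(
                "{\"title\":\"great IT book\"}".getBytes(),
                Map.of("title", "great IT book"));
        System.out.println("indexed fields: " + w.fieldNames()
                + ", title=" + w.field("title"));
    }
}
```

One wrapper type serves every payload shape, which is why no per-type bytecode generation is needed on the server side.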

Let's chat tomorrow on #infinispan and see how we may help you.

Cheers

On 09/25/2013 11:13 PM, Tomas Sykora wrote:
 Hi team!

 I need to ask for your help.
 It's connected to the OData endpoint. 
 (https://issues.jboss.org/browse/ISPN-2109) I was thinking about the design 
 etc. and it would be nice to map OData queries to Infinispan queries so 
 clients can get their results based on a particular query.

 You know, there is basically not much to do with only a schema-less key-value 
 store. Exposing values to clients based only on their key requests does 
 not fully use OData's capabilities.

 So I was thinking about something like that...

 From any client you are sending JSON object (for example a Book, with 
 variables: title, author, description) to OData service and would like to 
 store query-able Book Object value into the cache under some key.

 So you go: JSON -> query-able Book.class object -> cache.put(key, 
 bookFromJson);
 Then, in pseudo query: get-me-books-filter-description-contains-great IT 
 book-top-N-results -> issue query on cache, get results -> transform 
 returned Book.class objects into JSON, return to client

 My question is:

 How to transform JSON input, which is in most cases a simple String built 
 according to JSON rules, into an object which is query-able and can be put into 
 the cache.

 The thing is that you usually have a Java class:

 @Indexed
 class Book {

 @Field String title;
 @Field String author;

 // etc.
 }

 I simply don't know how to create an object ready for queries, or even an 
 annotated class, and instantiate it to then put it into the cache.
 I'm discovering this, recently: http://www.jboss.org/javassist

 Or do you see any other, maybe totally different, approach to doing it?

 THANK YOU very much for any input!
 I'm stuck on this right now... that's why I'm asking for help.

 Have a nice day all!
 Tomas
 ___
 infinispan-dev mailing list
 infinispan-dev@lists.jboss.org
 https://lists.jboss.org/mailman/listinfo/infinispan-dev

___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev


[infinispan-dev] New blog post: Embedded and remote queries in Infinispan

2013-09-26 Thread Adrian Nistor
Just in case you missed it today: 
http://infinispan.blogspot.com/2013/09/embedded-and-remote-query-in-infinispan.html

Cheers
___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev


Re: [infinispan-dev] Client Hot Rod query tests halting CI runs

2013-09-23 Thread Adrian Nistor
I think I fixed that in a PR that is still waiting: 
https://github.com/infinispan/infinispan/pull/2081

On 09/23/2013 12:58 PM, Anna Manukyan wrote:
 Hi,

 I would like to add that the issue appears for both 
 org.infinispan.client.hotrod.query.HotRodQueryIspnDirectoryTest and 
 org.infinispan.client.hotrod.query.MultiHotRodServerQueryTest tests, 
 specifically the issue appears while starting the Hot Rod servers 
 (createHotRodServers(...)).

 Also I would like to mention that the HotRodQueryIspnDirectoryTest was 
 working properly before the recent commits.

 Regards,
 Anna.

 - Original Message -
 From: Galder Zamarreño gal...@redhat.com
 To: Adrian Nistor anis...@redhat.com
 Cc: infinispan -Dev List infinispan-dev@lists.jboss.org
 Sent: Monday, September 23, 2013 11:19:54 AM
 Subject: Re: [infinispan-dev] Client Hot Rod query tests halting CI runs

 Btw, this is not related to Tristan's PR. Master showing it too:
 http://ci.infinispan.org/viewLog.html?buildId=3309&buildTypeId=bt8&tab=buildLog

 Cheers,

 On Sep 23, 2013, at 11:17 AM, Galder Zamarreño gal...@redhat.com wrote:

 Hi,

 Re: 
 http://ci.infinispan.org/viewLog.html?buildId=3314&buildTypeId=bt9&tab=buildLog

 Seems like there are some issues with the Hot Rod query tests that are leading to 
 the testsuite halting. In particular, it seems the issue stops 
 HotRodQueryIspnDirectoryTest from starting up correctly.

 Cheers,
 --
 Galder Zamarreño
 gal...@redhat.com
 twitter.com/galderz

 Project Lead, Escalante
 http://escalante.io

 Engineer, Infinispan
 http://infinispan.org


 --
 Galder Zamarreño
 gal...@redhat.com
 twitter.com/galderz

 Project Lead, Escalante
 http://escalante.io

 Engineer, Infinispan
 http://infinispan.org


 ___
 infinispan-dev mailing list
 infinispan-dev@lists.jboss.org
 https://lists.jboss.org/mailman/listinfo/infinispan-dev

___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev

Re: [infinispan-dev] Query and dynamic discovery of types

2013-09-20 Thread Adrian Nistor
Could you detail please?

On 09/20/2013 05:43 PM, Sanne Grinovero wrote:
 On 20 September 2013 11:19, Mircea Markus mmar...@redhat.com wrote:
 Thanks for the heads up!

 It's not clear to me what the functional impact of ISPN-2143 is: incomplete 
 query results?
 yes

 On Sep 19, 2013, at 11:42 AM, Sanne Grinovero sa...@infinispan.org wrote:

 For Infinispan 6.0 we decided that the following issue is a blocker:

 https://issues.jboss.org/browse/ISPN-2143 - Improve how different
 indexed caches sync up on new indexed types

 It really is important, and I'm a bit concerned that it was moved to
 CR1 as it might not be trivial: for embedded query it needs to store a
 reference to class definitions (potentially with classloader
 information too), and for remote queries it needs to distribute the
 indexing configuration schema.

 Also I'm wondering if we shouldn't go even further: in Hibernate
 Search the internal complexity of handling newly appearing types
 (rather than statically configured types) is very hard.

 In the next version we might need to drop this, so it would be great
 if we could stabilize the Infinispan Query API right now in such a way
 that it won't need this feature anymore.

 I proposed it years ago but it was rejected because of usability
 concerns.. would you still reject the idea to simply list the indexed
 types at configuration time? If so, I think we need to explore
 alternative solutions.

 Note that I only want to remove the dependency on dynamic _Class_
 definitions being added at runtime; we will still support dynamically
 defined types so in case you need extreme flexibility like with remote
 queries that will work, as long as you provide a smart bridge able to
 reconfigure itself based on the dynamic metadata; I think that's a
 cleaner approach as you would be directly in control of it rather than
 having to workaround a design which was thought for a different
 purpose.

 Sanne
 ___
 infinispan-dev mailing list
 infinispan-dev@lists.jboss.org
 https://lists.jboss.org/mailman/listinfo/infinispan-dev
 Cheers,
 --
 Mircea Markus
 Infinispan lead (www.infinispan.org)





 ___
 infinispan-dev mailing list
 infinispan-dev@lists.jboss.org
 https://lists.jboss.org/mailman/listinfo/infinispan-dev

___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev


Re: [infinispan-dev] Infinispan Server playing catch up :(

2013-09-13 Thread Adrian Nistor
Hi Galder,

Regarding those two dependencies, the issue appeared after I've made 
infinispan-remote-query-client optional in the HotRod client this week (see the 
email with subject "HotRod client dependencies in 6.0.0.Alpha4" [1]).

To fix this now we need to add infinispan-remote-query-client dependency 
in testsuite/example-configs/pom.xml as you said, but I don't think we 
need to add protostream explicitly because that is a transitive dep of
infinispan-remote-query-client.

So the real problem here is that infinispan-remote-query-client is not 
actually optional. I believe RemoteCacheManager or RemoteCacheImpl still 
has a hard dependency on it. Will investigate asap.

Cheers,
Adrian

[1] http://markmail.org/message/whumtx7qtvzpdnxf

On 09/13/2013 10:30 AM, Galder Zamarreño wrote:
 Hey,

 Infinispan Server CI is failing because the REST cache store is not installed. I 
 guess we need to modify the CI script to build the REST cache store beforehand 
 too? [1]

 Also, while trying to replicate some JIRAs in Server, I've spotted two errors 
 [2] and once that was fixed by adding the dependency to the testsuite pom, 
 then [3]. The fix is simple, just add these dependencies to 
 testsuite/example-configs/pom.xml:

<dependency>
   <groupId>org.infinispan.protostream</groupId>
   <artifactId>protostream</artifactId>
</dependency>
<dependency>
   <groupId>org.infinispan</groupId>
   <artifactId>infinispan-remote-query-client</artifactId>
</dependency>

 However, this is very frustrating and slows down resolution of other issues. 
 This fix is something that should have been done when remote querying was added, 
 but it wasn't because the CI/PR integration didn't catch it.

 So, we need to rethink CI/PR integration in such a way that whenever a PR is 
 sent to infinispan/infinispan, all potentially dependent CIs need to run, 
 which are:
 - infinispan/infinispan
 - infinispan/infinispan-server
 - infinispan/cachestore-*

 Until that happens, infinispan/infinispan-server and infinispan/cachestore-* 
 are always gonna be playing catch up :(

 WRT REST cache store dependency miss, when a PR is sent to 
 infinispan/infinispan-server, it probably needs to build (but not test) 
 infinispan/infinispan and infinispan/cachestore-* to make sure all the latest 
 artifacts are available.

 Does this make sense? Is this doable in our TeamCity installation?

 Cheers,

 [1] 
 http://ci.infinispan.org/viewLog.html?buildId=3141&buildTypeId=bt11&tab=buildLog
 [2] https://gist.github.com/galderz/61985831e87780cb2ca2
 [3] https://gist.github.com/galderz/6897fd8ddfa187754b36
 --
 Galder Zamarreño
 gal...@redhat.com
 twitter.com/galderz

 Project Lead, Escalante
 http://escalante.io

 Engineer, Infinispan
 http://infinispan.org


 ___
 infinispan-dev mailing list
 infinispan-dev@lists.jboss.org
 https://lists.jboss.org/mailman/listinfo/infinispan-dev

___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev


Re: [infinispan-dev] Infinispan Server playing catch up :(

2013-09-13 Thread Adrian Nistor
Ok, regarding today's build failure, please do not add those 
dependencies to server in order to fix it as the proper fix should be in 
infinispan. This PR solves it: 
https://github.com/infinispan/infinispan/pull/2069

Cheers,
Adrian

On 09/13/2013 01:10 PM, Galder Zamarreño wrote:
 On Sep 13, 2013, at 10:05 AM, Adrian Nistor anis...@redhat.com wrote:

 Hi Galder,

 Regarding those two dependencies, the issue appeared after I've made 
 infinispan-remote-query-client optional in the HotRod client this week (see the 
 email with subject "HotRod client dependencies in 6.0.0.Alpha4" [1]).

 To fix this now we need to add infinispan-remote-query-client dependency in 
 testsuite/example-configs/pom.xml as you said, but I don't think we need to 
 add protostream explicitly because that is a transitive dep of
 infinispan-remote-query-client.
 ^ That might be because of the order in which the errors appeared.

 Also IIRC, transitive dependencies might be disabled in AS-based builds.

 So the real problem here is that infinispan-remote-query-client is not 
 actually optional. I believe RemoteCacheManager or RemoteCacheImpl still has 
 a hard dependency on it. Will investigate asap.
 I think you might be missing the point of my email by focusing on the specific 
 issue discovered.

 Regardless of what the problem is underneath, anything that's not 
 infinispan/infinispan is lagging behind when it comes to finding issues with 
 code submitted by people. That's what really needs addressing, independent of 
 the individual issues.

 Cheers,

 Cheers,
 Adrian

 [1] http://markmail.org/message/whumtx7qtvzpdnxf

 On 09/13/2013 10:30 AM, Galder Zamarreño wrote:
 Hey,

 Infinispan Server CI is failing because the REST cache store is not installed. 
 I guess we need to modify the CI script to build the REST cache store 
 beforehand too? [1]

 Also, while trying to replicate some JIRAs in Server, I've spotted two 
 errors [2] and once that was fixed by adding the dependency to the 
 testsuite pom, then [3]. The fix is simple, just add these dependencies to 
 testsuite/example-configs/pom.xml:

<dependency>
   <groupId>org.infinispan.protostream</groupId>
   <artifactId>protostream</artifactId>
</dependency>
<dependency>
   <groupId>org.infinispan</groupId>
   <artifactId>infinispan-remote-query-client</artifactId>
</dependency>

 However, this is very frustrating and slows down resolution of other 
 issues. This fix is something that should have been done when remote querying 
 was added, but it wasn't because the CI/PR integration didn't catch it.

 So, we need to rethink CI/PR integration in such a way that whenever a PR is 
 sent to infinispan/infinispan, all potentially dependent CIs need to run, 
 which are:
 - infinispan/infinispan
 - infinispan/infinispan-server
 - infinispan/cachestore-*

 Until that happens, infinispan/infinispan-server and 
 infinispan/cachestore-* are always gonna be playing catch up :(

 WRT REST cache store dependency miss, when a PR is sent to 
 infinispan/infinispan-server, it probably needs to build (but not test) 
 infinispan/infinispan and infinispan/cachestore-* to make sure all the 
 latest artifacts are available.

 Does this make sense? Is this doable in our TeamCity installation?

 Cheers,

 [1] 
 http://ci.infinispan.org/viewLog.html?buildId=3141&buildTypeId=bt11&tab=buildLog
 [2] https://gist.github.com/galderz/61985831e87780cb2ca2
 [3] https://gist.github.com/galderz/6897fd8ddfa187754b36
 --
 Galder Zamarreño
 gal...@redhat.com
 twitter.com/galderz

 Project Lead, Escalante
 http://escalante.io

 Engineer, Infinispan
 http://infinispan.org


 ___
 infinispan-dev mailing list
 infinispan-dev@lists.jboss.org
 https://lists.jboss.org/mailman/listinfo/infinispan-dev

 --
 Galder Zamarreño
 gal...@redhat.com
 twitter.com/galderz

 Project Lead, Escalante
 http://escalante.io

 Engineer, Infinispan
 http://infinispan.org


___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev


Re: [infinispan-dev] SNAPSHOT dependencies on protostream

2013-09-12 Thread Adrian Nistor
When issues like HSEARCH-1396 "A FieldBridge should be able to easily 
acquire services/resources" or HSEARCH-1397 "Ability to offer the notion 
of composite bridge" get fixed, I'd like to be able to update infinispan 
code immediately rather than waiting for the next hsearch alpha release, 
even if that happens on the same day. Or is it more manageable for 
hsearch to make piecemeal alpha releases after each issue is fixed?

On 09/12/2013 05:11 PM, Sanne Grinovero wrote:
 +1
 and maybe you want SNAPSHOT dependencies of Hibernate Search ?

 On 12 September 2013 16:08, Mircea Markus mmar...@redhat.com wrote:
 Hi,

 Whilst having dependencies on SNAPSHOTs is not generally good, we do allow 
 SNAPSHOT dependencies between our own components: e.g. the cache stores' 
 upstream depends on a SNAPSHOT of Infinispan core. I think the rule should 
 still apply for protostream, i.e. on master, the query module should be allowed 
 to depend on protostream-SNAPSHOT, assuming protostream doesn't have external 
 SNAPSHOT dependencies (it doesn't). Otherwise we end up releasing a new 
 protostream whenever e.g. a bug gets fixed and query needs it - way too 
 often.

 Of course we shouldn't allow any SNAPSHOT dependencies during the release.

 Cheers,
 --
 Mircea Markus
 Infinispan lead (www.infinispan.org)





 ___
 infinispan-dev mailing list
 infinispan-dev@lists.jboss.org
 https://lists.jboss.org/mailman/listinfo/infinispan-dev

___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev


Re: [infinispan-dev] HotRod client dependencies in 6.0.0.Alpha4

2013-09-11 Thread Adrian Nistor
Hi Tristan,

the remote-query dependencies are now optional in this PR: 
https://github.com/infinispan/infinispan/pull/2059
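
For anyone following along, making a dependency optional in Maven is just a 
matter of the `<optional>` flag. A sketch of the idea (not the literal PR diff; 
artifact coordinates taken from Tristan's dependency tree below, versions 
omitted):

```xml
<!-- In the hotrod-client pom: consumers who don't use remote query
     will no longer pull these in transitively -->
<dependency>
  <groupId>org.infinispan</groupId>
  <artifactId>infinispan-remote-query-client</artifactId>
  <optional>true</optional>
</dependency>
<dependency>
  <groupId>org.infinispan</groupId>
  <artifactId>infinispan-query-dsl</artifactId>
  <optional>true</optional>
</dependency>
```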

Cheers

On 09/10/2013 03:25 PM, Adrian Nistor wrote:
 I'll solve the ones that result from remote query immediately.

 On 09/10/2013 12:40 PM, Tristan Tarrant wrote:
 Hi all,

 at the beginning of the 6.0.0 cycle I worked to reduce the number of
 dependencies required by the hotrod client. Unfortunately, with the
 introduction of remote querying, the number of dependencies has gone up
 again:

 +- org.infinispan:infinispan-commons:jar:6.0.0.Alpha4:compile
+- 
 org.jboss.marshalling:jboss-marshalling-river:jar:1.3.15.GA:compile
+- commons-pool:commons-pool:jar:1.6:compile
+- org.infinispan:infinispan-query-dsl:jar:6.0.0.Alpha4:compile
+- 
 org.infinispan:infinispan-remote-query-client:jar:6.0.0.Alpha4:compile
| \- com.google.protobuf:protobuf-java:jar:2.5.0:compile
+- log4j:log4j:jar:1.2.16:compile
+- net.jcip:jcip-annotations:jar:1.0:compile

 we should strive to remove the unnecessary ones (jcip, log4j) and make
 the ones needed for remote query optional.

 What do you think ?

 Tristan
 ___
 infinispan-dev mailing list
 infinispan-dev@lists.jboss.org
 https://lists.jboss.org/mailman/listinfo/infinispan-dev


___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev


[infinispan-dev] Indexing for remote query

2013-09-04 Thread Adrian Nistor
Hi devs,

Currently remote caches have byte[] values. With the introduction of 
remote query these byte arrays are no longer opaque for search-enabled 
caches: they are Protobuf encoded, their schema is known, and we have 
to somehow help hibernate-search extract the indexable data and index 
it. As a side note, the indexing for embedded mode is done by 
QueryInterceptor, which relies on hibernate-search annotated cache 
values. This definitely does not work directly for byte[].

One option is to write another interceptor similar to QueryInterceptor 
but capable of making sense of the byte[], but that would duplicate 
functionality (and maintenance) and would have to use Lucene directly, 
because in order to use hibernate-search you would need annotations on 
your entities. And if we go this route we might also need to duplicate 
the MassIndexer.

A second more preferable option is to wrap the byte[] values into a 
hsearch FieldBridge or ClassBridge annotated entity so we can continue 
to use existing QueryInterceptor. This was implemented in the PR [1], 
the wrapping entity being 
org.infinispan.query.remote.indexing.ProtobufValueWrapper and the place 
where the wrapping happens is a new interceptor, 
org.infinispan.query.remote.indexing.RemoteValueWrapperInterceptor. 
RemoteValueWrapperInterceptor uses a mechanism similar to 
TypeConverterInterceptor (of compat mode), so a common base class was 
extracted for them.

The wrapping approach has received favourable review so far. But I'm 
asking myself (and you) maybe there's an even simpler solution that I've 
overlooked? Or maybe we can do the wrapping somehow without introducing 
a new interceptor?
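
To make the wrapping idea concrete, here is a minimal self-contained sketch. 
The hibernate-search types are omitted and the method bodies are illustrative 
only; the class and interceptor names follow the PR, nothing else here is the 
actual implementation:

```java
// Sketch of option two: wrap raw Protobuf bytes on the way into the cache so
// the existing QueryInterceptor only ever sees one (annotated) entity type.
import java.util.Arrays;

public class WrapperSketch {

    // Plays the role of ProtobufValueWrapper: carries the raw Protobuf bytes
    // so a hsearch FieldBridge/ClassBridge can later decode and index them.
    static final class ProtobufValueWrapper {
        private final byte[] binary;

        ProtobufValueWrapper(byte[] binary) {
            this.binary = binary;
        }

        byte[] getBinary() {
            return binary;
        }

        // equals/hashCode matter because the wrapper is used as a cache value
        @Override
        public boolean equals(Object o) {
            return o instanceof ProtobufValueWrapper
                    && Arrays.equals(binary, ((ProtobufValueWrapper) o).binary);
        }

        @Override
        public int hashCode() {
            return Arrays.hashCode(binary);
        }
    }

    // Plays the role of RemoteValueWrapperInterceptor: wrap on the way in,
    // unwrap on the way out (similar in spirit to TypeConverterInterceptor).
    static Object wrap(Object value) {
        return value instanceof byte[] ? new ProtobufValueWrapper((byte[]) value) : value;
    }

    static Object unwrap(Object value) {
        return value instanceof ProtobufValueWrapper ? ((ProtobufValueWrapper) value).getBinary() : value;
    }

    public static void main(String[] args) {
        byte[] encoded = {10, 3, 'f', 'o', 'o'}; // pretend Protobuf payload
        Object stored = wrap(encoded);
        byte[] roundTripped = (byte[]) unwrap(stored);
        System.out.println(Arrays.equals(encoded, roundTripped)); // true
    }
}
```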

Thanks!

[1] https://github.com/infinispan/infinispan/pull/2022
___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev


Re: [infinispan-dev] requirements for the new CacheStore API

2013-07-25 Thread Adrian Nistor
State transfer currently needs to retrieve from the cache store just certain 
CH segments, but this is not directly possible with the current API, so we 
have to iterate over the entire set. Would it make sense to add 
retrieval methods that allow a segment or set of segments to be specified?

Adrian

On 07/24/2013 01:55 PM, Mircea Markus wrote:
 Hi,

 Starting from the original document Manik rolled out a while ago [1], here's 
 the list of requirements I'm currently aware of in the context of the new 
 CacheStore API:
 - better integration with the fluent API (CacheStore.init() is horrendous)
 - support for non-distributed transaction cache stores (1PC) and support for 
 XA capable cache store
 - support iteration over all the keys/entries in the store
- needed for efficient Map/Reduce integration
- needed for efficient implementation of Cache.keySet(), Cache.entrySet(), 
 Cache.values() methods
 - a simple read(k) + write(k,v) interface to be implemented by users that 
 just want to position ISPN as a cache between an app and a legacy system and 
 which don't need/want to be bothered with all the other complex features
 - support for expiration notification (ISPN-3064)
 - support for size (efficient implementation of the cache.size() method)

 Re: JSR-107 integration, I don't think we should depend on the JSR-107 API as 
 it forces us to use JSR-107 internal structures[2] but we should at least 
 provide an adapter layer.

 [1] https://community.jboss.org/wiki/CacheLoaderAndCacheStoreSPIRedesign
 [2] 
 https://github.com/jsr107/jsr107spec/blob/v0.8/src/main/java/javax/cache/integration/CacheWriter.java#L59

 Cheers,

___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev


Re: [infinispan-dev] requirements for the new CacheStore API

2013-07-25 Thread Adrian Nistor
Ah, but that could easily be done with the Filter that was already 
proposed. Pls ignore my previous question :)
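
For illustration, a minimal sketch of how a key filter covers the segment 
case. The real consistent hash maps keys to segments via MurmurHash3 over the 
serialized key; plain hashCode() stands in here, and all names are 
hypothetical, not the proposed CacheStore SPI:

```java
// Loading only the entries that fall into a set of CH segments by pushing a
// key predicate into the store iteration, instead of a dedicated
// load-by-segment method.
import java.util.HashMap;
import java.util.Map;
import java.util.Set;
import java.util.function.Predicate;

public class SegmentFilterSketch {

    // Stand-in for the consistent hash's key-to-segment mapping
    static int segmentOf(Object key, int numSegments) {
        return Math.floorMod(key.hashCode(), numSegments);
    }

    // What a filter-based bulk load boils down to: iterate everything,
    // keep only keys whose segment is wanted.
    static Map<String, String> loadSegments(Map<String, String> store,
                                            Set<Integer> wantedSegments,
                                            int numSegments) {
        Predicate<String> keyFilter =
                k -> wantedSegments.contains(segmentOf(k, numSegments));
        Map<String, String> result = new HashMap<>();
        for (Map.Entry<String, String> e : store.entrySet()) {
            if (keyFilter.test(e.getKey())) { // skip keys outside the segments
                result.put(e.getKey(), e.getValue());
            }
        }
        return result;
    }

    public static void main(String[] args) {
        Map<String, String> store = new HashMap<>();
        for (int i = 0; i < 8; i++) store.put("k" + i, "v" + i);
        Map<String, String> loaded = loadSegments(store, Set.of(0, 1), 4);
        // every loaded key maps to one of the requested segments
        for (String k : loaded.keySet()) {
            System.out.println(k + " -> segment " + segmentOf(k, 4));
        }
    }
}
```

Note the trade-off this thread is about: the filter keeps the SPI small, but 
the store still scans every entry, whereas a first-class segment parameter 
would let a store index by segment and skip the scan.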

On 07/25/2013 04:51 PM, Adrian Nistor wrote:
 State transfer currently needs to retrieve from cache store just certain
 CH segments but this is not directly possible with current API so we
 have to iterate over the entire set. Would it make sense to add
 retrieval methods that allow a segment or set of segments to be specified?

 Adrian

 On 07/24/2013 01:55 PM, Mircea Markus wrote:
 Hi,

 Starting from the original document Manik rolled out a while ago [1], here's 
 the list of requirements I'm currently aware of in the context of the new 
 CacheStore API:
 - better integration with the fluent API (CacheStore.init() is horrendous)
 - support for non-distributed transaction cache stores (1PC) and support for 
 XA capable cache store
 - support iteration over all the keys/entries in the store
 - needed for efficient Map/Reduce integration
 - needed for efficient implementation of Cache.keySet(), 
 Cache.entrySet(), Cache.values() methods
 - a simple read(k) + write(k,v) interface to be implemented by users that 
 just want to position ISPN as a cache between an app and a legacy system and 
 which don't need/want to be bothered with all the other complex features
 - support for expiration notification (ISPN-3064)
 - support for size (efficient implementation of the cache.size() method)

 Re: JSR-107 integration, I don't think we should depend on the JSR-107 API 
 as it forces us to use JSR-107 internal structures[2] but we should at least 
 provide an adapter layer.

 [1] https://community.jboss.org/wiki/CacheLoaderAndCacheStoreSPIRedesign
 [2] 
 https://github.com/jsr107/jsr107spec/blob/v0.8/src/main/java/javax/cache/integration/CacheWriter.java#L59

 Cheers,
 ___
 infinispan-dev mailing list
 infinispan-dev@lists.jboss.org
 https://lists.jboss.org/mailman/listinfo/infinispan-dev

___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev


Re: [infinispan-dev] 6.0.0.Alpha1to be released Monday

2013-07-17 Thread Adrian Nistor
Hi Bela,
The infinispan 6.0.0.Alpha1 artifacts are already released in maven and 
there is no way to update them.

Let's do that in one of the upcoming ispn alphas.

Adrian

On 07/17/2013 09:38 AM, Bela Ban wrote:
 I'm wondering whether it would make sense to release an alpha1 of
 JGroups 3.4 as well, which could be shipped by Infinispan 6.0.0.Alpha1.

 Do you still have time to integrate and test 3.4.0.Alpha1 ?

 Since 6.0.0.Alpha1 is Apache 2.0 licensed, I think it should ship with
 JGroups 3.4.0.Alpha1 which is also AL 2.0 licensed, and *not* 3.3 which
 is LGPL...

 On 7/16/13 9:15 PM, Adrian Nistor wrote:
 A short update on the release progress.

 The PRs have been in since yesterday, the infinispan 6.0.0.Alpha1 artifacts
 were built and deployed to the maven releases repo, but I'm still stuck
 building infinispan-server which fails on my machine and on the release
 machine and mysteriously works in TeamCity. I suspect a (random)
 dependency resolution problem.

 I will sort out the infinispan-server tomorrow and continue the release.

 Adrian

 On 07/12/2013 05:30 PM, Mircea Markus wrote:
 Hi,

 There are some very important pull requests pending:
 - protostream[1] integration in the java hotrod client
 - file cache store
 - advanced stats

 Let's release on Mon given that the above would make it in.

 [1] https://github.com/infinispan/protostream

 Cheers,
 ___
 infinispan-dev mailing list
 infinispan-dev@lists.jboss.org
 https://lists.jboss.org/mailman/listinfo/infinispan-dev


___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev


[infinispan-dev] Infinispan 6.0.0.Alpha1 is out!

2013-07-17 Thread Adrian Nistor

Dear Infinispan community,

We're proud to announce the first Alpha release of Infinispan 6.0.0. 
Starting with this release, the Infinispan license is moving to the terms of 
the Apache Software License version 2.0 
(http://www.apache.org/licenses/LICENSE-2.0).

Besides increased stability (about 30 bug fixes), this release also 
brings several new features:

 * A more efficient FileCacheStore implementation (courtesy of Karsten Blees)
 * A new set of usage and performance statistics developed within the
   scope of the CloudTM project
 * A new (experimental) marshaller for Hot Rod based on protobuf
   (http://code.google.com/p/protobuf/), which will be primarily used
   by the upcoming remote querying feature. Since this has reuse
   potential in other projects, it was promoted to an independent
   project named protostream
   (https://github.com/infinispan/protostream) under the Infinispan
   umbrella

For a complete list of features and fixes included in this release 
please refer to the release notes 
(https://issues.jboss.org/secure/ReleaseNote.jspa?projectId=12310799&version=12320762).
Visit our downloads section (http://www.jboss.org/infinispan/downloads) 
to find the latest release, and if you have any questions please check 
our forums (http://www.jboss.org/infinispan/forums), our mailing lists 
(https://lists.jboss.org/mailman/listinfo/infinispan-dev) or ping us 
directly on IRC.


Thanks to everyone for their involvement and contribution!

Cheers,
Adrian
___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev

Re: [infinispan-dev] ProtoStream and ease of use

2013-07-16 Thread Adrian Nistor
By 'that' you mean adding @PSType to each and every field? That works but 
is verbose and generally a PITA.

On 07/16/2013 07:06 PM, Mircea Markus wrote:
 On 15 Jul 2013, at 18:29, Adrian Nistor anis...@redhat.com wrote:

 BaseMessage is not mandatory. It's there to 'help' you implement the Message 
 interface, which is optional anyway and trivially simple to implement; it's 
 only a holder for the bag of unknown fields. Ignore it and you lose 
 support for maintaining unknown fields but otherwise everything works fine.

 The class field ordering issue is problematic. Besides that, schema 
 evolution might lead to removed fields so we must be able to skip a range of 
 ids easily. I think the sane (but verbose) way to solve this is to make 
 @PSType mandatory for all fields to be marshalled, as you suggested, and 
 PSType should have an 'index' or 'id' parameter. But that's mostly goodbye 
 convention then.
 why wouldn't that work?

 Let's also note that we're striving to be cross-language here, and on the 
 server side we won't have the user's domain classes available on the classpath 
 (because they might not even be written in Java). So the annotations are not 
 going to help there. We would still need the protobuf file, which could be 
 generated based on the annotated Java code as we discussed earlier today, 
 but ... generators kill kittens :( so I would try to avoid that.

 Since we want to be cross-language I think we should go schema-first. So 
 maybe we could have the protobuf hand-written and the Java classes (pojos) 
 minimally annotated with just the full name of the corresponding protobuf 
 message type. From there everything could work automagically, provided that 
 the class field names match the field names in the proto file and the field 
 types match, so no value converter needs to be specified. If they don't, then 
 annotate.

 Does this seem a little better?
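
A minimal sketch of what that schema-first convention could look like. Note 
that the @ProtoMessage annotation, the Account class, and the name-matching 
logic below are all hypothetical, not existing ProtoStream API:

```java
// Convention over configuration: the only annotation is the full name of the
// protobuf message type; fields are matched to the .proto schema by name.
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.reflect.Field;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class ConventionSketch {

    @Retention(RetentionPolicy.RUNTIME)
    @interface ProtoMessage {
        String value(); // full name of the protobuf message type
    }

    @ProtoMessage("sample.Account")
    static class Account {
        int id = 42;
        String description = "checking";
    }

    // Pretend this field list came from parsing the hand-written .proto
    // schema for sample.Account
    static final List<String> SCHEMA_FIELDS = List.of("id", "description");

    // Marshal by convention: pick up each schema field from the
    // same-named Java field; a mismatch is where an annotation would kick in
    static Map<String, Object> marshalByConvention(Object pojo) throws Exception {
        Map<String, Object> out = new LinkedHashMap<>();
        for (String name : SCHEMA_FIELDS) {
            Field f = pojo.getClass().getDeclaredField(name); // names must match
            f.setAccessible(true);
            out.put(name, f.get(pojo));
        }
        return out;
    }

    public static void main(String[] args) throws Exception {
        ProtoMessage msg = Account.class.getAnnotation(ProtoMessage.class);
        System.out.println(msg.value());                        // sample.Account
        System.out.println(marshalByConvention(new Account())); // {id=42, description=checking}
    }
}
```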

 On 07/15/2013 06:57 PM, Emmanuel Bernard wrote:

 ProtoStream is like this

 https://github.com/infinispan/protostream/blob/master/core/src/test/java/org/infinispan/protostream/domain/Account.java
 https://github.com/infinispan/protostream/blob/master/core/src/test/java/org/infinispan/protostream/domain/marshallers/AccountMarshaller.java
 (I believe the BaseMessage superclass of Account is optional, not sure).

 A convention based approach would be like this

  package org.infinispan.protostream.domain;

  import org.infinispan.protostream.BaseMessage;

  /**
   * @author ebern...@redhat.com
   */
  public class Account {

 private int id;
 private String description;

 public int getId() {
return id;
 }

 public void setId(int id) {
this.id = id;
 }

 public String getDescription() {
return description;
 }

 public void setDescription(String description) {
this.description = description;
 }

 @Override
 public String toString() {
 return "Account{" +
   "id=" + id +
   ", description='" + description + '\'' +
   ", unknownFieldSet='" + unknownFieldSet + '\'' +
   '}';
 }
  }

 Or let's imagine that we need to make id use a specific protobuf type

  package org.infinispan.protostream.domain;

  import org.infinispan.protostream.BaseMessage;

  /**
   * @author ebern...@redhat.com
   */
  public class Account {

  @PSType("UINT32")
 private int id;
 private String description;

 public int getId() {
return id;
 }

 public void setId(int id) {
this.id = id;
 }

 public String getDescription() {
return description;
 }

 public void setDescription(String description) {
this.description = description;
 }

 @Override
 public String toString() {
 return "Account{" +
   "id=" + id +
   ", description='" + description + '\'' +
   ", unknownFieldSet='" + unknownFieldSet + '\'' +
   '}';
 }
  }

 Note that a concern is that field ordering (in the bytecode) is not 
 guaranteed across VMs and compilation, and I believe that is an important 
 factor for ProtoBuf. So somehow we would need a way to express field indexes, 
 which would make the annotation approach more verbose.

 On Mon 2013-07-15 16:04, Manik Surtani wrote:
 I'm sorry I missed this.  Is there an example of each API somewhere?

 On 15 Jul 2013, at 14:01, Emmanuel Bernard emman...@hibernate.org wrote:

 Mircea, Adrian and I had an IRC chat on ProtoStream and ProtoStuff.

 check out
 http://transcripts.jboss.org/channel/irc.freenode.org/%23infinispan/2013/%23infinispan.2013-07-15.log.html
 starting at 11:00 and finishing at 12:30

 A short summary of what has been discussed:

 - ProtoStream is a good cross-platform solution

Re: [infinispan-dev] protostream library

2013-07-16 Thread Adrian Nistor
Not yet. I still need to write some docs/javadocs for it before it can 
be safely consumed.

On 07/15/2013 06:01 PM, Manik Surtani wrote:
 Blogged about it?  Would be good to see if there is any interest from other 
 projects.  They may well help you develop/maintain it.


 On 15 Jul 2013, at 15:56, Adrian Nistor anis...@redhat.com wrote:

 Hi devs,

 the protostream library that was mentioned in previous emails on this
 list regarding protobuf as a marshaling protocol for remote query now
 has a home, here: https://github.com/infinispan/protostream.

 It is by no means closed and complete, although it already works fine
 for current purposes. The approach is still discussed and you are
 welcome to have a look at it and join the discussion.

 Adrian
 ___
 infinispan-dev mailing list
 infinispan-dev@lists.jboss.org
 https://lists.jboss.org/mailman/listinfo/infinispan-dev
 --
 Manik Surtani
 ma...@jboss.org
 twitter.com/maniksurtani

 Platform Architect, JBoss Data Grid
 http://red.ht/data-grid


 ___
 infinispan-dev mailing list
 infinispan-dev@lists.jboss.org
 https://lists.jboss.org/mailman/listinfo/infinispan-dev

___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev


Re: [infinispan-dev] ProtoStream and ease of use

2013-07-16 Thread Adrian Nistor
 not have annotations or anything directly 
similar is not well constructed. If they do not have it, it's because the 
culture of that language does not need it, and we could commit the sin of 
creating a C++ API that does not follow the spirit of the language. We 
should probably strive for conceptual similarity rather than 100% 
syntax similarity in our API across languages.

 Emmanuel

 On Mon 2013-07-15 20:29, Adrian Nistor wrote:
 BaseMessage is not mandatory. It's there to 'help' you implement the Message 
 interface, which is optional anyway and trivially simple to implement; it's 
 only a holder for the bag of unknown fields. Ignore it and you lose 
 support for maintaining unknown fields but otherwise everything works fine.

 The class field ordering issue is problematic. Besides that, schema 
 evolution might lead to removed fields so we must be able to skip a range 
 of ids easily. I think the sane (but verbose) way to solve this is to make 
 @PSType mandatory for all fields to be marshalled, as you suggested, and 
 PSType should have an 'index' or 'id' parameter. But that's mostly goodbye 
 convention then.

 Let's also note that we're striving to be cross-language here, and on the 
 server side we won't have the user's domain classes available on the classpath 
 (because they might not even be written in Java). So the annotations are 
 not going to help there. We would still need the protobuf file, which could 
 be generated based on the annotated Java code as we discussed earlier 
 today, but ... generators kill kittens :( so I would try to avoid that.

 Since we want to be cross-language I think we should go schema-first. So 
 maybe we could have the protobuf hand-written and the Java classes (pojos) 
 minimally annotated with just the full name of the corresponding protobuf 
 message type. From there everything could work automagically, provided that 
 the class field names match the field names in the proto file and the field 
 types match, so no value converter needs to be specified. If they don't, 
 then annotate.

 Does this seem a little better?

 On 07/15/2013 06:57 PM, Emmanuel Bernard wrote:

 ProtoStream is like this

 https://github.com/infinispan/protostream/blob/master/core/src/test/java/org/infinispan/protostream/domain/Account.java
 https://github.com/infinispan/protostream/blob/master/core/src/test/java/org/infinispan/protostream/domain/marshallers/AccountMarshaller.java
 (I believe the BaseMessage superclass of Account is optional, not sure).

 A convention based approach would be like this

 package org.infinispan.protostream.domain;
 import org.infinispan.protostream.BaseMessage;
 /**
  * @author ebern...@redhat.com
  */
 public class Account {
private int id;
private String description;
public int getId() {
   return id;
}
public void setId(int id) {
   this.id = id;
}
public String getDescription() {
   return description;
}
public void setDescription(String description) {
   this.description = description;
}
@Override
public String toString() {
    return "Account{" +
      "id=" + id +
      ", description='" + description + '\'' +
      ", unknownFieldSet='" + unknownFieldSet + '\'' +
      '}';
}
 }

 Or let's imagine that we need to make id use a specific protobuf type

 package org.infinispan.protostream.domain;
 import org.infinispan.protostream.BaseMessage;
 /**
  * @author ebern...@redhat.com
  */
 public class Account {
 @PSType("UINT32")
private int id;
private String description;
public int getId() {
   return id;
}
public void setId(int id) {
   this.id = id;
}
public String getDescription() {
   return description;
}
public void setDescription(String description) {
   this.description = description;
}
@Override
public String toString() {
    return "Account{" +
      "id=" + id +
      ", description='" + description + '\'' +
      ", unknownFieldSet='" + unknownFieldSet + '\'' +
      '}';
}
 }

 Note that a concern is that field ordering (in the bytecode) is not 
 guaranteed across VMs and compilation, and I believe that is an important 
 factor for ProtoBuf. So somehow we would need a way to express field 
 indexes, which would make the annotation approach more verbose.

 On Mon 2013-07-15 16:04, Manik Surtani wrote:
 I'm sorry I missed this.  Is there an example of each API somewhere?

 On 15 Jul 2013, at 14:01, Emmanuel Bernard emman...@hibernate.org wrote:

 Mircea, Adrian and I had an IRC chat on ProtoStream and ProtoStuff.

 check out
 http://transcripts.jboss.org/channel/irc.freenode.org/%23infinispan/2013/%23infinispan.2013-07-15.log.html
 starting at 11:00

  1   2   >