Re: [infinispan-dev] AdvancedCache.put with Metadata parameter

2013-04-08 Thread Tristan Tarrant


On 04/08/2013 06:51 PM, Galder Zamarreño wrote:
> ^ That's certainly an option, but it's gotta be extensible (and 
> retrievable), so that servers can build on top of it. For example, 
> the REST server might wanna add MIME info on top of it. They've got to 
> be able to extend the concrete Metadata class, so that it can be passed 
> in (and of course, be able to retrieve it back), and this is more 
> awkward with concrete classes as opposed to interfaces.
Actually I'd like MIME info for HotRod too (or maybe just a way to 
deduce it from the data type).

Tristan
___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev


Re: [infinispan-dev] AdvancedCache.put with Metadata parameter

2013-04-08 Thread Galder Zamarreño

On Apr 8, 2013, at 4:09 PM, Manik Surtani  wrote:

> Tombstones as well as external versioning - something Hibernate 2LC has 
> needed for a while (and Max doesn't ever stop bugging me about!)
> 
> Re: the serialisability, how about this: why make Metadata an interface?  Why 
> not a concrete class, with a fixed set of attributes (lifespan, maxIdle, 
> Version (interface), etc.)?  Then we ship an externalizer for this Metadata 
> class.

^ That's certainly an option, but it's gotta be extensible (and retrievable), 
so that servers can build on top of it. For example, the REST server might wanna 
add MIME info on top of it. They've got to be able to extend the concrete 
Metadata class, so that it can be passed in (and of course, be able to retrieve 
it back), and this is more awkward with concrete classes as opposed to interfaces.
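For illustration, a minimal sketch of the interface-based shape argued for
here -- all names below are hypothetical, not the actual API under discussion:

import org.infinispan.container.versioning.EntryVersion;

// Hypothetical: an extensible Metadata contract that server modules can
// build on, e.g. the REST server adding MIME information.
public interface Metadata {
   long lifespan();
   long maxIdle();
   EntryVersion version();
}

class RestMetadata implements Metadata {
   private final long lifespan;
   private final long maxIdle;
   private final EntryVersion version;
   private final String mimeType; // REST-specific extension

   RestMetadata(long lifespan, long maxIdle, EntryVersion version, String mimeType) {
      this.lifespan = lifespan;
      this.maxIdle = maxIdle;
      this.version = version;
      this.mimeType = mimeType;
   }

   @Override public long lifespan() { return lifespan; }
   @Override public long maxIdle() { return maxIdle; }
   @Override public EntryVersion version() { return version; }
   String mimeType() { return mimeType; }
}

The server would pass such an instance in via the proposed put(key, value,
metadata) and read the same type back from the entry, MIME info included.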

> 
> - M
> 
> On 8 Apr 2013, at 13:52, Sanne Grinovero  wrote:
> 
>> Got it, thanks!
>> +1 especially as it helps bring tombstones, an urgent feature IMHO.
>> 
>> Sanne
>> 
>> On 8 April 2013 13:11, Galder Zamarreño  wrote:
>>> 
>>> On Apr 8, 2013, at 1:46 PM, Sanne Grinovero  wrote:
>>> 
 I fail to understand the purpose of the feature then. What prevents me
 from using the existing code today, just storing some extra fields in my
 custom values?
>>> 
>>> ^ Nothing, this is doable.
>>> 
 What do we get by adding this code?
>>> 
>>> ^ You avoid the need for the wrapper since we already have a wrapper 
>>> internally, which is the ICE. ICEs can already keep versions around, so why 
>>> do I need a wrapper class that stores a version for the Hot Rod server?
>>> 
>>> Down the line, we could decide to leave the metadata around to better 
>>> support use cases like this:
>>> https://issues.jboss.org/browse/ISPN-506
>>> https://community.jboss.org/wiki/VersioningDesignDocument - The tombstones 
>>> that Max refers to could potentially be tombstone ICEs with only metadata 
>>> info.
>>> 
>>> Cheers,
>>> 
 
 Sanne
 
 On 8 April 2013 12:40, Galder Zamarreño  wrote:
> 
> On Apr 8, 2013, at 1:26 PM, Sanne Grinovero  wrote:
> 
>> On 8 April 2013 12:06, Galder Zamarreño  wrote:
>>> 
>>> On Apr 8, 2013, at 12:56 PM, Sanne Grinovero  
>>> wrote:
>>> 
 
 
 
 On 8 April 2013 11:44, Galder Zamarreño  wrote:
 
 On Apr 8, 2013, at 12:35 PM, Galder Zamarreño  
 wrote:
 
> 
> On Apr 8, 2013, at 11:17 AM, Manik Surtani  
> wrote:
> 
>> All sounds very good. One important thing to consider is that the 
>> reference to Metadata passed in by the client app will be tied to 
>> the ICE for the entire lifespan of the ICE.  You'll need to think 
>> about a defensive copy or some other form of making the Metadata 
>> immutable (by the user application, at least) the moment it is 
>> passed in.
> 
> ^ Excellent point, it could be a nightmare if users could change the 
> metadata referenced by the ICE at will. I'll have a think on how to 
> best achieve this.
 
 ^ The metadata is gonna have to be marshalled somehow to ship to other 
 nodes, so that could be a way to achieve it, by enforcing this 
 somehow. When the cache receives it, it can marshall/unmarshall it 
 to make a copy.
 
 One way would be to make Metadata extend Serializable, but not keen on 
 that. Another would be to somehow force the interface to define the 
 Externalizer to use (i.e. an interface method like getExternalizer()), 
 but that's awkward when it comes to unmarshalling… what about forcing 
 the Metadata object to be provided with a @SerializeWith annotation?
 
 Why is getExternalizer() awkward for unmarshalling?
>>> 
>>> ^ Because you don't have an instance yet, so what's the Externalizer 
>>> for it? IOW, there's not much point in doing that; simply register it 
>>> however you prefer:
>>> https://docs.jboss.org/author/display/ISPN/Plugging+Infinispan+With+User+Defined+Externalizers
>> 
>> That's what I would expect.
>> 
>>> 
 I would expect you to have the marshaller already known during 
 deserialization.
>>> 
>>> You would, as long as you follow the instructions in 
>>> https://docs.jboss.org/author/display/ISPN/Plugging+Infinispan+With+User+Defined+Externalizers
>>> 
 Agreed that extending Serializable is not a good idea.
 
 Are you thinking about the impact on CacheStore(s) and state transfer?
>>> 
>>> ^ What about it in particular?
>>> 
 Eviction of no longer used metadata ?
>>> 
>>> ^ Since the metadata is part of the entry, it'd initially go when the 
>>> entry is evicted. We might wanna leave it around in some cases… but 
>>> it'd be for other use cases.
>> 
>> I thought the p

Re: [infinispan-dev] query repl timeout

2013-04-08 Thread Ales Justin
>> Nah, that would also show with standalone testing then, and it doesn't.
> 
> Infinispan generates a gazillion more logging opportunities when
> clustering is enabled.. if it's enabled but hidden, it would still be
> different than standalone.

Ah, ok.
But from Marko's observations, this is not the case.

-Ales

___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev


Re: [infinispan-dev] query repl timeout

2013-04-08 Thread Sanne Grinovero
On 8 April 2013 15:52, Ales Justin  wrote:
> Nah, that would also show with standalone testing then, and it doesn't.

Infinispan generates a gazillion more logging opportunities when
clustering is enabled.. if it's enabled but hidden, it would still be
different than standalone.

Sanne
___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev


Re: [infinispan-dev] query repl timeout

2013-04-08 Thread Ales Justin
Nah, that would also show with standalone testing then, and it doesn't.

Marko found out that DataNucleus is pushing DEBUG logs in, dunno yet why.

But, imo, that still shouldn't totally cripple the app's performance, let alone 
kill it in the end.

e.g. I can imagine users trying to debug issues with FINE, in a cluster -- 
that's the whole point of CD.
And atm, this is not possible.

-Ales

On Apr 8, 2013, at 4:43 PM, Sanne Grinovero  wrote:

> Is it possible you have the loggers set to a high level of detail but
> only filter the results in the appender configuration?
> In such a setup you would not write much into the logs but you would
> still have a huge performance penalty as we would still be generating
> the intermediate strings, and Infinispan can be very verbose.
> 
> Sanne
> 
> On 8 April 2013 15:30, Ales Justin  wrote:
 Hmmm, we now disabled CapeDwarf logging, and it runs a lot better.
>>> what does a lot better mean? Does it run as expected?
>> 
>> Querying data on any node returns in ~30ms.
>> And it returns correct data. ;-)
>> 
 We are logging everything that goes on,
 and then it's up to the user to filter it later -- this is how GAE does it.
 
 The weird thing is that there shouldn't be any fine log traffic, INFO+ 
 level only.
 Marko is looking into this.
 
 But sometimes users will want to have FINE log level,
 meaning a lot more traffic will go into the cache.
 And then it still shouldn't kill the app -- as it does now.
>>> Would be good to see the amount of time spent in logging.
>> 
>> I doubt it's a lot.
>> 
>> Imo, it's the amount of stuff that gets put into cache that's the problem.
>> And we need to index it all as well -- for GAE log queries.
>> 
>> But it's still not an enormous amount of data.
>> e.g. I'm yet to try GridFS (Ispn's GFS), in a real cluster, for our GAE 
>> Blobstore support ...
>> 
>> -Ales
>> 
>> 
>> ___
>> infinispan-dev mailing list
>> infinispan-dev@lists.jboss.org
>> https://lists.jboss.org/mailman/listinfo/infinispan-dev
> ___
> infinispan-dev mailing list
> infinispan-dev@lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/infinispan-dev


___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev


Re: [infinispan-dev] query repl timeout

2013-04-08 Thread Sanne Grinovero
Is it possible you have the loggers set to a high level of detail but
only filter the results in the appender configuration?
In such a setup you would not write much into the logs but you would
still have a huge performance penalty as we would still be generating
the intermediate strings, and Infinispan can be very verbose.
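A tiny java.util.logging sketch of that cost, since CapeDwarf deals in
FINE/INFO levels (hypothetical class, not Infinispan or CapeDwarf code):

import java.util.logging.Level;
import java.util.logging.Logger;

class LoggingCost {
   private static final Logger log = Logger.getLogger("capedwarf.demo");

   void onWrite(Object key, Object value) {
      // If FINE is only filtered out in the handler/appender, this still
      // pays for the string concatenation on every single call:
      log.fine("stored " + key + " -> " + value);

      // Guarding on the logger level skips building the message entirely:
      if (log.isLoggable(Level.FINE)) {
         log.fine("stored " + key + " -> " + value);
      }
   }
}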

Sanne

On 8 April 2013 15:30, Ales Justin  wrote:
>>> Hmmm, we now disabled CapeDwarf logging, and it runs a lot better.
>> what does a lot better mean? Does it run as expected?
>
> Querying data on any node returns in ~30ms.
> And it returns correct data. ;-)
>
>>> We are logging everything that goes on,
>>> and then it's up to the user to filter it later -- this is how GAE does it.
>>>
>>> The weird thing is that there shouldn't be any fine log traffic, INFO+ 
>>> level only.
>>> Marko is looking into this.
>>>
>>> But sometimes users will want to have FINE log level,
>>> meaning a lot more traffic will go into the cache.
>>> And then it still shouldn't kill the app -- as it does now.
>> Would be good to see the amount of time spent in logging.
>
> I doubt it's a lot.
>
> Imo, it's the amount of stuff that gets put into cache that's the problem.
> And we need to index it all as well -- for GAE log queries.
>
> But it's still not an enormous amount of data.
> e.g. I'm yet to try GridFS (Ispn's GFS), in a real cluster, for our GAE 
> Blobstore support ...
>
> -Ales
>
>
> ___
> infinispan-dev mailing list
> infinispan-dev@lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/infinispan-dev
___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev


Re: [infinispan-dev] query repl timeout

2013-04-08 Thread Ales Justin
>> Hmmm, we now disabled CapeDwarf logging, and it runs a lot better.
> what does a lot better mean? Does it run as expected?

Querying data on any node returns in ~30ms.
And it returns correct data. ;-)

>> We are logging everything that goes on,
>> and then it's up to the user to filter it later -- this is how GAE does it.
>> 
>> The weird thing is that there shouldn't be any fine log traffic, INFO+ level 
>> only.
>> Marko is looking into this.
>> 
>> But sometimes users will want to have FINE log level,
>> meaning a lot more traffic will go into the cache.
>> And then it still shouldn't kill the app -- as it does now.
> Would be good to see the amount of time spent in logging.

I doubt it's a lot.

Imo, it's the amount of stuff that gets put into cache that's the problem.
And we need to index it all as well -- for GAE log queries.

But it's still not an enormous amount of data.
e.g. I'm yet to try GridFS (Ispn's GFS), in a real cluster, for our GAE 
Blobstore support ... 

-Ales


___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev


Re: [infinispan-dev] query repl timeout

2013-04-08 Thread Manik Surtani

On 8 Apr 2013, at 14:51, Mircea Markus  wrote:

> 
> On 8 Apr 2013, at 14:38, Ales Justin wrote:
> 
>> Hmmm, we now disabled CapeDwarf logging, and it runs a lot better.
> what does a lot better mean? Does it run as expected?

Yes; better as in, still incorrect, but less so?  ;)

--
Manik Surtani
ma...@jboss.org
twitter.com/maniksurtani

Platform Architect, JBoss Data Grid
http://red.ht/data-grid

___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev


Re: [infinispan-dev] AdvancedCache.put with Metadata parameter

2013-04-08 Thread Manik Surtani
Tombstones as well as external versioning - something Hibernate 2LC has needed 
for a while (and Max doesn't ever stop bugging me about!)

Re: the serialisability, how about this: why make Metadata an interface?  Why 
not a concrete class, with a fixed set of attributes (lifespan, maxIdle, 
Version (interface), etc.)?  Then we ship an externalizer for this Metadata 
class.
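For comparison, a rough sketch of the concrete-class alternative proposed
above, assuming hypothetical names (EntryVersion is Infinispan's existing
version interface; the rest is illustrative, not the actual API):

import java.io.IOException;
import java.io.ObjectInput;
import java.io.ObjectOutput;
import org.infinispan.container.versioning.EntryVersion;
import org.infinispan.marshall.Externalizer;

public final class Metadata {
   final long lifespan;
   final long maxIdle;
   final EntryVersion version; // Version stays an interface

   public Metadata(long lifespan, long maxIdle, EntryVersion version) {
      this.lifespan = lifespan;
      this.maxIdle = maxIdle;
      this.version = version;
   }

   // The externalizer Infinispan itself would ship for this fixed shape.
   public static final class MetadataExternalizer implements Externalizer<Metadata> {
      @Override
      public void writeObject(ObjectOutput output, Metadata m) throws IOException {
         output.writeLong(m.lifespan);
         output.writeLong(m.maxIdle);
         output.writeObject(m.version);
      }

      @Override
      public Metadata readObject(ObjectInput input)
            throws IOException, ClassNotFoundException {
         return new Metadata(input.readLong(), input.readLong(),
               (EntryVersion) input.readObject());
      }
   }
}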

- M

On 8 Apr 2013, at 13:52, Sanne Grinovero  wrote:

> Got it, thanks!
> +1 especially as it helps bring tombstones, an urgent feature IMHO.
> 
> Sanne
> 
> On 8 April 2013 13:11, Galder Zamarreño  wrote:
>> 
>> On Apr 8, 2013, at 1:46 PM, Sanne Grinovero  wrote:
>> 
>>> I fail to understand the purpose of the feature then. What prevents me
>>> from using the existing code today, just storing some extra fields in my
>>> custom values?
>> 
>> ^ Nothing, this is doable.
>> 
>>> What do we get by adding this code?
>> 
>> ^ You avoid the need for the wrapper since we already have a wrapper 
>> internally, which is the ICE. ICEs can already keep versions around, so why 
>> do I need a wrapper class that stores a version for the Hot Rod server?
>> 
>> Down the line, we could decide to leave the metadata around to better 
>> support use cases like this:
>> https://issues.jboss.org/browse/ISPN-506
>> https://community.jboss.org/wiki/VersioningDesignDocument - The tombstones 
>> that Max refers to could potentially be tombstone ICEs with only metadata 
>> info.
>> 
>> Cheers,
>> 
>>> 
>>> Sanne
>>> 
>>> On 8 April 2013 12:40, Galder Zamarreño  wrote:
 
 On Apr 8, 2013, at 1:26 PM, Sanne Grinovero  wrote:
 
> On 8 April 2013 12:06, Galder Zamarreño  wrote:
>> 
>> On Apr 8, 2013, at 12:56 PM, Sanne Grinovero  
>> wrote:
>> 
>>> 
>>> 
>>> 
>>> On 8 April 2013 11:44, Galder Zamarreño  wrote:
>>> 
>>> On Apr 8, 2013, at 12:35 PM, Galder Zamarreño  wrote:
>>> 
 
 On Apr 8, 2013, at 11:17 AM, Manik Surtani  wrote:
 
> All sounds very good. One important thing to consider is that the 
> reference to Metadata passed in by the client app will be tied to the 
> ICE for the entire lifespan of the ICE.  You'll need to think about a 
> defensive copy or some other form of making the Metadata immutable 
> (by the user application, at least) the moment it is passed in.
 
 ^ Excellent point, it could be a nightmare if users could change the 
 metadata referenced by the ICE at will. I'll have a think on how to 
 best achieve this.
>>> 
>>> ^ The metadata is gonna have to be marshalled somehow to ship to other 
>>> nodes, so that could be a way to achieve it, by enforcing this somehow. 
>>> When the cache receives it, it can marshall/unmarshall it to make a 
>>> copy.
>>> 
>>> One way would be to make Metadata extend Serializable, but not keen on 
>>> that. Another would be to somehow force the interface to define the 
>>> Externalizer to use (i.e. an interface method like getExternalizer()), 
>>> but that's awkward when it comes to unmarshalling… what about forcing 
>>> the Metadata object to be provided with a @SerializeWith annotation?
>>> 
>>> Why is getExternalizer() awkward for unmarshalling?
>> 
>> ^ Because you don't have an instance yet, so what's the Externalizer for 
>> it? IOW, there's not much point in doing that; simply register it 
>> however you prefer:
>> https://docs.jboss.org/author/display/ISPN/Plugging+Infinispan+With+User+Defined+Externalizers
> 
> That's what I would expect.
> 
>> 
>>> I would expect you to have the marshaller already known during 
>>> deserialization.
>> 
>> You would, as long as you follow the instructions in 
>> https://docs.jboss.org/author/display/ISPN/Plugging+Infinispan+With+User+Defined+Externalizers
>> 
>>> Agreed that extending Serializable is not a good idea.
>>> 
>>> Are you thinking about the impact on CacheStore(s) and state transfer?
>> 
>> ^ What about it in particular?
>> 
>>> Eviction of no longer used metadata ?
>> 
>> ^ Since the metadata is part of the entry, it'd initially go when the 
>> entry is evicted. We might wanna leave it around in some cases… but it'd 
>> be for other use cases.
> 
> I thought the plan was to have entries refer to the metadata, but that
> different entries sharing the same metadata would point to the same
> instance.
 
 ^ Could be, but most likely not.
 
> So this metadata needs to be stored separately in the CacheStore,
> preloaded as appropriate, transferred during state transfer,
> passivated when convenient and cleaned up when no longer referred to.
 
 ^ Well, it's part of the internal cache entry, so it'd be treated just 
 like ICE.
 
> Am I wrong? Seems you plan to store a copy of the metadata within each 
> ICE

Re: [infinispan-dev] query repl timeout

2013-04-08 Thread Mircea Markus

On 8 Apr 2013, at 14:38, Ales Justin wrote:

> Hmmm, we now disabled CapeDwarf logging, and it runs a lot better.
what does a lot better mean? Does it run as expected?
> 
> We are logging everything that goes on,
> and then it's up to the user to filter it later -- this is how GAE does it.
> 
> The weird thing is that there shouldn't be any fine log traffic, INFO+ level 
> only.
> Marko is looking into this.
> 
> But sometimes users will want to have FINE log level,
> meaning a lot more traffic will go into the cache.
> And then it still shouldn't kill the app -- as it does now.
Would be good to see the amount of time spent in logging.
> 
> -Ales
> 
> On Apr 8, 2013, at 3:12 PM, Ales Justin  wrote:
> 
>> Steps to reproduce:
>> 
>> (1) checkout JBossAS 7.2.0.Final tag --> JBOSS_HOME
>> 
>> (2) build CapeDwarf Shared
>> 
>> https://github.com/capedwarf/capedwarf-shared
>> 
>> (3) build CapeDwarf Blue
>> 
>> https://github.com/capedwarf/capedwarf-blue
>> 
>> (4) build CapeDwarf AS
>> 
>> https://github.com/capedwarf/capedwarf-jboss-as
>> 
>> mvn clean install -Djboss.dir= -Pupdate-as
>> 
>> This will install the CapeDwarf subsystem into the previously checked-out AS 7.2.0.Final
>> 
>> (5) grab GAE 1.7.6 SDK
>> 
>> http://googleappengine.googlecode.com/files/appengine-java-sdk-1.7.6.zip
>> 
>> (6) Build GAE demos/helloorm2
>> 
>> ant
>> 
>> cd war/
>> 
>> zip -r ROOT.war .
>> 
>> This will zip the demo app as ROOT.war,
>> which you then deploy to AS.
>> 
>> (7) start CapeDwarf
>> 
>> JBOSS_HOME/bin
>> 
>> ./standalone.sh -c standalone-capedwarf.xml -b  
>> -Djboss.node.name=some_name
>> 
>> (8) deploy the app / ROOT.war
>> 
>> ---
>> 
>> Deploy this on a few nodes, go to the browser: http://,
>> add a few flights and see how it works.
>> 
>> It now runs a bit better, after we swapped mstruk's laptop for luksa's.
>> But we still get replication locks ...
>> 
>> Also, the problem is that a query on the indexing slave takes waaay too long.
>> 
>> Anyway, you'll see. ;-)
>> 
>> Ping me for any issues.
>> 
>> -Ales
>> 
> 

Cheers,
-- 
Mircea Markus
Infinispan lead (www.infinispan.org)





___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev


Re: [infinispan-dev] query repl timeout

2013-04-08 Thread Ales Justin
Hmmm, we now disabled CapeDwarf logging, and it runs a lot better.

We are logging everything that goes on,
and then it's up to the user to filter it later -- this is how GAE does it.

The weird thing is that there shouldn't be any fine log traffic, INFO+ level 
only.
Marko is looking into this.

But sometimes users will want to have FINE log level,
meaning a lot more traffic will go into the cache.
And then it still shouldn't kill the app -- as it does now.

-Ales

On Apr 8, 2013, at 3:12 PM, Ales Justin  wrote:

> Steps to reproduce:
> 
> (1) checkout JBossAS 7.2.0.Final tag --> JBOSS_HOME
> 
> (2) build CapeDwarf Shared
> 
> https://github.com/capedwarf/capedwarf-shared
> 
> (3) build CapeDwarf Blue
> 
> https://github.com/capedwarf/capedwarf-blue
> 
> (4) build CapeDwarf AS
> 
> https://github.com/capedwarf/capedwarf-jboss-as
> 
> mvn clean install -Djboss.dir= -Pupdate-as
> 
> This will install the CapeDwarf subsystem into the previously checked-out AS 7.2.0.Final
> 
> (5) grab GAE 1.7.6 SDK
> 
> http://googleappengine.googlecode.com/files/appengine-java-sdk-1.7.6.zip
> 
> (6) Build GAE demos/helloorm2
> 
> ant
> 
> cd war/
> 
> zip -r ROOT.war .
> 
> This will zip the demo app as ROOT.war,
> which you then deploy to AS.
> 
> (7) start CapeDwarf
> 
> JBOSS_HOME/bin
> 
> ./standalone.sh -c standalone-capedwarf.xml -b  
> -Djboss.node.name=some_name
> 
> (8) deploy the app / ROOT.war
> 
> ---
> 
> Deploy this on a few nodes, go to the browser: http://,
> add a few flights and see how it works.
> 
> It now runs a bit better, after we swapped mstruk's laptop for luksa's.
> But we still get replication locks ...
> 
> Also, the problem is that a query on the indexing slave takes waaay too long.
> 
> Anyway, you'll see. ;-)
> 
> Ping me for any issues.
> 
> -Ales
> 

___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev

Re: [infinispan-dev] query repl timeout

2013-04-08 Thread Ales Justin
Steps to reproduce:

(1) checkout JBossAS 7.2.0.Final tag --> JBOSS_HOME

(2) build CapeDwarf Shared

https://github.com/capedwarf/capedwarf-shared

(3) build CapeDwarf Blue

https://github.com/capedwarf/capedwarf-blue

(4) build CapeDwarf AS

https://github.com/capedwarf/capedwarf-jboss-as

mvn clean install -Djboss.dir= -Pupdate-as

This will install the CapeDwarf subsystem into the previously checked-out AS 7.2.0.Final

(5) grab GAE 1.7.6 SDK

http://googleappengine.googlecode.com/files/appengine-java-sdk-1.7.6.zip

(6) Build GAE demos/helloorm2

ant

cd war/

zip -r ROOT.war .

This will zip the demo app as ROOT.war,
which you then deploy to AS.

(7) start CapeDwarf

JBOSS_HOME/bin

./standalone.sh -c standalone-capedwarf.xml -b  -Djboss.node.name=some_name

(8) deploy the app / ROOT.war

---

Deploy this on a few nodes, go to the browser: http://,
add a few flights and see how it works.

It now runs a bit better, after we swapped mstruk's laptop for luksa's.
But we still get replication locks ...

Also, the problem is that a query on the indexing slave takes waaay too long.

Anyway, you'll see. ;-)

Ping me for any issues.

-Ales

___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev

Re: [infinispan-dev] AdvancedCache.put with Metadata parameter

2013-04-08 Thread Sanne Grinovero
Got it, thanks!
+1 especially as it helps bring tombstones, an urgent feature IMHO.

Sanne

On 8 April 2013 13:11, Galder Zamarreño  wrote:
>
> On Apr 8, 2013, at 1:46 PM, Sanne Grinovero  wrote:
>
>> I fail to understand the purpose of the feature then. What prevents me
>> from using the existing code today, just storing some extra fields in my
>> custom values?
>
> ^ Nothing, this is doable.
>
>> What do we get by adding this code?
>
> ^ You avoid the need for the wrapper since we already have a wrapper 
> internally, which is the ICE. ICEs can already keep versions around, so why 
> do I need a wrapper class that stores a version for the Hot Rod server?
>
> Down the line, we could decide to leave the metadata around to better support 
> use cases like this:
> https://issues.jboss.org/browse/ISPN-506
> https://community.jboss.org/wiki/VersioningDesignDocument - The tombstones 
> that Max refers to could potentially be tombstone ICEs with only metadata 
> info.
>
> Cheers,
>
>>
>> Sanne
>>
>> On 8 April 2013 12:40, Galder Zamarreño  wrote:
>>>
>>> On Apr 8, 2013, at 1:26 PM, Sanne Grinovero  wrote:
>>>
 On 8 April 2013 12:06, Galder Zamarreño  wrote:
>
> On Apr 8, 2013, at 12:56 PM, Sanne Grinovero  wrote:
>
>>
>>
>>
>> On 8 April 2013 11:44, Galder Zamarreño  wrote:
>>
>> On Apr 8, 2013, at 12:35 PM, Galder Zamarreño  wrote:
>>
>>>
>>> On Apr 8, 2013, at 11:17 AM, Manik Surtani  wrote:
>>>
 All sounds very good. One important thing to consider is that the 
 reference to Metadata passed in by the client app will be tied to the 
 ICE for the entire lifespan of the ICE.  You'll need to think about a 
 defensive copy or some other form of making the Metadata immutable (by 
 the user application, at least) the moment it is passed in.
>>>
>>> ^ Excellent point, it could be a nightmare if users could change the 
>>> metadata referenced by the ICE at will. I'll have a think on how to 
>>> best achieve this.
>>
>> ^ The metadata is gonna have to be marshalled somehow to ship to other 
>> nodes, so that could be a way to achieve it, by enforcing this somehow. 
>> When the cache receives it, it can marshall/unmarshall it to make a 
>> copy.
>> 
>> One way would be to make Metadata extend Serializable, but not keen on 
>> that. Another would be to somehow force the interface to define the 
>> Externalizer to use (i.e. an interface method like getExternalizer()), 
>> but that's awkward when it comes to unmarshalling… what about forcing the 
>> Metadata object to be provided with a @SerializeWith annotation?
>>
>> Why is getExternalizer() awkward for unmarshalling?
>
> ^ Because you don't have an instance yet, so what's the Externalizer for 
> it? IOW, there's not much point in doing that; simply register it 
> however you prefer:
> https://docs.jboss.org/author/display/ISPN/Plugging+Infinispan+With+User+Defined+Externalizers

 That's what I would expect.

>
>> I would expect you to have the marshaller already known during 
>> deserialization.
>
> You would, as long as you follow the instructions in 
> https://docs.jboss.org/author/display/ISPN/Plugging+Infinispan+With+User+Defined+Externalizers
>
>> Agreed that extending Serializable is not a good idea.
>>
>> Are you thinking about the impact on CacheStore(s) and state transfer?
>
> ^ What about it in particular?
>
>> Eviction of no longer used metadata ?
>
> ^ Since the metadata is part of the entry, it'd initially go when the 
> entry is evicted. We might wanna leave it around in some cases… but it'd 
> be for other use cases.

 I thought the plan was to have entries refer to the metadata, but that
 different entries sharing the same metadata would point to the same
 instance.
>>>
>>> ^ Could be, but most likely not.
>>>
 So this metadata needs to be stored separately in the CacheStore,
 preloaded as appropriate, transferred during state transfer,
 passivated when convenient and cleaned up when no longer referred to.
>>>
>>> ^ Well, it's part of the internal cache entry, so it'd be treated just like 
>>> ICE.
>>>
 Am I wrong? Seems you plan to store a copy of the metadata within each ICE.
>>>
>>> ^ The idea is to store it alongside right now, but maybe at some point it 
>>> might make sense to leave it around (i.e. for 2LC use case), but this won't 
>>> be done yet.
>>>


>
> I'm also considering separating the serialization/marshalling concerns 
> from the defensive copying concerns. IOW, add a copy() method to the 
> Metadata interface, or have a separate interface for those externally 
> provided objects that require to be defensive copied. IOW, do something 
> like what Scala Case Classes do with their copy() method, but without the 
> issues

Re: [infinispan-dev] [ISPN-1797]MongoDB CacheStore

2013-04-08 Thread Sanne Grinovero
Hi Guillaume,
thanks! Make sure you write a comment on GitHub when you do, as
adding a new commit won't send notifications automatically.

After a quick check I see, however, that you didn't address all my comments;
please recheck the history and don't look just at your last commit:
by pushing the last changes you implicitly hid the comments on the earlier
commits, so you'll have to read them from the history on the pull request:
https://github.com/infinispan/infinispan/pull/1473

Sanne

On 8 April 2013 13:17, Guillaume SCHEIBEL  wrote:
> Hello,
>
> Don't know if anyone saw it (in case of doubt, I'm sending this email). I have
> updated (a few weeks ago) the pull request about the MongoDB cache store.
>
> Let me know what you think about it.
>
> Have a nice day
> Guillaume
>
>
> 2013/1/9 Guillaume SCHEIBEL 
>>
>> Hi everyone,
>>
>> Finally, I made the last (for the moment, actually :) ) touch to the
>> MongoDB cache store; the pull request #1473 has been updated.
>> Hope it's better now; let me know what you think about it.
>>
>> Cheers,
>> Guillaume
>
>
>
> ___
> infinispan-dev mailing list
> infinispan-dev@lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/infinispan-dev
___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev


Re: [infinispan-dev] query repl timeout

2013-04-08 Thread Ales Justin
New error ...

https://gist.github.com/alesj/5336483

-Ales

On Apr 8, 2013, at 2:25 PM, Ales Justin  wrote:

 Ales, is this error happening after a node failure?
>> 
>> No node failure that I'm aware of.
>> 
>> We did get some unexpected NPE in the DataNucleus framework,
>> but, imo, that shouldn't completely kill the app.
>> 
>> We'll re-try.
> 
> Now I cannot even open an initial page ... 
> 
> https://gist.github.com/alesj/5336415
> 
> -Ales
> 

___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev

Re: [infinispan-dev] query repl timeout

2013-04-08 Thread Sanne Grinovero
That's a ReplicationTimeout, I don't think Search has anything to do with it..

On 8 April 2013 13:25, Ales Justin  wrote:
> Ales, is this error happening after a node failure?
>
>
> No node failure that I'm aware of.
>
> We did get some unexpected NPE in the DataNucleus framework,
> but, imo, that shouldn't completely kill the app.
>
> We'll re-try.
>
>
> Now I cannot even open an initial page ...
>
> https://gist.github.com/alesj/5336415
>
> -Ales
>
>
> ___
> infinispan-dev mailing list
> infinispan-dev@lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/infinispan-dev
___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev


Re: [infinispan-dev] query repl timeout

2013-04-08 Thread Ales Justin
>>> Ales, is this error happening after a node failure?
> 
> No node failure that I'm aware of.
> 
> We did get some unexpected NPE in the DataNucleus framework,
> but, imo, that shouldn't completely kill the app.
> 
> We'll re-try.

Now I cannot even open an initial page ... 

https://gist.github.com/alesj/5336415

-Ales

___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev

Re: [infinispan-dev] [ISPN-1797]MongoDB CacheStore

2013-04-08 Thread Guillaume SCHEIBEL
Hello,

Don't know if anyone saw it (in case of doubt, I'm sending this email). I have
updated (a few weeks ago) the pull request about the MongoDB cache store.

Let me know what you think about it.

Have a nice day
Guillaume

2013/1/9 Guillaume SCHEIBEL 

> Hi everyone,
>
> Finally, I made the last (for the moment, actually :) ) touch to the
> MongoDB cache store; the pull request #1473 has been updated.
> Hope it's better now; let me know what you think about it.
>
> Cheers,
> Guillaume
>
___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev

Re: [infinispan-dev] AdvancedCache.put with Metadata parameter

2013-04-08 Thread Galder Zamarreño

On Apr 8, 2013, at 1:46 PM, Sanne Grinovero  wrote:

> I fail to understand the purpose of the feature then. What prevents me
> from using the existing code today, just storing some extra fields in my
> custom values?

^ Nothing, this is doable.

> What do we get by adding this code?

^ You avoid the need for the wrapper since we already have a wrapper internally, 
which is the ICE. ICEs can already keep versions around, so why do I need a 
wrapper class that stores a version for the Hot Rod server?

Down the line, we could decide to leave the metadata around to better support 
use cases like this:
https://issues.jboss.org/browse/ISPN-506
https://community.jboss.org/wiki/VersioningDesignDocument - The tombstones that 
Max refers to could potentially be tombstone ICEs with only metadata info.
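Conceptually, such a tombstone could be as small as this sketch (purely
illustrative, not Infinispan code):

import org.infinispan.container.versioning.EntryVersion;

// A "dead" entry: the value is gone, but the metadata -- notably the
// version -- survives so later writes can be ordered against the removal.
class TombstoneEntry {
   final Object key;
   final EntryVersion version; // the piece of metadata that must survive

   TombstoneEntry(Object key, EntryVersion version) {
      this.key = key;
      this.version = version;
   }

   Object getValue() { return null; } // tombstones carry no value
   boolean isTombstone() { return true; }
}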

Cheers,

> 
> Sanne
> 
> On 8 April 2013 12:40, Galder Zamarreño  wrote:
>> 
>> On Apr 8, 2013, at 1:26 PM, Sanne Grinovero  wrote:
>> 
>>> On 8 April 2013 12:06, Galder Zamarreño  wrote:
 
 On Apr 8, 2013, at 12:56 PM, Sanne Grinovero  wrote:
 
> 
> 
> 
> On 8 April 2013 11:44, Galder Zamarreño  wrote:
> 
> On Apr 8, 2013, at 12:35 PM, Galder Zamarreño  wrote:
> 
>> 
>> On Apr 8, 2013, at 11:17 AM, Manik Surtani  wrote:
>> 
>>> All sounds very good. One important thing to consider is that the 
>>> reference to Metadata passed in by the client app will be tied to the 
>>> ICE for the entire lifespan of the ICE.  You'll need to think about a 
>>> defensive copy or some other form of making the Metadata immutable (by 
>>> the user application, at least) the moment it is passed in.
>> 
>> ^ Excellent point, it could be a nightmare if users could change the 
>> metadata referenced by the ICE at will. I'll have a think on how to best 
>> achieve this.
> 
> ^ The metadata is gonna have to be marshalled somehow to ship to other 
> nodes, so that could be a way to achieve it, by enforcing this somehow. 
> When the cache receives it, it can marshall/unmarshall it to make a copy.
> 
> One way would be to make Metadata extend Serializable, but not keen on 
> that. Another would be to somehow force the interface to define the 
> Externalizer to use (i.e. an interface method like getExternalizer()), 
> but that's awkward when it comes to unmarshalling… what about forcing the 
> Metadata object to be provided with a @SerializeWith annotation?
> 
> Why is getExternalizer() awkward for unmarshalling?
 
 ^ Because you don't have an instance yet, so what's the Externalizer for 
 it? IOW, there's not much point in doing that; simply register it 
 however you prefer:
 https://docs.jboss.org/author/display/ISPN/Plugging+Infinispan+With+User+Defined+Externalizers
>>> 
>>> That's what I would expect.
>>> 
 
> I would expect you to have the marshaller already known during 
> deserialization.
 
 You would, as long as you follow the instructions in 
 https://docs.jboss.org/author/display/ISPN/Plugging+Infinispan+With+User+Defined+Externalizers
 
> Agreed that extending Serializable is not a good idea.
> 
> Are you thinking about the impact on CacheStore(s) and state transfer?
 
 ^ What about it in particular?
 
> Eviction of no longer used metadata ?
 
 ^ Since the metadata is part of the entry, it'd initially go when the 
 entry is evicted. We might wanna leave it around in some cases… but it'd 
 be for other use cases.
>>> 
>>> I thought the plan was to have entries refer to the metadata, but that
>>> different entries sharing the same metadata would point to the same
>>> instance.
>> 
>> ^ Could be, but most likely not.
>> 
>>> So this metadata needs to be stored separately in the CacheStore,
>>> preloaded as appropriate, transferred during state transfer,
>>> passivated when convenient and cleaned up when no longer referred to.
>> 
>> ^ Well, it's part of the internal cache entry, so it'd be treated just like 
>> ICE.
>> 
>>> Am I wrong? Seems you plan to store a copy of the metadata within each ICE.
>> 
>> ^ The idea is to store it alongside right now, but maybe at some point it 
>> might make sense to leave it around (i.e. for 2LC use case), but this won't 
>> be done yet.
>> 
>>> 
>>> 
 
 I'm also considering separating the serialization/marshalling concerns 
 from the defensive copying concerns. IOW, add a copy() method to the 
 Metadata interface, or have a separate interface for those externally 
 provided objects that require to be defensive copied. IOW, do something 
 like what Scala Case Classes do with their copy() method, but without the 
 issues of clone… I need to investigate this further to come up with a nice 
 solution.
 
 One positive side to splitting both concerns is speed. A Metadata 
 implementation might have ways to make a copy of itself which are more 
 eff

Re: [infinispan-dev] query repl timeout

2013-04-08 Thread Mircea Markus
On 8 Apr 2013, at 12:19, Sanne Grinovero wrote:

> I do have a cleaner solution with proper lock cleanup routines, but
> these are based on the CAS operation too.. they are failing stress
> tests so I won't commit them yet.

Do you have a branch with a failing CAS test? I could take a look.
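For context, the contract under stress is plain compare-and-swap; Cache
extends ConcurrentMap, so it boils down to checks like this (illustrative
harness, not the actual failing test):

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

class CasSketch {
   public static void main(String[] args) {
      ConcurrentMap<String, Integer> cache = new ConcurrentHashMap<String, Integer>();
      cache.put("k", 0);
      Integer old = cache.get("k");
      // Under contention, exactly one contender reading the same old
      // value may win this replace; that is what the stress tests verify.
      boolean won = cache.replace("k", old, old + 1);
      System.out.println("won=" + won + ", value=" + cache.get("k"));
   }
}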

Cheers,
-- 
Mircea Markus
Infinispan lead (www.infinispan.org)




___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev


Re: [infinispan-dev] query repl timeout

2013-04-08 Thread Ales Justin
>> Ales, is this error happening after a node failure?

No node failure that I'm aware of.

We did get some unexpected NPE in the DataNucleus framework,
but, imo, that shouldn't completely kill the app.

We'll re-try.

And then also re-try with no locking.

> Or make something clever based on JGroups views
> 
> default.locking_strategy = fully.qualified.custom.Implementation

@Bela, Sanne: how would this look?

As this looks like the best workaround for now -- if this is really the issue.
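For reference, a rough sketch of the pluggable shape, assuming Hibernate
Search's LockFactoryProvider contract and Lucene 3.x lock factories (the
coordinator check is deliberately left as a placeholder):

import java.io.File;
import java.util.Properties;
import org.apache.lucene.store.LockFactory;
import org.apache.lucene.store.NoLockFactory;
import org.apache.lucene.store.SingleInstanceLockFactory;
import org.hibernate.search.store.LockFactoryProvider;

public class ViewAwareLockFactoryProvider implements LockFactoryProvider {

   @Override
   public LockFactory createLockFactory(File indexDir, Properties properties) {
      // Only the elected master ever writes, so in-JVM locking suffices
      // there; the other nodes can skip index locking altogether.
      return isMaster() ? new SingleInstanceLockFactory()
                        : NoLockFactory.getNoLockFactory();
   }

   private boolean isMaster() {
      // Placeholder: e.g. compare the local JGroups Address with
      // view.getMembers().get(0) on the current View.
      return true;
   }
}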

-Ales

On Apr 8, 2013, at 1:19 PM, Sanne Grinovero  wrote:

> There should be no locking contention at all, that is the whole point
> of using such a backend and forwarding changes to a single node: that
> only a single node ever attempts to acquire this lock. Hence the error
> is a symptom of some previous error; I primarily suspect cluster view
> stability.
> 
> I indeed have committed an experimental alternative backend in
> Infinispan Query (included in master) and another one in Hibernate
> Search (not master but a branch I'm working on);
> the one in Hibernate Search is meant to supersede the others but it's
> not working yet as I need CAS to be working in Infinispan, and this is
> still failing my tests.
> 
> The result of failing CAS is the master election: multiple nodes elect
> themselves, which results in the locking error.
> 
> Ales, is this error happening after a node failure? AFAIK the missing
> feature of the JGroups based backend is that it doesn't cleanup stale
> index locks when a master fails; each master node releases the lock as
> soon as possible (as you have set exclusive_index_use=false) but if
> the node is disconnected exactly during the write operation the lock
> will need to be cleaned up forcefully. I would normally expect this to
> be very unlikely but it could be triggered if you have view stability
> problems.
> We could try integrating some kind of force-lock clean operation but
> it's quite tricky to make sure this happens safely.. there is of
> course a purpose for this lock.
> 
> You could try turning off the seatbelt by setting
> 
> default.locking_strategy = none
> 
> Or make something clever based on JGroups views
> 
> default.locking_strategy = fully.qualified.custom.Implementation
> 
> I do have a cleaner solution with proper lock cleanup routines, but
> these are based on the CAS operation too.. they are failing stress
> tests so I won't commit them yet.
> 
> Sanne
> 
> On 8 April 2013 11:38, Manik Surtani  wrote:
>> 
>> On 8 Apr 2013, at 11:28, Ales Justin  wrote:
>> 
>> This "jgroups" backend was there "long" ago.
>> And it was actually us - CD - that fixed it and made use of it.
>> It's no different from the static JGroups backend; the only diff is that this
>> one elects the master automatically.
>> 
>> I can change to Sanne's new Ispn based prototype if it will help.
>> 
>> But - with my limited cluster knowledge - the issue doesn't look to be
>> there.
>> I mean, the msgs get properly routed to the indexing master, which just
>> cannot handle the locking contention.
>> 
>> 
>> Any thoughts on this, Sanne?
>> 
>> 
>> -Ales
>> 
>> I believe this new backend is WIP in Hibernate Search.  Sanne, didn't you
>> have a prototype in Infinispan's codebase though?
>> 
>> On 5 Apr 2013, at 15:28, Ales Justin  wrote:
>> 
>> are you not using the JGroups backend anymore?
>> 
>> 
>> I'm using that "jgroups" backend, with auto-master election.
>> 
>> these Lock acquisitions are on the index lock, and make me suspect your
>> configuration is no longer applying the pattern we discussed a while back,
>> when you contributed the fixes to the JGroups indexing backend.
>> 
>> Or is it the "Replication timeout for mstruk/capedwarf" which is causing
>> those locking errors?
>> 
>> 
>> No idea.
>> 
>> btw: didn't you say you had some new backend mechanism?
>> Off Infinispan's channel.
>> 
>> -Ales
>> 
>> On 5 April 2013 14:56, Ales Justin  wrote:
>>> 
>>> We're running a GAE HelloOrm2 example app on 3 nodes (3 laptops).
>>> 
>>> Very soon after deploy, we get a never-ending stack of timeouts,
>>> which completely kills the app:
>>> * https://gist.github.com/alesj/5319414
>>> 
>>> I then need to kill the AS in order to get it shutdown.
>>> 
>>> How can this be tuned / fixed?
>>> 
>>> -Ales
>>> 
>>> 
>>> ___
>>> infinispan-dev mailing list
>>> infinispan-dev@lists.jboss.org
>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev
>> 
>> 
>> ___
>> infinispan-dev mailing list
>> infinispan-dev@lists.jboss.org
>> https://lists.jboss.org/mailman/listinfo/infinispan-dev
>> 
>> 
>> ___
>> infinispan-dev mailing list
>> infinispan-dev@lists.jboss.org
>> https://lists.jboss.org/mailman/listinfo/infinispan-dev
>> 
>> 
>> --
>> Manik Surtani
>> ma...@jboss.org
>> twitter.com/maniksurtani
>> 
>> Platform Architect, JBoss Data Grid
>> http://red.ht/data-grid
>> 
>> _

Re: [infinispan-dev] How to run the testsuite?

2013-04-08 Thread Mircea Markus

On 8 Apr 2013, at 10:02, Manik Surtani wrote:

>> I've upgraded to mvn 2.14
> 
> You mean Surefire 2.14. You had me confused for a bit, since I'm pretty sure 
> we enforce mvn 3.x.  ;)
yep, surefire 2.14 :-)

Cheers,
-- 
Mircea Markus
Infinispan lead (www.infinispan.org)





___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev


Re: [infinispan-dev] AdvancedCache.put with Metadata parameter

2013-04-08 Thread Dan Berindei
On Mon, Apr 8, 2013 at 2:36 PM, Galder Zamarreño  wrote:

>
> On Apr 8, 2013, at 1:11 PM, Dan Berindei  wrote:
>
> >
> >
> >
> > On Mon, Apr 8, 2013 at 1:44 PM, Galder Zamarreño 
> wrote:
> >
> > On Apr 8, 2013, at 12:35 PM, Galder Zamarreño  wrote:
> >
> > >
> > > On Apr 8, 2013, at 11:17 AM, Manik Surtani 
> wrote:
> > >
> > >> All sounds very good. One important thing to consider is that the
> reference to Metadata passed in by the client app will be tied to the ICE
> for the entire lifespan of the ICE.  You'll need to think about a defensive
> copy or some other form of making the Metadata immutable (by the user
> application, at least) the moment it is passed in.
> > >
> > > ^ Excellent point, it could be a nightmare if users could change the
> metadata referenced by the ICE at will. I'll have a think on how to best
> achieve this.
> >
> > ^ The metadata is gonna have to be marshalled somehow to ship to other
> nodes, so that could be a way to achieve it, by enforcing this somehow.
> When the cache receives it, it can marshall/unmarshall it to make a copy.
> >
> >
> > If Metadata is just an interface, nothing is stopping the user from
> implementing maxIdle() to return Random.maxLong(). Besides, local caches
> need to support Metadata as well, and we shouldn't force
> serialization/deserialization for local caches.
> >
> > So I think we'd be better off documenting that Metadata objects should
> not change after they are inserted in the cache, just like keys and values.
> >
> >
> One way would be to make Metadata extend Serializable, but not keen on
> that. Another would be to somehow force the interface to define the
> Externalizer to use (i.e. an interface method like getExternalizer()), but
> that's awkward when it comes to unmarshalling… what about forcing the
> Metadata object to be provided with a @SerializeWith annotation?
> >
> > Any other ideas?
> >
> >
> > Why force anything? I think Metadata instances should be treated just
> like keys and values, so they should be able to use Externalizers (via
> @SerializeWith), Serializable, or Externalizable, depending on the user's
> requirements.
>
> ^ I agree.
>
> What do you think of my suggestion in the other email to separate both
> concerns and somehow enforce a copy of the object to be provided instead?
>
>
I wrote my reply before I saw your other email :)

Having said that, I still think enforcing a copy doesn't make sense (see my
other comment).
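To make the "just like keys and values" point concrete, a user-supplied
Metadata implementation could declare its marshalling the usual way --
MyMetadata is hypothetical; only @SerializeWith and Externalizer come
from Infinispan:

import java.io.IOException;
import java.io.ObjectInput;
import java.io.ObjectOutput;
import org.infinispan.marshall.Externalizer;
import org.infinispan.marshall.SerializeWith;

@SerializeWith(MyMetadata.MyExternalizer.class)
public class MyMetadata {
   final long lifespan;
   final long maxIdle;

   public MyMetadata(long lifespan, long maxIdle) {
      this.lifespan = lifespan;
      this.maxIdle = maxIdle;
   }

   public static class MyExternalizer implements Externalizer<MyMetadata> {
      @Override
      public void writeObject(ObjectOutput output, MyMetadata m) throws IOException {
         output.writeLong(m.lifespan);
         output.writeLong(m.maxIdle);
      }

      @Override
      public MyMetadata readObject(ObjectInput input)
            throws IOException, ClassNotFoundException {
         return new MyMetadata(input.readLong(), input.readLong());
      }
   }
}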




> >
> >
> > >
> > > Cheers,
> > >
> > >>
> > >> On 8 Apr 2013, at 09:24, Galder Zamarreño  wrote:
> > >>
> > >>> Hi all,
> > >>>
> > >>> As mentioned in
> http://lists.jboss.org/pipermail/infinispan-dev/2013-March/012348.html,
> in parallel to the switch to Equivalent* collections, I was also working on
> being able to pass metadata into Infinispan caches. This is done to better
> support the ability to store custom metadata in Infinispan without the need
> for extra wrappers. So, the idea is that InternalCacheEntry instances will
> have a reference to this Metadata.
> > >>>
> > >>> One piece of that metadata is the version, which I've been using as a test
> bed to see if clients could successfully pass version information via metadata.
> As you already know, Hot Rod needs to store version information. Before,
> this was stored in a class called CacheValue alongside the value itself,
> but with the work I've done in [1], this is passed via the new API I've added in
> [2].
> > >>>
> > >>> So, I'd like to get some thoughts on this new API. I hope that with
> these new put/replace versions, we can get rid of the nightmare which is
> all the other put/replace calls taking lifespan and/or maxIdle information.
> In the end, I think there should be two basic puts:
> > >>>
> > >>> - put(K, V)
> > >>> - put(K, V, Metadata)
> > >>>
> > >>> And their equivalents.
> > >>>
> > >>> IMPORTANT NOTE 1: The implementation details are bound to change,
> because the entire Metadata needs to be stored in InternalCacheEntry, not
> just version, lifespan..etc. I'll further develop the implementation once I
> get into adding more metadata, i.e. when working on interoperability with
> REST. So, don't pay too much attention to the implementation itself, focus
> on the AdvancedCache API itself and let's refine that.
> > >>>
> > >>> IMPORTANT NOTE 2: The interoperability work in commit in [1] is WIP,
> so please let's avoid discussing it in this email thread. Once I have a
> more final version I'll send an email about it.
> > >>>
> > >>> Apart from working on enhancements to the API, I'm now carrying on
> tackling the interoperability work, with the aim of having an initial version of
> the Embedded <-> Hot Rod interoperability as first step. Once that's in, it
> can be released to get early feedback while the rest of interoperability
> modes are developed.
> > >>>
> > >>> Cheers,
> > >>>
> > >>> [1]
> https://github.com/galderz/infinispan/commit/a35956fe291d2b2dc3b7fa7bf44d8965ffb1a54d
> > >>> [2]
> https://github.com/galderz/infinispan/commit/a359

Re: [infinispan-dev] AdvancedCache.put with Metadata parameter

2013-04-08 Thread Sanne Grinovero
I fail to understand the purpose of the feature then. What prevents me
from using the existing code today, just storing some extra fields in my
custom values? What do we get by adding this code?

Sanne

On 8 April 2013 12:40, Galder Zamarreño  wrote:
>
> On Apr 8, 2013, at 1:26 PM, Sanne Grinovero  wrote:
>
>> On 8 April 2013 12:06, Galder Zamarreño  wrote:
>>>
>>> On Apr 8, 2013, at 12:56 PM, Sanne Grinovero  wrote:
>>>



 On 8 April 2013 11:44, Galder Zamarreño  wrote:

 On Apr 8, 2013, at 12:35 PM, Galder Zamarreño  wrote:

>
> On Apr 8, 2013, at 11:17 AM, Manik Surtani  wrote:
>
>> All sounds very good. One important thing to consider is that the 
>> reference to Metadata passed in by the client app will be tied to the 
>> ICE for the entire lifespan of the ICE.  You'll need to think about a 
>> defensive copy or some other form of making the Metadata immutable (by 
>> the user application, at least) the moment it is passed in.
>
> ^ Excellent point, it could be a nightmare if users could change the 
> metadata referenced by the ICE at will. I'll have a think on how to best 
> achieve this.

 ^ The metadata is gonna have to be marshalled somehow to ship to other 
 nodes, so that could be a way to achieve it, by enforcing this somehow. 
 When the cache receives it, it can marshall/unmarshall it to make a copy.

 One way would be to make Metadata extend Serializable, but not keen on 
 that. Another would be to somehow force the interface to define the 
 Externalizer to use (i.e. an interface method like getExternalizer()), but 
 that's awkward when it comes to unmarshalling… what about forcing the 
 Metadata object to be provided with a @SerializeWith annotation?

 Why is getExternalizer() awkward for unmarshalling?
>>>
>>> ^ Because you don't have an instance yet, so what's the Externalizer for 
>>> it? IOW, there's not much point in doing that; simply register it 
>>> however you prefer:
>>> https://docs.jboss.org/author/display/ISPN/Plugging+Infinispan+With+User+Defined+Externalizers
>>
>> That's what I would expect.
>>
>>>
 I would expect you to have the marshaller already known during 
 deserialization.
>>>
>>> You would, as long as you follow the instructions in 
>>> https://docs.jboss.org/author/display/ISPN/Plugging+Infinispan+With+User+Defined+Externalizers
>>>
 Agreed that extending Serializable is not a good idea.

 Are you thinking about the impact on CacheStore(s) and state transfer?
>>>
>>> ^ What about it in particular?
>>>
 Eviction of no longer used metadata ?
>>>
>>> ^ Since the metadata is part of the entry, it'd initially go when the entry 
>>> is evicted. We might wanna leave it around in some cases… but it'd be for 
>>> other use cases.
>>
>> I thought the plan was to have entries refer to the metadata, but that
>> different entries sharing the same metadata would point to the same
>> instance.
>
> ^ Could be, but most likely not.
>
>> So this metadata needs to be stored separately in the CacheStore,
>> preloaded as appropriate, transferred during state transfer,
>> passivated when convenient and cleaned up when no longer referred to.
>
> ^ Well, it's part of the internal cache entry, so it'd be treated just like 
> ICE.
>
>> Am I wrong? Seems you plan to store a copy of the metadata within each ICE.
>
> ^ The idea is to store it alongside right now, but maybe at some point it 
> might make sense to leave it around (i.e. for 2LC use case), but this won't 
> be done yet.
>
>>
>>
>>>
>>> I'm also considering separating the serialization/marshalling concerns from 
>>> the defensive copying concerns. IOW, add a copy() method to the Metadata 
>>> interface, or have a separate interface for those externally provided 
>>> objects that require to be defensive copied. IOW, do something like what 
>>> Scala Case Classes do with their copy() method, but without the issues of 
>>> clone… I need to investigate this further to come up with a nice solution.
>>>
>>> One positive side to splitting both concerns is speed. A Metadata 
>>> implementation might have ways to make a copy of itself which are more 
>>> efficient than marshalling/unmarshalling.
>>>
>>> Thoughts?
>>>

 Sanne


 Any other ideas?

>
> Cheers,
>
>>
>> On 8 Apr 2013, at 09:24, Galder Zamarreño  wrote:
>>
>>> Hi all,
>>>
>>> As mentioned in 
>>> http://lists.jboss.org/pipermail/infinispan-dev/2013-March/012348.html, 
>>> in parallel to the switch to Equivalent* collections, I was also 
>>> working on being able to pass metadata into Infinispan caches. This is 
>>> done to better support the ability to store custom metadata in 
>>> Infinispan without the need for extra wrappers. So, the idea is that 
>>> InternalCacheEntry instances will have a reference to this Metadata.
>>>
>>> One of that met

Re: [infinispan-dev] AdvancedCache.put with Metadata parameter

2013-04-08 Thread Galder Zamarreño

On Apr 8, 2013, at 1:26 PM, Sanne Grinovero  wrote:

> On 8 April 2013 12:06, Galder Zamarreño  wrote:
>> 
>> On Apr 8, 2013, at 12:56 PM, Sanne Grinovero  wrote:
>> 
>>> 
>>> 
>>> 
>>> On 8 April 2013 11:44, Galder Zamarreño  wrote:
>>> 
>>> On Apr 8, 2013, at 12:35 PM, Galder Zamarreño  wrote:
>>> 
 
 On Apr 8, 2013, at 11:17 AM, Manik Surtani  wrote:
 
> All sounds very good. One important thing to consider is that the 
> reference to Metadata passed in by the client app will be tied to the ICE 
> for the entire lifespan of the ICE.  You'll need to think about a 
> defensive copy or some other form of making the Metadata immutable (by 
> the user application, at least) the moment it is passed in.
 
 ^ Excellent point, it could be a nightmare if users could change the 
 metadata referenced by the ICE at will. I'll have a think on how to best 
 achieve this.
>>> 
>>> ^ The metadata is gonna have to be marshalled somehow to ship to other 
>>> nodes, so that could be a way to achieve it, by enforcing this somehow. 
>>> When the cache receives it, it can marshall/unmarshall it to make a copy.
>>> 
>>> One way would be to make Metadata extend Serializable, but not keen on 
>>> that. Another would be to somehow force the interface to define the 
>>> Externalizer to use (i.e. an interface method like getExternalizer()), but 
>>> that's awkward when it comes to unmarshalling… what about forcing the 
>>> Metadata object to be provided with a @SerializeWith annotation?
>>> 
>>> Why is getExternalizer() awkward for unmarshalling?
>> 
>> ^ Because you don't have an instance yet, so what's the Externalizer for it? 
>> IOW, there's not much point in doing that; simply register it 
>> however you prefer:
>> https://docs.jboss.org/author/display/ISPN/Plugging+Infinispan+With+User+Defined+Externalizers
> 
> That's what I would expect.
> 
>> 
>>> I would expect you to have the marshaller already known during 
>>> deserialization.
>> 
>> You would, as long as you follow the instructions in 
>> https://docs.jboss.org/author/display/ISPN/Plugging+Infinispan+With+User+Defined+Externalizers
>> 
>>> Agreed that extending Serializable is not a good idea.
>>> 
>>> Are you thinking about the impact on CacheStore(s) and state transfer?
>> 
>> ^ What about it in particular?
>> 
>>> Eviction of no longer used metadata ?
>> 
>> ^ Since the metadata is part of the entry, it'd initially go when the entry 
>> is evicted. We might wanna leave it around in some cases… but it'd be for 
>> other use cases.
> 
> I thought the plan was to have entries refer to the metadata, but that
> different entries sharing the same metadata would point to the same
> instance.

^ Could be, but most likely not.

> So this metadata needs to be stored separately in the CacheStore,
> preloaded as appropriate, transferred during state transfer,
> passivated when convenient and cleaned up when no longer referred to.

^ Well, it's part of the internal cache entry, so it'd be treated just like ICE.

> Am I wrong? Seems you plan to store a copy of the metadata within each ICE.

^ The idea is to store it alongside right now, but maybe at some point it might 
make sense to leave it around (i.e. for 2LC use case), but this won't be done 
yet.
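As an aside, the "simply register it" route from the wiki page quoted above
looks roughly like this with the programmatic API -- a sketch assuming
Infinispan 5.x class names, with a deliberately trivial metadata type:

import java.io.IOException;
import java.io.ObjectInput;
import java.io.ObjectOutput;
import java.util.Collections;
import java.util.Set;
import org.infinispan.configuration.global.GlobalConfiguration;
import org.infinispan.configuration.global.GlobalConfigurationBuilder;
import org.infinispan.manager.DefaultCacheManager;
import org.infinispan.manager.EmbeddedCacheManager;
import org.infinispan.marshall.AdvancedExternalizer;

class RegisterExternalizer {
   static class MyMetadata {
      final long lifespan;
      MyMetadata(long lifespan) { this.lifespan = lifespan; }
   }

   static class MyMetadataExternalizer implements AdvancedExternalizer<MyMetadata> {
      @Override
      public void writeObject(ObjectOutput output, MyMetadata m) throws IOException {
         output.writeLong(m.lifespan);
      }
      @Override
      public MyMetadata readObject(ObjectInput input) throws IOException {
         return new MyMetadata(input.readLong());
      }
      @Override
      public Set<Class<? extends MyMetadata>> getTypeClasses() {
         return Collections.<Class<? extends MyMetadata>>singleton(MyMetadata.class);
      }
      @Override
      public Integer getId() { return 2500; } // must not clash with other ids
   }

   public static void main(String[] args) {
      // Registered up front, so unmarshalling never needs a pre-existing
      // instance of the object being read:
      GlobalConfiguration global = new GlobalConfigurationBuilder()
         .serialization().addAdvancedExternalizer(new MyMetadataExternalizer())
         .build();
      EmbeddedCacheManager cm = new DefaultCacheManager(global);
      cm.stop();
   }
}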

> 
> 
>> 
>> I'm also considering separating the serialization/marshalling concerns from 
>> the defensive copying concerns. IOW, add a copy() method to the Metadata 
>> interface, or have a separate interface for those externally provided 
>> objects that require to be defensive copied. IOW, do something like what 
>> Scala Case Classes do with their copy() method, but without the issues of 
>> clone… I need to investigate this further to come up with a nice solution.
>> 
>> One positive side to splitting both concerns is speed. A Metadata 
>> implementation might have ways to make a copy of itself which are more 
>> efficient than marshalling/unmarshalling.
>> 
>> Thoughts?
>> 
>>> 
>>> Sanne
>>> 
>>> 
>>> Any other ideas?
>>> 
 
 Cheers,
 
> 
> On 8 Apr 2013, at 09:24, Galder Zamarreño  wrote:
> 
>> Hi all,
>> 
>> As mentioned in 
>> http://lists.jboss.org/pipermail/infinispan-dev/2013-March/012348.html, 
>> in parallel to the switch to Equivalent* collections, I was also working 
>> on being able to pass metadata into Infinispan caches. This is done to 
>> better support the ability to store custom metadata in Infinispan 
>> without the need for extra wrappers. So, the idea is that 
>> InternalCacheEntry instances will have a reference to this Metadata.
>> 
>> One piece of that metadata is the version, which I've been using as a test 
>> bed to see if clients could successfully pass version information via 
>> metadata. As you already know, Hot Rod needs to store version information. 
>> Before, this was stored in a class called CacheValue alongside the value 
>> itself, but with the work I've done in [1]

Re: [infinispan-dev] AdvancedCache.put with Metadata parameter

2013-04-08 Thread Galder Zamarreño

On Apr 8, 2013, at 1:11 PM, Dan Berindei  wrote:

> 
> 
> 
> On Mon, Apr 8, 2013 at 1:44 PM, Galder Zamarreño  wrote:
> 
> On Apr 8, 2013, at 12:35 PM, Galder Zamarreño  wrote:
> 
> >
> > On Apr 8, 2013, at 11:17 AM, Manik Surtani  wrote:
> >
> >> All sounds very good. One important thing to consider is that the 
> >> reference to Metadata passed in by the client app will be tied to the ICE 
> >> for the entire lifespan of the ICE.  You'll need to think about a 
> >> defensive copy or some other form of making the Metadata immutable (by the 
> >> user application, at least) the moment it is passed in.
> >
> > ^ Excellent point, it could be a nightmare if users could change the 
> > metadata referenced by the ICE at will. I'll have a think on how to best 
> > achieve this.
> 
> ^ The metadata is gonna have to be marshalled somehow to ship to other nodes, 
> so that could be a way to achieve it, by enforcing this somehow. When the 
> cache receives it, it can marshal/unmarshal it to make a copy
> 
> 
> If Metadata is just an interface, nothing is stopping the user from 
> implementing maxIdle() to return Random.nextLong(). Besides, local caches need 
> to support Metadata as well, and we shouldn't force 
> serialization/deserialization for local caches.
> 
> So I think we'd be better off documenting that Metadata objects should not 
> change after they are inserted in the cache, just like keys and values. 
> 
>  
> One way would be to make Metadata extend Serializable, but not keen on that. 
> Another would be to somehow force the interface to define the Externalizer to 
> use (i.e. an interface method like getExternalizer()), but that's awkward when 
> it comes to unmarshalling… what about forcing the Metadata object to be 
> provided with a @SerializeWith annotation?
> 
> Any other ideas?
> 
> 
> Why force anything? I think Metadata instances should be treated just like 
> keys and values, so they should be able to use Externalizers (via 
> @SerializeWith), Serializable, or Externalizable, depending on the user's 
> requirements.

^ I agree. 

What do you think of my suggestion in the other email to separate both concerns 
and somehow enforce a copy of the object to be provided instead?

> 
>  
> >
> > Cheers,
> >
> >>
> >> On 8 Apr 2013, at 09:24, Galder Zamarreño  wrote:
> >>
> >>> Hi all,
> >>>
> >>> As mentioned in 
> >>> http://lists.jboss.org/pipermail/infinispan-dev/2013-March/012348.html, 
> >>> in parallel to the switch to Equivalent* collections, I was also working 
> >>> on being able to pass metadata into Infinispan caches. This is done to 
> >>> better support the ability to store custom metadata in Infinispan without 
> >>> the need of extra wrappers. So, the idea is that InternalCacheEntry 
> >>> instances will have a a reference to this Metadata.
> >>>
> >>> One piece of that metadata is the version, which I've been using as a test 
> >>> bed to see if clients could successfully pass version information via 
> >>> metadata. As you already know, Hot Rod requires version information to be stored. 
> >>> this was stored in a class called CacheValue alongside the value itself, 
> >>> but the work I've done in [1], this is passed via the new API I've added 
> >>> in [2].
> >>>
> >>> So, I'd like to get some thoughts on this new API. I hope that with these 
> >>> new put/replace versions, we can get rid of the nightmare which is all 
> >>> the other put/replace calls taking lifespan and/or maxIdle information. 
> >>> In the end, I think there should be two basic puts:
> >>>
> >>> - put(K, V)
> >>> - put(K, V, Metadata)
> >>>
> >>> And their equivalents.
> >>>
> >>> IMPORTANT NOTE 1: The implementation details are bound to change, because 
> >>> the entire Metadata needs to be stored in InternalCacheEntry, not just 
> >>> version, lifespan..etc. I'll further develop the implementation once I 
> >>> get into adding more metadata, i.e. when working on interoperability with 
> >>> REST. So, don't pay too much attention to the implementation itself, 
> >>> focus on the AdvancedCache API itself and let's refine that.
> >>>
> >>> IMPORTANT NOTE 2: The interoperability work in commit in [1] is WIP, so 
> >>> please let's avoid discussing it in this email thread. Once I have a more 
> >>> final version I'll send an email about it.
> >>>
> >>> Apart from working on enhancements to the API, I'm now carrying on with 
> >>> the interoperability work, with the aim of having an initial version of the 
> >>> Embedded <-> Hot Rod interoperability as a first step. Once that's in, it 
> >>> can be released to get early feedback while the rest of interoperability 
> >>> modes are developed.
> >>>
> >>> Cheers,
> >>>
> >>> [1] 
> >>> https://github.com/galderz/infinispan/commit/a35956fe291d2b2dc3b7fa7bf44d8965ffb1a54d
> >>> [2] 
> >>> https://github.com/galderz/infinispan/commit/a35956fe291d2b2dc3b7fa7bf44d8965ffb1a54d#L10R313
> >>> --
> >>> Galder Zamarreño
> >>> gal...@redhat.com
> >>> twitter.com/galderz
> >>>
> >>> Project Lead

Re: [infinispan-dev] AdvancedCache.put with Metadata parameter

2013-04-08 Thread Sanne Grinovero
On 8 April 2013 12:06, Galder Zamarreño  wrote:
>
> On Apr 8, 2013, at 12:56 PM, Sanne Grinovero  wrote:
>
>>
>>
>>
>> On 8 April 2013 11:44, Galder Zamarreño  wrote:
>>
>> On Apr 8, 2013, at 12:35 PM, Galder Zamarreño  wrote:
>>
>> >
>> > On Apr 8, 2013, at 11:17 AM, Manik Surtani  wrote:
>> >
>> >> All sounds very good. One important thing to consider is that the 
>> >> reference to Metadata passed in by the client app will be tied to the ICE 
>> >> for the entire lifespan of the ICE.  You'll need to think about a 
>> >> defensive copy or some other form of making the Metadata immutable (by 
>> >> the user application, at least) the moment it is passed in.
>> >
>> > ^ Excellent point, it could be a nightmare if users could change the 
> > metadata referenced by the ICE at will. I'll have a think on how to best 
>> > achieve this.
>>
>> ^ The metadata is gonna have to be marshalled somehow to ship to other 
>> nodes, so that could be a way to achieve it, by enforcing this somehow. When 
>> the cache receives it, it can marshal/unmarshal it to make a copy
>>
>> One way would be to make Metadata extend Serializable, but not keen on that. 
>> Another would be to somehow force the interface to define the Externalizer 
>> to use (i.e. an interface method like getExternalizer()), but that's awkward 
>> when it comes to unmarshalling… what about forcing the Metadata object to be 
>> provided with a @SerializeWith annotation?
>>
>> Why is getExternalizer() awkward for unmarshalling?
>
> ^ Because you don't have an instance yet, so what's the Externalizer for it? 
> IOW, there's not much point in doing that; simply register it as 
> appropriate:
> https://docs.jboss.org/author/display/ISPN/Plugging+Infinispan+With+User+Defined+Externalizers

That's what I would expect.

>
>> I would expect you to have the marshaller already known during 
>> deserialization.
>
> You would, as long as you follow the instructions in 
> https://docs.jboss.org/author/display/ISPN/Plugging+Infinispan+With+User+Defined+Externalizers
>
>> Agreed that extending Serializable is not a good idea.
>>
>> Are you thinking about the impact on CacheStore(s) and state transfer?
>
> ^ What about it in particular?
>
>> Eviction of no longer used metadata ?
>
> ^ Since the metadata is part of the entry, it'd initially go when the entry 
> is evicted. We might wanna leave it around in some cases… but it'd be for 
> other use cases.

I thought the plan was to have entries refer to the metadata, but that
different entries sharing the same metadata would point to the same
instance.
So this metadata needs to be stored separately in the CacheStore,
preloaded as appropriate, transferred during state transfer,
passivated when convenient and cleaned up when no longer referred to.

Am I wrong? Seems you plan to store a copy of the metadata within each ICE.


>
> I'm also considering separating the serialization/marshalling concerns from 
> the defensive copying concerns. IOW, add a copy() method to the Metadata 
> interface, or have a separate interface for those externally provided objects 
> that require defensive copying. IOW, do something like what Scala case 
> classes do with their copy() method, but without the issues of clone… I need 
> to investigate this further to come up with a nice solution.
>
> One positive side to splitting both concerns is speed. A Metadata 
> implementation might have ways to make a copy of itself which are more 
> efficient than marshalling/unmarshalling.
>
> Thoughts?
>
>>
>> Sanne
>>
>>
>> Any other ideas?
>>
>> >
>> > Cheers,
>> >
>> >>
>> >> On 8 Apr 2013, at 09:24, Galder Zamarreño  wrote:
>> >>
>> >>> Hi all,
>> >>>
>> >>> As mentioned in 
>> >>> http://lists.jboss.org/pipermail/infinispan-dev/2013-March/012348.html, 
>> >>> in parallel to the switch to Equivalent* collections, I was also working 
>> >>> on being able to pass metadata into Infinispan caches. This is done to 
>> >>> better support the ability to store custom metadata in Infinispan 
>> >>> without the need of extra wrappers. So, the idea is that 
>> >>> InternalCacheEntry instances will have a reference to this Metadata.
>> >>>
>> >>> One piece of that metadata is the version, which I've been using as a test 
>> >>> bed to see if clients could successfully pass version information via 
>> >>> metadata. As you already know, Hot Rod requires version information to be stored. 
>> >>> Before, this was stored in a class called CacheValue alongside the value 
>> >>> itself, but with the work I've done in [1], this is now passed via the new API 
>> >>> I've added in [2].
>> >>>
>> >>> So, I'd like to get some thoughts on this new API. I hope that with 
>> >>> these new put/replace versions, we can get rid of the nightmare which is 
>> >>> all the other put/replace calls taking lifespan and/or maxIdle 
>> >>> information. In the end, I think there should be two basic puts:
>> >>>
>> >>> - put(K, V)
>> >>> - put(K, V, Metadata)
>> >>>
>> >>> And their equivalents.
>> >>>

Re: [infinispan-dev] query repl timeout

2013-04-08 Thread Sanne Grinovero
There should be no locking contention at all; that is the whole point
of using such a backend and forwarding changes to a single node: that
only a single node ever attempts to acquire this lock. Hence the error
is a symptom of some previous error; I primarily suspect cluster view
stability.

I indeed have committed an experimental alternative backend in
Infinispan Query (included in master) and another one in Hibernate
Search (not master but a branch I'm working on);
the one in Hibernate Search is meant to supersede the others, but it's
not working yet as I need CAS to be working in Infinispan, and this is
still failing my tests.

The result of the failing CAS shows up in master election: multiple nodes elect
themselves, which results in the locking error.
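
To make the failing step concrete: the election boils down to a single atomic
operation on a clustered cache. A minimal sketch, assuming a replicated cache;
the cache name, key and wrapper class are illustrative only:

import org.infinispan.Cache;
import org.infinispan.manager.EmbeddedCacheManager;
import org.infinispan.remoting.transport.Address;

public class MasterElection {

   // Illustrative names; the cache must be clustered (REPL or DIST).
   public static boolean electSelf(EmbeddedCacheManager cacheManager) {
      Cache<String, Address> cache = cacheManager.getCache("coordination");
      Address self = cacheManager.getAddress();
      // putIfAbsent is the CAS primitive: exactly one node sees null
      // returned and becomes master; every other node reads the winner.
      Address winner = cache.putIfAbsent("index-master", self);
      return winner == null || winner.equals(self);
   }
}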

Ales, is this error happening after a node failure? AFAIK the missing
feature of the JGroups-based backend is that it doesn't clean up stale
index locks when a master fails; each master node releases the lock as
soon as possible (as you have set exclusive_index_use=false) but if
the node is disconnected exactly during the write operation the lock
will need to be cleaned up forcefully. I would normally expect this to
be very unlikely but it could be triggered if you have view stability
problems.
We could try integrating some kind of forced lock-cleanup operation, but
it's quite tricky to make sure this happens safely… there is of
course a purpose for this lock.

You could try turning off the seatbelt by setting

default.locking_strategy = none

Or make something clever based on JGroups views

default.locking_strategy = fully.qualified.custom.Implementation
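
As an illustration of the "something clever" route, a minimal sketch against
Lucene 3.x's LockFactory API. isCurrentMaster() is a hypothetical hook you
would implement against the current JGroups view, and depending on the
Hibernate Search version you may need to plug this in through its lock factory
SPI rather than by naming the class directly:

import java.io.IOException;

import org.apache.lucene.store.Lock;
import org.apache.lucene.store.LockFactory;

public class MasterOnlyLockFactory extends LockFactory {

   @Override
   public Lock makeLock(String lockName) {
      return new Lock() {
         @Override
         public boolean obtain() throws IOException {
            // Only the elected master may acquire the index lock; other
            // nodes fail fast instead of queueing on a stale lock.
            return isCurrentMaster();
         }

         @Override
         public void release() throws IOException {
            // Nothing to release: no physical lock is ever taken.
         }

         @Override
         public boolean isLocked() throws IOException {
            return false;
         }
      };
   }

   @Override
   public void clearLock(String lockName) throws IOException {
      // Nothing to clear.
   }

   private boolean isCurrentMaster() {
      // Hypothetical: consult the current JGroups view / master election here.
      return true;
   }
}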

I do have a cleaner solution with proper lock cleanup routines, but
these are based on the CAS operation too… they are failing stress
tests so I won't commit them yet.

Sanne

On 8 April 2013 11:38, Manik Surtani  wrote:
>
> On 8 Apr 2013, at 11:28, Ales Justin  wrote:
>
> This "jgroups" backend was there "long" ago.
> And it was actually us - CD - that fixed it and made use of it.
> It's no different from the static JGroups backend; the only difference is that
> this one elects the master automatically.
>
> I can change to Sanne's new Ispn based prototype if it will help.
>
> But - with my limited cluster knowledge - the issue doesn't look to be
> there.
> I mean, the messages get properly routed to the indexing master, which just cannot
> handle locking contention.
>
>
> Any thoughts on this, Sanne?
>
>
> -Ales
>
> I believe this new backend is WIP in Hibernate Search.  Sanne, didn't you
> have a prototype in Infinispan's codebase though?
>
> On 5 Apr 2013, at 15:28, Ales Justin  wrote:
>
> are you not using the JGroups backend anymore?
>
>
> I'm using that "jgroups" backend, with auto-master election.
>
> these Lock acquisitions are on the index lock, and make me suspect your
> configuration is no longer applying the pattern we discussed a while back,
> when you contributed the fix to the JGroups indexing backend.
>
> Or is it the "Replication timeout for mstruk/capedwarf" which is causing
> those locking errors?
>
>
> No idea.
>
> btw: didn't you say you had some new backend mechanism?
> Off Infinispan's channel.
>
> -Ales
>
> On 5 April 2013 14:56, Ales Justin  wrote:
>>
>> We're running a GAE HelloOrm2 example app on 3 nodes (3 laptops).
>>
>> Very soon after deploy, we get a never-ending stack of timeouts,
>> which completely kills the app:
>> * https://gist.github.com/alesj/5319414
>>
>> I then need to kill the AS in order to get it shutdown.
>>
>> How can this be tuned / fixed?
>>
>> -Ales
>>
>>
>> ___
>> infinispan-dev mailing list
>> infinispan-dev@lists.jboss.org
>> https://lists.jboss.org/mailman/listinfo/infinispan-dev
>
>
> ___
> infinispan-dev mailing list
> infinispan-dev@lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/infinispan-dev
>
>
> ___
> infinispan-dev mailing list
> infinispan-dev@lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/infinispan-dev
>
>
> --
> Manik Surtani
> ma...@jboss.org
> twitter.com/maniksurtani
>
> Platform Architect, JBoss Data Grid
> http://red.ht/data-grid
>
> ___
> infinispan-dev mailing list
> infinispan-dev@lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/infinispan-dev
>
>
> ___
> infinispan-dev mailing list
> infinispan-dev@lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/infinispan-dev
>
>
> --
> Manik Surtani
> ma...@jboss.org
> twitter.com/maniksurtani
>
> Platform Architect, JBoss Data Grid
> http://red.ht/data-grid
>
>
> ___
> infinispan-dev mailing list
> infinispan-dev@lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/infinispan-dev
___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev

Re: [infinispan-dev] AdvancedCache.put with Metadata parameter

2013-04-08 Thread Dan Berindei
On Mon, Apr 8, 2013 at 1:44 PM, Galder Zamarreño  wrote:

>
> On Apr 8, 2013, at 12:35 PM, Galder Zamarreño  wrote:
>
> >
> > On Apr 8, 2013, at 11:17 AM, Manik Surtani  wrote:
> >
> >> All sounds very good. One important thing to consider is that the
> reference to Metadata passed in by the client app will be tied to the ICE
> for the entire lifespan of the ICE.  You'll need to think about a defensive
> copy or some other form of making the Metadata immutable (by the user
> application, at least) the moment it is passed in.
> >
> > ^ Excellent point, it could be a nightmare if users could change the
> metadata referenced by the ICE at will. I'll have a think on how to best
> achieve this.
>
> ^ The metadata is gonna have to be marshalled somehow to ship to other
> nodes, so that could be a way to achieve it, by enforcing this somehow.
> When the cache receives it, it can marshal/unmarshal it to make a copy
>
>
If Metadata is just an interface, nothing is stopping the user from
implementing maxIdle() to return Random.nextLong(). Besides, local caches
need to support Metadata as well, and we shouldn't force
serialization/deserialization for local caches.

So I think we'd be better off documenting that Metadata objects should not
change after they are inserted in the cache, just like keys and values.



> One way would be to make Metadata extend Serializable, but not keen on
> that. Another would be to somehow force the interface to define the
> Externalizer to use (i.e. an interface method like getExternalizer()), but
> that's awkward when it comes to unmarshalling… what about forcing the
> Metadata object to be provided with a @SerializeWith annotation?
>
> Any other ideas?
>
>
Why force anything? I think Metadata instances should be treated just like
keys and values, so they should be able to use Externalizers (via
@SerializeWith), Serializable, or Externalizable, depending on the user's
requirements.
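
To make that concrete, a user-provided Metadata implementation would then look
like any other @SerializeWith-annotated type. A minimal sketch, assuming the
Metadata interface from this thread exposes lifespan() and version() as plain
longs for brevity; CustomMetadata and its fields are illustrative only:

import java.io.IOException;
import java.io.ObjectInput;
import java.io.ObjectOutput;

import org.infinispan.marshall.Externalizer;
import org.infinispan.marshall.SerializeWith;

@SerializeWith(CustomMetadata.CustomMetadataExternalizer.class)
public class CustomMetadata implements Metadata { // the interface proposed in this thread

   private final long lifespan; // immutable, per the "document it" approach
   private final long version;

   public CustomMetadata(long lifespan, long version) {
      this.lifespan = lifespan;
      this.version = version;
   }

   public long lifespan() { return lifespan; }
   public long version() { return version; }

   public static class CustomMetadataExternalizer implements Externalizer<CustomMetadata> {
      @Override
      public void writeObject(ObjectOutput output, CustomMetadata object) throws IOException {
         output.writeLong(object.lifespan);
         output.writeLong(object.version);
      }

      @Override
      public CustomMetadata readObject(ObjectInput input) throws IOException, ClassNotFoundException {
         return new CustomMetadata(input.readLong(), input.readLong());
      }
   }
}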



> >
> > Cheers,
> >
> >>
> >> On 8 Apr 2013, at 09:24, Galder Zamarreño  wrote:
> >>
> >>> Hi all,
> >>>
> >>> As mentioned in
> http://lists.jboss.org/pipermail/infinispan-dev/2013-March/012348.html,
> in parallel to the switch to Equivalent* collections, I was also working on
> being able to pass metadata into Infinispan caches. This is done to better
> support the ability to store custom metadata in Infinispan without the need
> of extra wrappers. So, the idea is that InternalCacheEntry instances will
> have a reference to this Metadata.
> >>>
> >>> One piece of that metadata is the version, which I've been using as a test
> bed to see if clients could successfully pass version information via metadata.
> As you already know, Hot Rod requires version information to be stored. Before,
> this was stored in a class called CacheValue alongside the value itself,
> but with the work I've done in [1], this is now passed via the new API I've added in
> [2].
> >>>
> >>> So, I'd like to get some thoughts on this new API. I hope that with
> these new put/replace versions, we can get rid of the nightmare which is
> all the other put/replace calls taking lifespan and/or maxIdle information.
> In the end, I think there should be two basic puts:
> >>>
> >>> - put(K, V)
> >>> - put(K, V, Metadata)
> >>>
> >>> And their equivalents.
> >>>
> >>> IMPORTANT NOTE 1: The implementation details are bound to change,
> because the entire Metadata needs to be stored in InternalCacheEntry, not
> just version, lifespan..etc. I'll further develop the implementation once I
> get into adding more metadata, i.e. when working on interoperability with
> REST. So, don't pay too much attention to the implementation itself, focus
> on the AdvancedCache API itself and let's refine that.
> >>>
> >>> IMPORTANT NOTE 2: The interoperability work in commit in [1] is WIP,
> so please let's avoid discussing it in this email thread. Once I have a
> more final version I'll send an email about it.
> >>>
> >>> Apart from working on enhancements to the API, I'm now carrying on with
> the interoperability work, with the aim of having an initial version of
> the Embedded <-> Hot Rod interoperability as a first step. Once that's in, it
> can be released to get early feedback while the rest of interoperability
> modes are developed.
> >>>
> >>> Cheers,
> >>>
> >>> [1]
> https://github.com/galderz/infinispan/commit/a35956fe291d2b2dc3b7fa7bf44d8965ffb1a54d
> >>> [2]
> https://github.com/galderz/infinispan/commit/a35956fe291d2b2dc3b7fa7bf44d8965ffb1a54d#L10R313
> >>> --
> >>> Galder Zamarreño
> >>> gal...@redhat.com
> >>> twitter.com/galderz
> >>>
> >>> Project Lead, Escalante
> >>> http://escalante.io
> >>>
> >>> Engineer, Infinispan
> >>> http://infinispan.org
> >>>
> >>>
> >>> ___
> >>> infinispan-dev mailing list
> >>> infinispan-dev@lists.jboss.org
> >>> https://lists.jboss.org/mailman/listinfo/infinispan-dev
> >>
> >> --
> >> Manik Surtani
> >> ma...@jboss.org
> >> twitter.com/maniksurtani
> >>
> >> Platform Architect, JBoss Data Grid
> >>

Re: [infinispan-dev] AdvancedCache.put with Metadata parameter

2013-04-08 Thread Galder Zamarreño

On Apr 8, 2013, at 12:56 PM, Sanne Grinovero  wrote:

> 
> 
> 
> On 8 April 2013 11:44, Galder Zamarreño  wrote:
> 
> On Apr 8, 2013, at 12:35 PM, Galder Zamarreño  wrote:
> 
> >
> > On Apr 8, 2013, at 11:17 AM, Manik Surtani  wrote:
> >
> >> All sounds very good. One important thing to consider is that the 
> >> reference to Metadata passed in by the client app will be tied to the ICE 
> >> for the entire lifespan of the ICE.  You'll need to think about a 
> >> defensive copy or some other form of making the Metadata immutable (by the 
> >> user application, at least) the moment it is passed in.
> >
> > ^ Excellent point, it could be a nightmare if users could change the 
> > metadata referenced by the ICE at will. I'll have a think on how to best 
> > achieve this.
> 
> ^ The metadata is gonna have to be marshalled somehow to ship to other nodes, 
> so that could be a way to achieve it, by enforcing this somehow. When the 
> cache receives it, it can marshal/unmarshal it to make a copy
> 
> One way would be to make Metadata extend Serializable, but not keen on that. 
> Another would be to somehow force the interface to define the Externalizer to 
> use (i.e. an interface method like getExternalizer()), but that's awkward when 
> it comes to unmarshalling… what about forcing the Metadata object to be 
> provided with a @SerializeWith annotation?
> 
> Why is getExternalizer() awkward for unmarshalling?

^ Because you don't have an instance yet, so what's the Externalizer for it? 
IOW, there's not much point in doing that; simply register it as 
appropriate:
https://docs.jboss.org/author/display/ISPN/Plugging+Infinispan+With+User+Defined+Externalizers

> I would expect you to have the marshaller already known during 
> deserialization.

You would, as long as you follow the instructions in 
https://docs.jboss.org/author/display/ISPN/Plugging+Infinispan+With+User+Defined+Externalizers
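
For reference, pre-registering per those instructions looks roughly like this;
MetadataExternalizer is a hypothetical AdvancedExternalizer implementation that
declares its own id and type classes:

import org.infinispan.configuration.global.GlobalConfiguration;
import org.infinispan.configuration.global.GlobalConfigurationBuilder;
import org.infinispan.manager.DefaultCacheManager;

public class ExternalizerRegistration {

   public static DefaultCacheManager newCacheManager() {
      GlobalConfiguration globalCfg = new GlobalConfigurationBuilder()
         .serialization()
            // Register the externalizer up front, so unmarshalling never
            // needs an instance of the target type to find it.
            .addAdvancedExternalizer(new MetadataExternalizer())
         .build();
      return new DefaultCacheManager(globalCfg);
   }
}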

> Agreed that extending Serializable is not a good idea.
> 
> Are you thinking about the impact on CacheStore(s) and state transfer?

^ What about it in particular?

> Eviction of no longer used metadata ?

^ Since the metadata is part of the entry, it'd initially go when the entry is 
evicted. We might wanna leave it around in some cases… but that'd be for other 
use cases.

I'm also considering separating the serialization/marshalling concerns from the 
defensive copying concerns. IOW, add a copy() method to the Metadata interface, 
or have a separate interface for those externally provided objects that require 
defensive copying. IOW, do something like what Scala case classes do with 
their copy() method, but without the issues of clone… I need to investigate 
this further to come up with a nice solution.

One positive side to splitting both concerns is speed. A Metadata 
implementation might have ways to make a copy of itself which are more 
efficient than marshalling/unmarshalling.

Thoughts?
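
A rough sketch of that split, just to show the direction; names are 
illustrative, not a final API:

// The copy concern lives in its own interface, so Metadata implementations
// that are already immutable can skip it entirely.
public interface Metadata {
   long lifespan();
   long maxIdle();
   // version, etc.
}

// Implemented only by externally provided, potentially mutable metadata.
public interface CopyableMetadata extends Metadata {
   // Returns a defensive copy the cache can safely hold onto, typically
   // much cheaper than a marshal/unmarshal round trip.
   Metadata copy();
}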

> 
> Sanne
>  
> 
> Any other ideas?
> 
> >
> > Cheers,
> >
> >>
> >> On 8 Apr 2013, at 09:24, Galder Zamarreño  wrote:
> >>
> >>> Hi all,
> >>>
> >>> As mentioned in 
> >>> http://lists.jboss.org/pipermail/infinispan-dev/2013-March/012348.html, 
> >>> in parallel to the switch to Equivalent* collections, I was also working 
> >>> on being able to pass metadata into Infinispan caches. This is done to 
> >>> better support the ability to store custom metadata in Infinispan without 
> >>> the need of extra wrappers. So, the idea is that InternalCacheEntry 
> >>> instances will have a reference to this Metadata.
> >>>
> >>> One piece of that metadata is the version, which I've been using as a test 
> >>> bed to see if clients could successfully pass version information via 
> >>> metadata. As you already know, Hot Rod requires version information to be stored. Before, 
> >>> this was stored in a class called CacheValue alongside the value itself, 
> >>> but with the work I've done in [1], this is now passed via the new API I've added 
> >>> in [2].
> >>>
> >>> So, I'd like to get some thoughts on this new API. I hope that with these 
> >>> new put/replace versions, we can get rid of the nightmare which is all 
> >>> the other put/replace calls taking lifespan and/or maxIdle information. 
> >>> In the end, I think there should be two basic puts:
> >>>
> >>> - put(K, V)
> >>> - put(K, V, Metadata)
> >>>
> >>> And their equivalents.
> >>>
> >>> IMPORTANT NOTE 1: The implementation details are bound to change, because 
> >>> the entire Metadata needs to be stored in InternalCacheEntry, not just 
> >>> version, lifespan..etc. I'll further develop the implementation once I 
> >>> get into adding more metadata, i.e. when working on interoperability with 
> >>> REST. So, don't pay too much attention to the implementation itself, 
> >>> focus on the AdvancedCache API itself and let's refine that.
> >>>
> >>> IMPORTANT NOTE 2: The interoperability work in commit in [1] is WIP, so 
> >>> please let's avoid discussing it in this email thread. Once I have a 

Re: [infinispan-dev] AdvancedCache.put with Metadata parameter

2013-04-08 Thread Sanne Grinovero
On 8 April 2013 11:44, Galder Zamarreño  wrote:

>
> On Apr 8, 2013, at 12:35 PM, Galder Zamarreño  wrote:
>
> >
> > On Apr 8, 2013, at 11:17 AM, Manik Surtani  wrote:
> >
> >> All sounds very good. One important thing to consider is that the
> reference to Metadata passed in by the client app will be tied to the ICE
> for the entire lifespan of the ICE.  You'll need to think about a defensive
> copy or some other form of making the Metadata immutable (by the user
> application, at least) the moment it is passed in.
> >
> > ^ Excellent point, it could be a nightmare if users could change the
> metadata referenced by the ICE at will. I'll have a think on how to best 
> achieve this.
>
> ^ The metadata is gonna have to be marshalled somehow to ship to other
> nodes, so that could be a way to achieve it, by enforcing this somehow.
> When the cache receives it, it can marshal/unmarshal it to make a copy
>
> One way would be to make Metadata extend Serializable, but not keen on
> that. Another would be to somehow force the interface to define the
> Externalizer to use (i.e. an interface method like getExternalizer()), but
> that's awkward when it comes to unmarshalling… what about forcing the 
> Metadata object to be provided with a @SerializeWith annotation?
>

Why is getExternalizer() awkward for unmarshalling? I would expect you to
have the marshaller already known during deserialization.
Agreed that extending Serializable is not a good idea.

Are you thinking about the impact on CacheStore(s) and state transfer?
Eviction of no longer used metadata ?

Sanne


>
> Any other ideas?
>
> >
> > Cheers,
> >
> >>
> >> On 8 Apr 2013, at 09:24, Galder Zamarreño  wrote:
> >>
> >>> Hi all,
> >>>
> >>> As mentioned in
> http://lists.jboss.org/pipermail/infinispan-dev/2013-March/012348.html,
> in parallel to the switch to Equivalent* collections, I was also working on
> being able to pass metadata into Infinispan caches. This is done to better
> support the ability to store custom metadata in Infinispan without the need
> of extra wrappers. So, the idea is that InternalCacheEntry instances will
> have a reference to this Metadata.
> >>>
> >>> One piece of that metadata is the version, which I've been using as a test
> bed to see if clients could successfully pass version information via metadata.
> As you already know, Hot Rod requires version information to be stored. Before,
> this was stored in a class called CacheValue alongside the value itself,
> but with the work I've done in [1], this is now passed via the new API I've added in
> [2].
> >>>
> >>> So, I'd like to get some thoughts on this new API. I hope that with
> these new put/replace versions, we can get rid of the nightmare which is
> all the other put/replace calls taking lifespan and/or maxIdle information.
> In the end, I think there should be two basic puts:
> >>>
> >>> - put(K, V)
> >>> - put(K, V, Metadata)
> >>>
> >>> And their equivalents.
> >>>
> >>> IMPORTANT NOTE 1: The implementation details are bound to change,
> because the entire Metadata needs to be stored in InternalCacheEntry, not
> just version, lifespan..etc. I'll further develop the implementation once I
> get into adding more metadata, i.e. when working on interoperability with
> REST. So, don't pay too much attention to the implementation itself, focus
> on the AdvancedCache API itself and let's refine that.
> >>>
> >>> IMPORTANT NOTE 2: The interoperability work in commit in [1] is WIP,
> so please let's avoid discussing it in this email thread. Once I have a
> more final version I'll send an email about it.
> >>>
> >>> Apart from working on enhancements to the API, I'm now carrying on with
> the interoperability work, with the aim of having an initial version of
> the Embedded <-> Hot Rod interoperability as a first step. Once that's in, it
> can be released to get early feedback while the rest of interoperability
> modes are developed.
> >>>
> >>> Cheers,
> >>>
> >>> [1]
> https://github.com/galderz/infinispan/commit/a35956fe291d2b2dc3b7fa7bf44d8965ffb1a54d
> >>> [2]
> https://github.com/galderz/infinispan/commit/a35956fe291d2b2dc3b7fa7bf44d8965ffb1a54d#L10R313
> >>> --
> >>> Galder Zamarreño
> >>> gal...@redhat.com
> >>> twitter.com/galderz
> >>>
> >>> Project Lead, Escalante
> >>> http://escalante.io
> >>>
> >>> Engineer, Infinispan
> >>> http://infinispan.org
> >>>
> >>>
> >>> ___
> >>> infinispan-dev mailing list
> >>> infinispan-dev@lists.jboss.org
> >>> https://lists.jboss.org/mailman/listinfo/infinispan-dev
> >>
> >> --
> >> Manik Surtani
> >> ma...@jboss.org
> >> twitter.com/maniksurtani
> >>
> >> Platform Architect, JBoss Data Grid
> >> http://red.ht/data-grid
> >>
> >>
> >> ___
> >> infinispan-dev mailing list
> >> infinispan-dev@lists.jboss.org
> >> https://lists.jboss.org/mailman/listinfo/infinispan-dev
> >
> >
> > --
> > Galder Zamarreño
> > gal...@redhat.com
> > twitter.com/galderz
> >
> > Project Lead, Escalante
>

Re: [infinispan-dev] AdvancedCache.put with Metadata parameter

2013-04-08 Thread Galder Zamarreño

On Apr 8, 2013, at 12:35 PM, Galder Zamarreño  wrote:

> 
> On Apr 8, 2013, at 11:17 AM, Manik Surtani  wrote:
> 
>> All sounds very good. One important thing to consider is that the reference 
>> to Metadata passed in by the client app will be tied to the ICE for the 
>> entire lifespan of the ICE.  You'll need to think about a defensive copy or 
>> some other form of making the Metadata immutable (by the user application, 
>> at least) the moment it is passed in.
> 
> ^ Excellent point, it could be a nightmare if users could change the metadata 
referenced by the ICE at will. I'll have a think on how to best achieve this.

^ The metadata is gonna have to be marshalled somehow to ship to other nodes, 
so that could be a way to achieve it, by enforcing this somehow. When the cache 
receives it, it can marshal/unmarshal it to make a copy

One way would be to make Metadata extend Serializable, but not keen on that. 
Another would be to somehow force the interface to define the Externalizer to 
use (i.e. an interface method like getExternalizer()), but that's awkward when 
it comes to unmarshalling… what about forcing the Metadata object to be 
provided with a @SerializeWith annotation?

Any other ideas?

> 
> Cheers,
> 
>> 
>> On 8 Apr 2013, at 09:24, Galder Zamarreño  wrote:
>> 
>>> Hi all,
>>> 
>>> As mentioned in 
>>> http://lists.jboss.org/pipermail/infinispan-dev/2013-March/012348.html, in 
>>> parallel to the switch to Equivalent* collections, I was also working on 
>>> being able to pass metadata into Infinispan caches. This is done to better 
>>> support the ability to store custom metadata in Infinispan without the need 
>>> of extra wrappers. So, the idea is that InternalCacheEntry instances will 
>>> have a reference to this Metadata.
>>> 
>>> One piece of that metadata is the version, which I've been using as a test 
>>> bed to see if clients could successfully pass version information via 
>>> metadata. As you already know, Hot Rod requires version information to be 
>>> stored. Before, this was stored in a class called CacheValue alongside the 
>>> value itself, but with the work I've done in [1], this is now passed via the 
>>> new API I've added in [2].
>>> 
>>> So, I'd like to get some thoughts on this new API. I hope that with these 
>>> new put/replace versions, we can get rid of the nightmare which is all the 
>>> other put/replace calls taking lifespan and/or maxIdle information. In the 
>>> end, I think there should be two basic puts:
>>> 
>>> - put(K, V)
>>> - put(K, V, Metadata)
>>> 
>>> And their equivalents.
>>> 
>>> IMPORTANT NOTE 1: The implementation details are bound to change, because 
>>> the entire Metadata needs to be stored in InternalCacheEntry, not just 
>>> version, lifespan..etc. I'll further develop the implementation once I get 
>>> into adding more metadata, i.e. when working on interoperability with REST. 
>>> So, don't pay too much attention to the implementation itself, focus on the 
>>> AdvancedCache API itself and let's refine that.
>>> 
>>> IMPORTANT NOTE 2: The interoperability work in commit in [1] is WIP, so 
>>> please let's avoid discussing it in this email thread. Once I have a more 
>>> final version I'll send an email about it.
>>> 
>>> Apart from working on enhancements to the API, I'm now carrying on with 
>>> the interoperability work, with the aim of having an initial version of the 
>>> Embedded <-> Hot Rod interoperability as a first step. Once that's in, it can 
>>> be released to get early feedback while the rest of interoperability modes 
>>> are developed.
>>> 
>>> Cheers,
>>> 
>>> [1] 
>>> https://github.com/galderz/infinispan/commit/a35956fe291d2b2dc3b7fa7bf44d8965ffb1a54d
>>> [2] 
>>> https://github.com/galderz/infinispan/commit/a35956fe291d2b2dc3b7fa7bf44d8965ffb1a54d#L10R313
>>> --
>>> Galder Zamarreño
>>> gal...@redhat.com
>>> twitter.com/galderz
>>> 
>>> Project Lead, Escalante
>>> http://escalante.io
>>> 
>>> Engineer, Infinispan
>>> http://infinispan.org
>>> 
>>> 
>>> ___
>>> infinispan-dev mailing list
>>> infinispan-dev@lists.jboss.org
>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev
>> 
>> --
>> Manik Surtani
>> ma...@jboss.org
>> twitter.com/maniksurtani
>> 
>> Platform Architect, JBoss Data Grid
>> http://red.ht/data-grid
>> 
>> 
>> ___
>> infinispan-dev mailing list
>> infinispan-dev@lists.jboss.org
>> https://lists.jboss.org/mailman/listinfo/infinispan-dev
> 
> 
> --
> Galder Zamarreño
> gal...@redhat.com
> twitter.com/galderz
> 
> Project Lead, Escalante
> http://escalante.io
> 
> Engineer, Infinispan
> http://infinispan.org
> 


--
Galder Zamarreño
gal...@redhat.com
twitter.com/galderz

Project Lead, Escalante
http://escalante.io

Engineer, Infinispan
http://infinispan.org


___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev


Re: [infinispan-dev] query repl timeout

2013-04-08 Thread Manik Surtani

On 8 Apr 2013, at 11:28, Ales Justin  wrote:

> This "jgroups" backend was there "long" ago.
> And it was actually us - CD - that fixed it and made use of it.
> It's no different from the static JGroups backend; the only difference is that 
> this one elects the master automatically.
> 
> I can change to Sanne's new Ispn based prototype if it will help.
> 
> But - with my limited cluster knowledge - the issue doesn't look to be there.
> I mean, the messages get properly routed to the indexing master, which just cannot 
> handle locking contention.

Any thoughts on this, Sanne?

> 
> -Ales
> 
>> I believe this new backend is WIP in Hibernate Search.  Sanne, didn't you 
>> have a prototype in Infinispan's codebase though?
>> 
>> On 5 Apr 2013, at 15:28, Ales Justin  wrote:
>> 
 are you not using the JGroups backend anymore?
>>> 
>>> I'm using that "jgroups" backend, with auto-master election.
>>> 
 these Lock acquisitions are on the index lock, and make me suspect your 
 configuration is no longer applying the pattern we discussed a while back, 
 when you contributed the fix to the JGroups indexing backend.
 
 Or is it the "Replication timeout for mstruk/capedwarf" which is causing 
 those locking errors?
>>> 
>>> No idea.
>>> 
>>> btw: didn't you say you had some new backend mechanism?
>>> Off Infinispan's channel.
>>> 
>>> -Ales
>>> 
 On 5 April 2013 14:56, Ales Justin  wrote:
 We're running a GAE HelloOrm2 example app on 3 nodes (3 laptops).
 
 Very soon after deploy, we get a never-ending stack of timeouts,
 which completely kills the app:
 * https://gist.github.com/alesj/5319414
 
 I then need to kill the AS in order to get it shutdown.
 
 How can this be tuned / fixed?
 
 -Ales
 
 
 ___
 infinispan-dev mailing list
 infinispan-dev@lists.jboss.org
 https://lists.jboss.org/mailman/listinfo/infinispan-dev
 
 ___
 infinispan-dev mailing list
 infinispan-dev@lists.jboss.org
 https://lists.jboss.org/mailman/listinfo/infinispan-dev
>>> 
>>> ___
>>> infinispan-dev mailing list
>>> infinispan-dev@lists.jboss.org
>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev
>> 
>> --
>> Manik Surtani
>> ma...@jboss.org
>> twitter.com/maniksurtani
>> 
>> Platform Architect, JBoss Data Grid
>> http://red.ht/data-grid
>> 
>> ___
>> infinispan-dev mailing list
>> infinispan-dev@lists.jboss.org
>> https://lists.jboss.org/mailman/listinfo/infinispan-dev
> 
> ___
> infinispan-dev mailing list
> infinispan-dev@lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/infinispan-dev

--
Manik Surtani
ma...@jboss.org
twitter.com/maniksurtani

Platform Architect, JBoss Data Grid
http://red.ht/data-grid

___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev

Re: [infinispan-dev] AdvancedCache.put with Metadata parameter

2013-04-08 Thread Galder Zamarreño

On Apr 8, 2013, at 11:17 AM, Manik Surtani  wrote:

> All sounds very good. One important thing to consider is that the reference 
> to Metadata passed in by the client app will be tied to the ICE for the 
> entire lifespan of the ICE.  You'll need to think about a defensive copy or 
> some other form of making the Metadata immutable (by the user application, at 
> least) the moment it is passed in.

^ Excellent point, it could be a nightmare if users could change the metadata 
referenced by the ICE at will. I'll have a think on how to best achieve this.

Cheers,

> 
> On 8 Apr 2013, at 09:24, Galder Zamarreño  wrote:
> 
>> Hi all,
>> 
>> As mentioned in 
>> http://lists.jboss.org/pipermail/infinispan-dev/2013-March/012348.html, in 
>> parallel to the switch to Equivalent* collections, I was also working on 
>> being able to pass metadata into Infinispan caches. This is done to better 
>> support the ability to store custom metadata in Infinispan without the need 
>> of extra wrappers. So, the idea is that InternalCacheEntry instances will 
>> have a reference to this Metadata.
>> 
>> One piece of that metadata is the version, which I've been using as a test 
>> bed to see if clients could successfully pass version information via 
>> metadata. As you already know, Hot Rod requires version information to be 
>> stored. Before, this was stored in a class called CacheValue alongside the 
>> value itself, but with the work I've done in [1], this is now passed via the 
>> new API I've added in [2].
>> 
>> So, I'd like to get some thoughts on this new API. I hope that with these 
>> new put/replace versions, we can get rid of the nightmare which is all the 
>> other put/replace calls taking lifespan and/or maxIdle information. In the 
>> end, I think there should be two basic puts:
>> 
>> - put(K, V)
>> - put(K, V, Metadata)
>> 
>> And their equivalents.
>> 
>> IMPORTANT NOTE 1: The implementation details are bound to change, because 
>> the entire Metadata needs to be stored in InternalCacheEntry, not just 
>> version, lifespan..etc. I'll further develop the implementation once I get 
>> into adding more metadata, i.e. when working on interoperability with REST. 
>> So, don't pay too much attention to the implementation itself, focus on the 
>> AdvancedCache API itself and let's refine that.
>> 
>> IMPORTANT NOTE 2: The interoperability work in commit in [1] is WIP, so 
>> please let's avoid discussing it in this email thread. Once I have a more 
>> final version I'll send an email about it.
>> 
>> Apart from working on enhancements to the API, I'm now carrying on with the 
>> interoperability work, with the aim of having an initial version of the Embedded 
>> <-> Hot Rod interoperability as a first step. Once that's in, it can be 
>> released to get early feedback while the rest of interoperability modes are 
>> developed.
>> 
>> Cheers,
>> 
>> [1] 
>> https://github.com/galderz/infinispan/commit/a35956fe291d2b2dc3b7fa7bf44d8965ffb1a54d
>> [2] 
>> https://github.com/galderz/infinispan/commit/a35956fe291d2b2dc3b7fa7bf44d8965ffb1a54d#L10R313
>> --
>> Galder Zamarreño
>> gal...@redhat.com
>> twitter.com/galderz
>> 
>> Project Lead, Escalante
>> http://escalante.io
>> 
>> Engineer, Infinispan
>> http://infinispan.org
>> 
>> 
>> ___
>> infinispan-dev mailing list
>> infinispan-dev@lists.jboss.org
>> https://lists.jboss.org/mailman/listinfo/infinispan-dev
> 
> --
> Manik Surtani
> ma...@jboss.org
> twitter.com/maniksurtani
> 
> Platform Architect, JBoss Data Grid
> http://red.ht/data-grid
> 
> 
> ___
> infinispan-dev mailing list
> infinispan-dev@lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/infinispan-dev


--
Galder Zamarreño
gal...@redhat.com
twitter.com/galderz

Project Lead, Escalante
http://escalante.io

Engineer, Infinispan
http://infinispan.org


___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev


Re: [infinispan-dev] query repl timeout

2013-04-08 Thread Ales Justin
This "jgroups" backend was there "long" ago.
And it was actually us - CD - that fixed it and made use of it.
It's no different from the static JGroups backend; the only difference is that 
this one elects the master automatically.

I can change to Sanne's new Ispn based prototype if it will help.

But - with my limited cluster knowledge - the issue doesn't look to be there.
I mean, the messages get properly routed to the indexing master, which just cannot 
handle locking contention.

-Ales

> I believe this new backend is WIP in Hibernate Search.  Sanne, didn't you 
> have a prototype in Infinispan's codebase though?
> 
> On 5 Apr 2013, at 15:28, Ales Justin  wrote:
> 
>>> are you not using the JGroups backend anymore?
>> 
>> I'm using that "jgroups" backend, with auto-master election.
>> 
>>> these Lock acquisitions are on the index lock, and make me suspect your 
>>> configuration is no longer applying the pattern we discussed a while back, 
>>> when you contributed the fix to the JGroups indexing backend.
>>> 
>>> Or is it the "Replication timeout for mstruk/capedwarf" which is causing 
>>> those locking errors?
>> 
>> No idea.
>> 
>> btw: didn't you say you had some new backend mechanism?
>> Off Infinispan's channel.
>> 
>> -Ales
>> 
>>> On 5 April 2013 14:56, Ales Justin  wrote:
>>> We're running a GAE HelloOrm2 example app on 3 nodes (3 laptops).
>>> 
>>> Very soon after deploy, we get a never-ending stack of timeouts,
>>> which completely kills the app:
>>> * https://gist.github.com/alesj/5319414
>>> 
>>> I then need to kill the AS in order to get it shutdown.
>>> 
>>> How can this be tuned / fixed?
>>> 
>>> -Ales
>>> 
>>> 
>>> ___
>>> infinispan-dev mailing list
>>> infinispan-dev@lists.jboss.org
>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev
>>> 
>>> ___
>>> infinispan-dev mailing list
>>> infinispan-dev@lists.jboss.org
>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev
>> 
>> ___
>> infinispan-dev mailing list
>> infinispan-dev@lists.jboss.org
>> https://lists.jboss.org/mailman/listinfo/infinispan-dev
> 
> --
> Manik Surtani
> ma...@jboss.org
> twitter.com/maniksurtani
> 
> Platform Architect, JBoss Data Grid
> http://red.ht/data-grid
> 
> ___
> infinispan-dev mailing list
> infinispan-dev@lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/infinispan-dev

___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev

Re: [infinispan-dev] query repl timeout

2013-04-08 Thread Manik Surtani
I believe this new backend is WIP in Hibernate Search.  Sanne, didn't you have 
a prototype in Infinispan's codebase though?

On 5 Apr 2013, at 15:28, Ales Justin  wrote:

>> are you not using the JGroups backend anymore?
> 
> I'm using that "jgroups" backend, with auto-master election.
> 
>> these Lock acquisitions are on the index lock, and make me suspect your 
>> configuration is no longer applying the pattern we discussed a while back, 
>> when you contributed the fix to the JGroups indexing backend.
>> 
>> Or is it the "Replication timeout for mstruk/capedwarf" which is causing 
>> those locking errors?
> 
> No idea.
> 
> btw: didn't you say you had some new backend mechanism?
> Off Infinispan's channel.
> 
> -Ales
> 
>> On 5 April 2013 14:56, Ales Justin  wrote:
>> We're running a GAE HelloOrm2 example app on 3 nodes (3 laptops).
>> 
>> Very soon after deploy, we get a never-ending stack of timeouts,
>> which completely kills the app:
>> * https://gist.github.com/alesj/5319414
>> 
>> I then need to kill the AS in order to get it shutdown.
>> 
>> How can this be tuned / fixed?
>> 
>> -Ales
>> 
>> 
>> ___
>> infinispan-dev mailing list
>> infinispan-dev@lists.jboss.org
>> https://lists.jboss.org/mailman/listinfo/infinispan-dev
>> 
>> ___
>> infinispan-dev mailing list
>> infinispan-dev@lists.jboss.org
>> https://lists.jboss.org/mailman/listinfo/infinispan-dev
> 
> ___
> infinispan-dev mailing list
> infinispan-dev@lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/infinispan-dev

--
Manik Surtani
ma...@jboss.org
twitter.com/maniksurtani

Platform Architect, JBoss Data Grid
http://red.ht/data-grid

___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev

Re: [infinispan-dev] AdvancedCache.put with Metadata parameter

2013-04-08 Thread Manik Surtani
All sounds very good. One important thing to consider is that the reference to 
Metadata passed in by the client app will be tied to the ICE for the entire 
lifespan of the ICE.  You'll need to think about a defensive copy or some other 
form of making the Metadata immutable (by the user application, at least) the 
moment it is passed in.

On 8 Apr 2013, at 09:24, Galder Zamarreño  wrote:

> Hi all,
> 
> As mentioned in 
> http://lists.jboss.org/pipermail/infinispan-dev/2013-March/012348.html, in 
> parallel to the switch to Equivalent* collections, I was also working on 
> being able to pass metadata into Infinispan caches. This is done to better 
> support the ability to store custom metadata in Infinispan without the need 
> of extra wrappers. So, the idea is that InternalCacheEntry instances will 
> have a reference to this Metadata.
> 
> One piece of that metadata is the version, which I've been using as a test 
> bed to see if clients could successfully pass version information via 
> metadata. As you already know, Hot Rod requires version information to be 
> stored. Before, this was stored in a class called CacheValue alongside the 
> value itself, but with the work I've done in [1], this is now passed via the 
> new API I've added in [2].
> 
> So, I'd like to get some thoughts on this new API. I hope that with these new 
> put/replace versions, we can get rid of the nightmare which is all the other 
> put/replace calls taking lifespan and/or maxIdle information. In the end, I 
> think there should be two basic puts:
> 
> - put(K, V)
> - put(K, V, Metadata)
> 
> And their equivalents.
> 
> IMPORTANT NOTE 1: The implementation details are bound to change, because the 
> entire Metadata needs to be stored in InternalCacheEntry, not just version, 
> lifespan..etc. I'll further develop the implementation once I get into adding 
> more metadata, i.e. when working on interoperability with REST. So, don't pay 
> too much attention to the implementation itself, focus on the AdvancedCache 
> API itself and let's refine that.
> 
> IMPORTANT NOTE 2: The interoperability work in commit in [1] is WIP, so 
> please let's avoid discussing it in this email thread. Once I have a more 
> final version I'll send an email about it.
> 
> Apart from working on enhancements to the API, I'm now carrying on with the 
> interoperability work, with the aim of having an initial version of the Embedded <-> 
> Hot Rod interoperability as a first step. Once that's in, it can be released to 
> get early feedback while the rest of interoperability modes are developed.
> 
> Cheers,
> 
> [1] 
> https://github.com/galderz/infinispan/commit/a35956fe291d2b2dc3b7fa7bf44d8965ffb1a54d
> [2] 
> https://github.com/galderz/infinispan/commit/a35956fe291d2b2dc3b7fa7bf44d8965ffb1a54d#L10R313
> --
> Galder Zamarreño
> gal...@redhat.com
> twitter.com/galderz
> 
> Project Lead, Escalante
> http://escalante.io
> 
> Engineer, Infinispan
> http://infinispan.org
> 
> 
> ___
> infinispan-dev mailing list
> infinispan-dev@lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/infinispan-dev

--
Manik Surtani
ma...@jboss.org
twitter.com/maniksurtani

Platform Architect, JBoss Data Grid
http://red.ht/data-grid


___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev


Re: [infinispan-dev] How to run the testsuite?

2013-04-08 Thread Manik Surtani

On 5 Apr 2013, at 17:01, Mircea Markus  wrote:

> I've upgraded to mvn 2.14

You mean Surefire 2.14. You had me confused for a bit, since I'm pretty sure we 
enforce mvn 3.x.  ;)
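
For reference, the forkCount settings mentioned below live in the
maven-surefire-plugin configuration, roughly like this (a sketch against
Surefire 2.14):

<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-surefire-plugin</artifactId>
  <version>2.14</version>
  <configuration>
    <!-- one new JVM per module, reused for all tests in that module -->
    <forkCount>1</forkCount>
    <reuseForks>true</reuseForks>
  </configuration>
</plugin>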

> and configured: forkCount=1/reuseForks=true (default settings)  "which means 
> that Surefire creates *one new JVM* process to execute all tests in one 
> *maven module*." [1]
> 
> It doesn't do that: it reuses the same process to execute all the tests in 
> *all* modules. 
> 
> [1] 
> http://maven.apache.org/surefire/maven-surefire-plugin/examples/fork-options-and-parallel-execution.html
> 
> On 20 Mar 2013, at 15:29, Adrian Nistor wrote:
> 
>> I've also tried changing the fork mode of surefire from 'none' to 'once' and 
>> the entire suite runs fine now on JVM 1.6 with 500MB MaxPermSize. 
>> Previously it did not complete; 500MB was not enough.
>> Anyone know why surefire was not allowed to fork?
>> 
>> Haven't tried to analyze closely the heap yet but first thing I noticed is 
>> 15% of it is occupied by 19 ComponentMetadataRepo instances, which 
>> probably is not the root cause of this issue, but is odd anyway :).
>> 
>> On 03/20/2013 05:12 PM, Dan Berindei wrote:
>>> The problem is that we still leak threads in almost every module, and that 
>>> means we keep a copy of the core classes (and all their dependencies) for 
>>> every module. Of course, some modules' dependencies are already oversized, 
>>> so keeping only one copy is already too much...
>>> 
>>> I admit I don't run the whole test suite too often either, but I recently 
>>> changed the Cloudbees settings to get rid of the OOM there. It uses about 
>>> 550MB of permgen by the end of the test suite, without 
>>> -XX:+UseCompressedOops. These are the settings I used:
>>> 
>>> -server -XX:+UseConcMarkSweepGC -XX:+CMSIncrementalMode -XX:+UseParNewGC 
>>> -XX:+CMSClassUnloadingEnabled   -XX:NewRatio=4 -Xss500k -Xms100m -Xmx900m 
>>> -XX:MaxPermSize=700M
>>> 
>>> 
>>> Cheers
>>> Dan
>>> 
>>> 
>>> 
>>> On Wed, Mar 20, 2013 at 2:59 PM, Tristan Tarrant  
>>> wrote:
>>> Sanne, turn on CompressedOops ? Still those requirements are indeed
>>> ridiculous.
>>> 
>>> Tristan
>>> 
>>> On 03/20/2013 01:27 PM, Sanne Grinovero wrote:
 I'm testing master, at da5c3f0
 
 Just killed a run which was using
 
 java version "1.7.0_17"
 Java(TM) SE Runtime Environment (build 1.7.0_17-b02)
 Java HotSpot(TM) 64-Bit Server VM (build 23.7-b01, mixed mode)
 
 this time again an OOM (while I have 2GB !), last sign of life came
 from the "Rolling Upgrade Tooling"
 
 I'm not going to merge/review any pull request until this works.
 
 Sanne
 
 On 20 March 2013 12:09, Mircea Markus  wrote:
> I've just run it on master and didn't get OOM. well I'm using osx. Are 
> you running it on master or a particular branch? Which module crashes?
> e.g. pedro's ISPN-2808 adds quite some threads to the party - that's the 
> reason it hasn't been integrated yet.
> 
> On 20 Mar 2013, at 11:40, Sanne Grinovero wrote:
> 
>> Hi all,
>> after reviewing some pull requests, I've been unable for a couple of days
>> to run the testsuite; since Anna's fixes affect many modules I'm
>> trying to run the testsuite of the whole project, as we should always
>> do but I admit I haven't done it in a while because of the core module
>> failures.
>> 
>> So I run:
>> $ mvn -fn clean install
>> 
>> using -fn to have it continue after the core failures.
>> 
>> First attempt gave me an OOM, was running with 1G heap.. I'm pretty
>> sure this was good enough some months back.
>> 
>> Second attempt slowed down like crazy, and I found a warning about
>> having filled the code cache size, so doubled it to 200M.
>> 
>> Third attempt: OutOfMemoryError: PermGen space! But I'm running with
>> -XX:MaxPermSize=380M which should be plenty?
>> 
>> This is :
>> java version "1.6.0_43"
>> Java(TM) SE Runtime Environment (build 1.6.0_43-b01)
>> Java HotSpot(TM) 64-Bit Server VM (build 20.14-b01, mixed mode)
>> 
>> MAVEN_OPTS=-Xmx2G -XX:MaxPermSize=380M -XX:+TieredCompilation
>> -Djava.net.preferIPv4Stack=true -Djgroups.bind_addr=127.0.0.1
>> -XX:ReservedCodeCacheSize=200M
>> -Dlog4j.configuration=file:/opt/infinispan-log4j.xml
>> 
>> My custom log configuration just disables trace & debug.
>> 
>> Going to try now with larger PermGen and different JVMs but it looks
>> quite bad.. any other suggestion?
>> (I do have the security limits setup properly)
>> 
>> Sanne
>> ___
>> infinispan-dev mailing list
>> infinispan-dev@lists.jboss.org
>> https://lists.jboss.org/mailman/listinfo/infinispan-dev
> Cheers,
> --
> Mircea Markus
> Infinispan lead (www.infinispan.org)
> 
> 
> 
> 
> 

[infinispan-dev] AdvancedCache.put with Metadata parameter

2013-04-08 Thread Galder Zamarreño
Hi all,

As mentioned in 
http://lists.jboss.org/pipermail/infinispan-dev/2013-March/012348.html, in 
parallel to the switch to Equivalent* collections, I was also working on being 
able to pass metadata into Infinispan caches. This is done to better support 
the ability to store custom metadata in Infinispan without the need of extra 
wrappers. So, the idea is that InternalCacheEntry instances will have a 
reference to this Metadata.

One piece of that metadata is the version, which I've been using as a test bed 
to see if clients could successfully pass version information via metadata. As 
you already know, Hot Rod requires version information to be stored. Before, 
this was stored in a class called CacheValue alongside the value itself, but 
with the work I've done in [1], this is now passed via the new API I've added 
in [2].

So, I'd like to get some thoughts on this new API. I hope that with these new 
put/replace versions, we can get rid of the nightmare which is all the other 
put/replace calls taking lifespan and/or maxIdle information. In the end, I 
think there should be two basic puts:

- put(K, V)
- put(K, V, Metadata)

And their equivalents.
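
To give a feel for the API, usage would be along these lines. This is only a
sketch against the WIP API in [2]; HotRodMetadata is a hypothetical Metadata
implementation carrying the Hot Rod version plus expiration, replacing the old
CacheValue wrapper:

import java.util.concurrent.TimeUnit;

import org.infinispan.AdvancedCache;
import org.infinispan.manager.EmbeddedCacheManager;

public class MetadataPutExample {

   public static void store(EmbeddedCacheManager cacheManager, String key, byte[] value) {
      AdvancedCache<String, byte[]> cache =
            cacheManager.<String, byte[]>getCache("default").getAdvancedCache();
      Metadata metadata = new HotRodMetadata(
            1L,                           // entry version, as Hot Rod requires
            TimeUnit.MINUTES.toMillis(5), // lifespan
            -1L);                         // maxIdle: disabled
      cache.put(key, value, metadata);
   }
}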

IMPORTANT NOTE 1: The implementation details are bound to change, because the 
entire Metadata needs to be stored in InternalCacheEntry, not just version, 
lifespan..etc. I'll further develop the implementation once I get into adding 
more metadata, i.e. when working on interoperability with REST. So, don't pay 
too much attention to the implementation itself, focus on the AdvancedCache API 
itself and let's refine that.

IMPORTANT NOTE 2: The interoperability work in commit in [1] is WIP, so please 
let's avoid discussing it in this email thread. Once I have a more final 
version I'll send an email about it.

Apart from working on enhancements to the API, I'm now carrying on with the 
interoperability work, with the aim of having an initial version of the Embedded 
<-> Hot Rod interoperability as a first step. Once that's in, it can be released to 
get early feedback while the rest of interoperability modes are developed.

Cheers,

[1] 
https://github.com/galderz/infinispan/commit/a35956fe291d2b2dc3b7fa7bf44d8965ffb1a54d
[2] 
https://github.com/galderz/infinispan/commit/a35956fe291d2b2dc3b7fa7bf44d8965ffb1a54d#L10R313
--
Galder Zamarreño
gal...@redhat.com
twitter.com/galderz

Project Lead, Escalante
http://escalante.io

Engineer, Infinispan
http://infinispan.org


___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev


Re: [infinispan-dev] would this work?

2013-04-08 Thread Galder Zamarreño

On Apr 4, 2013, at 2:49 PM, Mircea Markus  wrote:

> Hi,
> 
> Once we have the x-protocols data access in place, do you see any issues with 
> this approach:
> - hotrod java client + Avro writes an "Person" object
> - memcached C++ client reads the binary representation from the server and 
> deserialize it with Avro into a C++ "Person" object

^ Hmmm, any Avro work I did was based around basic types and collections of 
basic types. For custom types, Google Protocol Buffers is probably better, as it 
allows you to define the structure of types via .proto files and generate the 
corresponding code for C++/Java/Python.
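
For instance, with Protocol Buffers the shared contract would be a small .proto
file from which both the Java and C++ Person classes are generated; the fields
below are illustrative only:

// person.proto -- compile with: protoc --java_out=. --cpp_out=. person.proto
message Person {
  required string name = 1;
  optional int32 age = 2;
  optional string email = 3;
}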

Also, I have not really looked at the feasibility of an Avro marshaller for C/C++.

> - would this work with REST as well?

^ As long as the client code (whatever language that is) calling REST can 
transform the payload, then yes.

Cheers,

> 
> Cheers,
> -- 
> Mircea Markus
> Infinispan lead (www.infinispan.org)
> 
> 
> 
> 


--
Galder Zamarreño
gal...@redhat.com
twitter.com/galderz

Project Lead, Escalante
http://escalante.io

Engineer, Infinispan
http://infinispan.org


___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev