Re: [infinispan-dev] JGroupsDistSync and ISPN-83

2011-05-17 Thread Vladimir Blagojevic
Apparently I did not understand the semantics of RWL. I thought that a 
writer could obtain the write lock even though a read lock had not been 
released. But no: only after all read locks have been released can a 
writer obtain the write lock.
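
To illustrate, here is a minimal, self-contained example (plain 
java.util.concurrent, no Infinispan involved) showing that 
ReentrantReadWriteLock never grants the write lock while any read lock 
is held, not even to the thread holding the read lock:

import java.util.concurrent.locks.ReentrantReadWriteLock;

public class RwlUpgradeDemo {
   public static void main(String[] args) {
      ReentrantReadWriteLock rwl = new ReentrantReadWriteLock();
      rwl.readLock().lock();
      // Fails (prints false): the write lock is unavailable until every
      // read lock is released, so a read->write "upgrade" cannot happen.
      System.out.println("upgrade while reading: " + rwl.writeLock().tryLock());
      rwl.readLock().unlock();
      // Succeeds now that no read locks are held.
      System.out.println("write after release:   " + rwl.writeLock().tryLock());
   }
}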



On 11-05-16 11:18 PM, Sanne Grinovero wrote:
 Same result here - were you expecting something different?

 Cheers,
 Sanne

 2011/5/16 Erik Salter esal...@bnivideo.com:
 EDIT:  Originally posted in the wrong thread.  I blame Outlook.

 I guess I qualify as "others", since I'm looking at similar issues.

 Got read lock
 java.util.concurrent.TimeoutException: Thread-1 could not obtain exclusive 
 processing lock after 3 seconds.  Locks in question are 
 java.util.concurrent.locks.ReentrantReadWriteLock$ReadLock@16aa37a6[Read 
 locks = 1] and 
 java.util.concurrent.locks.ReentrantReadWriteLock$WriteLock@12b7eea[Unlocked]

 This is on the latest master.

 Erik

 -Original Message-
 From: infinispan-dev-boun...@lists.jboss.org 
 [mailto:infinispan-dev-boun...@lists.jboss.org] On Behalf Of Vladimir 
 Blagojevic
 Sent: Monday, May 16, 2011 4:25 PM
 To: infinispan -Dev List
 Cc: Manik Surtani
 Subject: Re: [infinispan-dev] JGroupsDistSync and ISPN-83

 Manik (and others),

 Can you run this code on your laptops and let me know what happens!

 Vladimir

 public static void main(String[] arg) throws Exception {
    final JGroupsDistSync ds = new JGroupsDistSync();

    // The main thread takes the shared (read) lock and never releases it.
    ds.acquireProcessingLock(false, 3, TimeUnit.SECONDS);
    System.out.println("Got read lock");

    // A second thread then asks for the exclusive (write) lock; it times
    // out because the read lock above is still held.
    Thread t = new Thread() {
       public void run() {
          try {
             ds.acquireProcessingLock(true, 3, TimeUnit.SECONDS);
             System.out.println("Got write lock");
          } catch (TimeoutException e) {
             System.out.println(e);
          }
       }
    };
    t.start();
 }



 On 11-05-13 4:53 PM, Manik Surtani wrote:
 Yes, please have a look. If we are relying on lock upgrades then that's 
 really bad. I am aware of the inability to (safely) upgrade a RWL and I'm 
 pretty sure we don't try, but the dist sync codebase has evolved a lot and 
 could do with some careful analysis.

 Sent from my mobile phone

 On 12 May 2011, at 09:24, Vladimir Blagojevic vblag...@redhat.com wrote:

 On 11-05-11 11:23 AM, Dan Berindei wrote:
 If ReentrantReadWriteLock allowed upgrades, then you would get a
 deadlock when two threads both held the read lock and tried to upgrade
 to a write lock at the same time.
 There's always a trade-off...

 I'm not familiar with the code, but are you sure the read lock is
 being held by the same thread that is trying to acquire the write
 lock?

 Not sure, and it sounds counter-intuitive that a thread holding a read
 lock from a cluster invocation is doing state generation for state
 transfer as well. But this lock business is fishy and I plan to get
 to the bottom of it...

 ___
 infinispan-dev mailing list
 infinispan-dev@lists.jboss.org
 https://lists.jboss.org/mailman/listinfo/infinispan-dev
 ___
 infinispan-dev mailing list
 infinispan-dev@lists.jboss.org
 https://lists.jboss.org/mailman/listinfo/infinispan-dev
 ___
 infinispan-dev mailing list
 infinispan-dev@lists.jboss.org
 https://lists.jboss.org/mailman/listinfo/infinispan-dev

 The information contained in this message is legally privileged and 
 confidential, and is intended for the individual or entity to whom it is 
 addressed (or their designee). If this message is read by anyone other than 
 the intended recipient, please be advised that distribution of this message, 
 in any form, is strictly prohibited. If you have received this message in 
 error, please notify the sender immediately and delete or destroy all copies 
 of this message.

 ___
 infinispan-dev mailing list
 infinispan-dev@lists.jboss.org
 https://lists.jboss.org/mailman/listinfo/infinispan-dev

 ___
 infinispan-dev mailing list
 infinispan-dev@lists.jboss.org
 https://lists.jboss.org/mailman/listinfo/infinispan-dev

___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev


Re: [infinispan-dev] Local state transfer before going over network

2011-05-17 Thread Galder Zamarreño

On May 16, 2011, at 1:18 PM, Sanne Grinovero wrote:

 2011/5/16 Galder Zamarreño gal...@redhat.com:
 Not sure if the idea has come up, but while at GeeCON last week I was 
 discussing state transfer improvements in replicated environments with 
 one of the attendees:
 
 The idea is that in a replicated environment, if a cache manager shuts down, 
 it would dump its memory contents to a cache store (e.g. a local filesystem), 
 and when it starts up, instead of going over the network to do state 
 transfer, it would load the state from the local filesystem, which would be 
 much quicker. Obviously, at times the cache manager would crash or have some 
 failure dumping the memory contents, so in that case it would fall back on 
 state transfer over the network. I think it's an interesting idea since it 
 could reduce the amount of state transfer to be done. It's true though that 
 there are other tricks if you're having issues with state transfer, such as 
 the use of a cluster cache loader.
 
 WDYT?
 
 Well if it's a shared cachestore, then we're using the network at some
 level anyway. If we're talking about a non-shared cachestore, how do
 you know which keys/values are still valid and which were not updated? And
 what about the new keys?

I see this only being useful with a local cache store cos if you need to go 
remote over the network, you might as well just do state transfer.

Not sure if the timestamp of creation/update is available for all entries (I'd 
need to check the code, but maybe immortals do not store it...), but anyway, 
assuming that a timestamp was stored in the local cache store, on startup the 
node could send this timestamp and the coordinator could send anything new 
created/updated after that timestamp.
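
In code, the startup handshake could look like the sketch below. Everything 
here is hypothetical -- neither highestTimestamp() nor entriesModifiedSince() 
exists today; they only illustrate the idea:

import java.util.Map;

interface TimestampedStore {
   long highestTimestamp();              // newest create/update persisted locally
   void store(Object key, Object value);
}

interface Coordinator {
   Map<Object, Object> entriesModifiedSince(long ts);  // delta, not full state
}

class DeltaStateTransfer {
   /** On restart: trust the local dump, then fetch only the delta. */
   static void restart(TimestampedStore store, Coordinator coord) {
      long lastSeen = store.highestTimestamp();
      // Only entries touched after our snapshot cross the network;
      // everything else is already in the local cache store.
      for (Map.Entry<Object, Object> e :
            coord.entriesModifiedSince(lastSeen).entrySet()) {
         store.store(e.getKey(), e.getValue());
      }
   }
}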

This would be particularly efficient in situations where you have to quickly 
restart a machine for whatever reason and so the deltas are very small, or when 
the caches are big and state transfer would cost a lot from a bandwidth 
perspective.

 
 I like the concept though, let's explore more in this direction.
 
 Sanne
 
 --
 Galder Zamarreño
 Sr. Software Engineer
 Infinispan, JBoss Cache
 
 
 ___
 infinispan-dev mailing list
 infinispan-dev@lists.jboss.org
 https://lists.jboss.org/mailman/listinfo/infinispan-dev
 
 
 ___
 infinispan-dev mailing list
 infinispan-dev@lists.jboss.org
 https://lists.jboss.org/mailman/listinfo/infinispan-dev

--
Galder Zamarreño
Sr. Software Engineer
Infinispan, JBoss Cache


___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev


Re: [infinispan-dev] Fluent configuration wiki update

2011-05-17 Thread Galder Zamarreño
Hmmm, I suppose you meant to answer a different thread? :)

On May 16, 2011, at 10:37 PM, Erik Salter wrote:

 I guess I qualify as "others", since I'm looking at similar issues.
 
 Got read lock
 java.util.concurrent.TimeoutException: Thread-1 could not obtain exclusive 
 processing lock after 3 seconds.  Locks in question are 
 java.util.concurrent.locks.ReentrantReadWriteLock$ReadLock@16aa37a6[Read 
 locks = 1] and 
 java.util.concurrent.locks.ReentrantReadWriteLock$WriteLock@12b7eea[Unlocked]
 
 This is on the latest master.
 
 Erik
 
 -Original Message-
 From: infinispan-dev-boun...@lists.jboss.org 
 [mailto:infinispan-dev-boun...@lists.jboss.org] On Behalf Of Vladimir 
 Blagojevic
 Sent: Monday, May 16, 2011 4:27 PM
 To: infinispan -Dev List
 Cc: Galder Zamarreño
 Subject: Re: [infinispan-dev] Fluent configuration wiki update
 
 On 11-05-16 5:15 PM, Galder Zamarreño wrote:
 Feedback taken onboard: http://community.jboss.org/docs/DOC-14839
 
 On May 16, 2011, at 12:05 PM, Vladimir Blagojevic wrote:
 
 Excellent wiki Galder! Finally worth its title ("Configuring cache
 programmatically")
 
 Cheers.
 ___
 infinispan-dev mailing list
 infinispan-dev@lists.jboss.org
 https://lists.jboss.org/mailman/listinfo/infinispan-dev
 
 The information contained in this message is legally privileged and 
 confidential, and is intended for the individual or entity to whom it is 
 addressed (or their designee). If this message is read by anyone other than 
 the intended recipient, please be advised that distribution of this message, 
 in any form, is strictly prohibited. If you have received this message in 
 error, please notify the sender immediately and delete or destroy all copies 
 of this message.
 
 ___
 infinispan-dev mailing list
 infinispan-dev@lists.jboss.org
 https://lists.jboss.org/mailman/listinfo/infinispan-dev

--
Galder Zamarreño
Sr. Software Engineer
Infinispan, JBoss Cache


___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev


Re: [infinispan-dev] Per-invocation flag wiki

2011-05-17 Thread Galder Zamarreño

On May 16, 2011, at 7:47 PM, Sanne Grinovero wrote:

 2011/5/16 Emmanuel Bernard emman...@hibernate.org:
 Couldn't you have a higher-level flag that says Flag.IGNORE_RETURN_VALUE so 
 that people w/o a PhD can benefit from the feature?
 
 when I discovered that, that was my exact thought as well.
 
 then we started discussing a proper interface to avoid return
 values... it should be typesafe (i.e. not have a return type at all), but
 then there are two main use cases: no return values and async return
 values. The complexity of the API exploded, the thread was killed,
 and we got a better idea. I hope we'll publish that soon.

Publish what?

 
 Sanne
 
 
 On 16 May 2011, at 19:37, Sanne Grinovero wrote:
 
 good place to remind that if you don't want the return value of a
 write operation then you need to specify both flags:
 cache.withFlags(Flag.SKIP_REMOTE_LOOKUP, Flag.SKIP_CACHE_LOAD).put( .. )
 
 I guess that nobody knows that :)
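 
 A minimal sketch of what that looks like, assuming the 5.x API where
 withFlags() is reached via getAdvancedCache() (the two flag names are
 as quoted above):
 
 import org.infinispan.Cache;
 import org.infinispan.context.Flag;
 
 class FireAndForgetPut {
    // Skip both the remote lookup and the cache-store load that only
    // exist to compute put()'s previous-value return.
    static void put(Cache<String, String> cache, String k, String v) {
       cache.getAdvancedCache()
            .withFlags(Flag.SKIP_REMOTE_LOOKUP, Flag.SKIP_CACHE_LOAD)
            .put(k, v);
    }
 }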
 
 Sanne
 
 2011/5/16 Emmanuel Bernard emman...@hibernate.org:
 Yes I think something use case driven would make a nice portal.
 
 On 16 May 2011, at 17:22, Galder Zamarreño wrote:
 
 
 On May 16, 2011, at 2:23 PM, Emmanuel Bernard wrote:
 
 Your description explains a use case / pattern but w/o code showing how 
 to implement it properly.
 
 True and I think you have a point, though the use of putForExternalRead() 
 itself is something that should be documented either its javadoc or a 
 separate wiki.
 
 This wiki should be limited to explaining the actual flags.
 
 In this case what's the best way for me to verify that the new data has 
 indeed been pushed to the cache?
 - put and then immediate get
 - Put, wait, get
 - Put all entries, then get all entries, and loop till all entries 
   supposedly put are indeed present.
 - Same as above but with some kind of batch size instead of all the data 
   set?
 - Or is there some kind of queue/log I can look for to get the reliable 
   list of failures?
 
 If you need immediate verification I would not use putForExternalRead() 
 but maybe a putAsync() with the flags you want which returns you a future 
 and allows you to verify the result in a less wacky way.
 
 The normal use case of PFER is (sketched in code right after this list):
 1. Check the cache to see whether a k/v is present
 2. If not present, go to db and call PFER with it.
 3. Use whatever you retrieved from db to do your job.
 ...
 N. Check the cache to see whether the k/v is present
 N+1. Oh, it's present, so just use it instead of going to DB.
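 
 A minimal sketch of that read-through pattern (loadFromDb() is a
 hypothetical stand-in for whatever DB access the application uses):
 
 import org.infinispan.Cache;
 
 class ReadThroughWithPfer {
    static String get(Cache<String, String> cache, String key) {
       String v = cache.get(key);            // 1. check the cache
       if (v == null) {
          v = loadFromDb(key);               // 2. miss: hit the DB...
          cache.putForExternalRead(key, v);  //    ...and PFER the result
       }
       return v;                             // 3. use it either way
    }
 
    private static String loadFromDb(String key) {
       return "value-for-" + key;            // placeholder for real DB access
    }
 }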
 
 This could be a good FAQ, wdyt?
 
 
 Emmanuel
 
 
 On 16 May 2011, at 10:20, Galder Zamarreño gal...@redhat.com wrote:
 
 More wikis. I've just created http://community.jboss.org/docs/DOC-16803 
 which explains what Infinispan flags are, what they're used for...etc.
 
 Feedback appreciated
 --
 Galder Zamarreño
 Sr. Software Engineer
 Infinispan, JBoss Cache
 
 
 ___
 infinispan-dev mailing list
 infinispan-dev@lists.jboss.org
 https://lists.jboss.org/mailman/listinfo/infinispan-dev
 
 ___
 infinispan-dev mailing list
 infinispan-dev@lists.jboss.org
 https://lists.jboss.org/mailman/listinfo/infinispan-dev
 
 --
 Galder Zamarreño
 Sr. Software Engineer
 Infinispan, JBoss Cache
 
 
 ___
 infinispan-dev mailing list
 infinispan-dev@lists.jboss.org
 https://lists.jboss.org/mailman/listinfo/infinispan-dev
 
 
 ___
 infinispan-dev mailing list
 infinispan-dev@lists.jboss.org
 https://lists.jboss.org/mailman/listinfo/infinispan-dev
 
 
 ___
 infinispan-dev mailing list
 infinispan-dev@lists.jboss.org
 https://lists.jboss.org/mailman/listinfo/infinispan-dev
 
 
 ___
 infinispan-dev mailing list
 infinispan-dev@lists.jboss.org
 https://lists.jboss.org/mailman/listinfo/infinispan-dev
 
 
 ___
 infinispan-dev mailing list
 infinispan-dev@lists.jboss.org
 https://lists.jboss.org/mailman/listinfo/infinispan-dev

--
Galder Zamarreño
Sr. Software Engineer
Infinispan, JBoss Cache


___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev


Re: [infinispan-dev] Local state transfer before going over network

2011-05-17 Thread Sanne Grinovero
2011/5/17 Galder Zamarreño gal...@redhat.com:

 On May 16, 2011, at 1:18 PM, Sanne Grinovero wrote:

 2011/5/16 Galder Zamarreño gal...@redhat.com:
 Not sure if the idea has come up, but while at GeeCON last week I was 
 discussing state transfer improvements in replicated environments with 
 one of the attendees:

 The idea is that in a replicated environment, if a cache manager shuts 
 down, it would dump its memory contents to a cache store (e.g. a local 
 filesystem), and when it starts up, instead of going over the network to do 
 state transfer, it would load the state from the local filesystem, which 
 would be much quicker. Obviously, at times the cache manager would crash or 
 have some failure dumping the memory contents, so in that case it would 
 fall back on state transfer over the network. I think it's an interesting 
 idea since it could reduce the amount of state transfer to be done. It's 
 true though that there are other tricks if you're having issues with state 
 transfer, such as the use of a cluster cache loader.

 WDYT?

 Well if it's a shared cachestore, then we're using the network at some
 level anyway. If we're talking about a non-shared cachestore, how do
 you know which keys/values are still valid and which were not updated? And
 what about the new keys?

 I see this only being useful with a local cache store cos if you need to go 
 remote over the network, you might as well just do state transfer.

+1

 Not sure if the timestamp of creation/update is available for all entries 
 (I'd need to check the code, but maybe immortals do not store it...), but 
 anyway, assuming that a timestamp was stored in the local cache store, on 
 startup the node could send this timestamp and the coordinator could send 
 anything new created/updated after that timestamp.

This means we'll need an API on the cache stores to return the
highest timestamp; some, like the JDBC cacheloader, could implement
that with a single query.

Not sure how you would handle deleted entries; the other
nodes would need to keep a list of deleted keys with timestamps. Maybe
there could be an option to never delete keys from a cacheloader, only
values, and record the timestamp of the operation.
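
Sketched as a cache-store extension (hypothetical names; no such methods 
exist in the CacheStore SPI today):

interface DeltaAwareCacheStore {
   /** Newest create/update/delete timestamp we hold; a JDBC store could
       implement this as SELECT MAX(timestamp). */
   long highestTimestamp();

   /** Tombstone instead of delete: keep the key, drop the value, and
       record when the removal happened, so joiners can learn about it. */
   void markDeleted(Object key, long timestamp);
}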


 This would be particularly efficient in situations where you have to quickly 
 restart a machine for whatever reason and so the deltas are very small, or 
 when the caches are big and state transfer would cost a lot from a bandwidth 
 perspective.

super; this would be quite useful in the Lucene case, as I can
actually figure out which keys should be deleted by inferring which ones
are obsolete from the common metadata (a known key contains this
information); and indeed startup time is a point I'd like to improve.



 I like the concept though, let's explore more in this direction.

 Sanne

 --
 Galder Zamarreño
 Sr. Software Engineer
 Infinispan, JBoss Cache


 ___
 infinispan-dev mailing list
 infinispan-dev@lists.jboss.org
 https://lists.jboss.org/mailman/listinfo/infinispan-dev


 ___
 infinispan-dev mailing list
 infinispan-dev@lists.jboss.org
 https://lists.jboss.org/mailman/listinfo/infinispan-dev

 --
 Galder Zamarreño
 Sr. Software Engineer
 Infinispan, JBoss Cache


 ___
 infinispan-dev mailing list
 infinispan-dev@lists.jboss.org
 https://lists.jboss.org/mailman/listinfo/infinispan-dev


___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev

Re: [infinispan-dev] Compilation errors in the infinispan-spring module

2011-05-17 Thread Olaf Bergner
Hi Dan,

no, I had no special reason to exclude rhq-pluginAnnotations beyond reducing 
infinispan-spring's dependencies to the absolute minimum. Consequently you may 
well remove that exclusion.

What's strange, though, is that I have no problems whatsoever building 
infinispan-spring using the current pom. I'm on Mac OS X, running one of the 
latest JDKs for that platform.

Cheers,
Olaf

 -------- Original Message --------
 Date: Mon, 16 May 2011 19:23:17 +0300
 From: Dan Berindei dan.berin...@gmail.com
 To: Olaf Bergner olaf.berg...@gmx.de
 CC: infinispan -Dev List infinispan-dev@lists.jboss.org
 Subject: Compilation errors in the infinispan-spring module

 Hi Olaf,
 
 Did you see any problems with RHQ + Spring interaction that determined
 you to exclude the rhq-pluginAnnotations dependency in the spring
 module?
 
  <dependency>
     <groupId>${project.groupId}</groupId>
     <artifactId>infinispan-core</artifactId>
     <version>${project.version}</version>
     <scope>compile</scope>
     <exclusions>
        <exclusion>
           <groupId>org.rhq.helpers</groupId>
           <artifactId>rhq-pluginAnnotations</artifactId>
        </exclusion>
     </exclusions>
  </dependency>
 
 
 I've been getting some weird errors while building the
 infinispan-spring module, both with OpenJDK 1.6.0_20 and with Sun JDK
 1.6.0_24 and 1.6.0_25, and they seem to appear because the compiler
 doesn't have access to the RHQ annotations:
 
 /tmp/privatebuild/home/dan/Work/infinispan/master/core/classes/org/infinispan/manager/DefaultCacheManager.class:
 warning: Cannot find annotation method 'displayName()' in type
 'org.rhq.helpers.pluginAnnotations.agent.Metric': class file for
 org.rhq.helpers.pluginAnnotations.agent.Metric not found
 /tmp/privatebuild/home/dan/Work/infinispan/master/core/classes/org/infinispan/manager/DefaultCacheManager.class:
 warning: Cannot find annotation method 'dataType()' in type
 'org.rhq.helpers.pluginAnnotations.agent.Metric'
 An exception has occurred in the compiler (1.6.0_24). Please file a
 bug at the Java Developer Connection
 (http://java.sun.com/webapps/bugreport)  after checking the Bug Parade
 for duplicates. Include your program and the following diagnostic in
 your report.  Thank you.
 com.sun.tools.javac.code.Symbol$CompletionFailure: class file for
 org.rhq.helpers.pluginAnnotations.agent.DataType not found
 
 Galder has seen it too with Sun JDK 1.6.0_24, but strangely enough
 everyone else is able to build without any errors.
 
 I'm thinking of removing the rhq-pluginAnnotations exclusion from the
 infinispan-spring pom.xml, the question is whether this would break
 something on the Spring side. Do you know of any potential problems,
 or did you do this just to reduce the number of dependencies brought
 in by infinispan-spring into an application?
 
 
 Cheers
 Dan

-- 
Recommend GMX DSL to your friends and acquaintances and we will
reward you with up to 50 euros! https://freundschaftswerbung.gmx.de
___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev


Re: [infinispan-dev] clustered nodes not receiving TextMessage published on TOPIC in HornetQ

2011-05-17 Thread Pavana Kumari
Hi

I am running two nodes (say node1 and node2), which are in a cluster. One 
node (node1) publishes a TextMessage on a TOPIC. Node1 and node2 both act as 
subscribers on the same TOPIC, but node2 is unable to receive the subscribed 
messages.

Regards
Pavana


DISCLAIMER: This email message and all attachments are confidential and may 
contain information that is privileged, confidential or exempt from disclosure 
under applicable law.  If you are not the intended recipient, you are notified 
that any dissemination, distribution or copying of this email is strictly 
prohibited. If you have received this email in error, please notify us 
immediately by return email or to mailad...@spanservices.com and destroy the 
original message.  Opinions, conclusions and other information in this message 
that do not relate to the official business of SPAN, shall be understood to be 
neither given nor endorsed by SPAN.
___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev

Re: [infinispan-dev] clustered nodes not receiving TextMessage published on TOPIC in HornetQ

2011-05-17 Thread Manik Surtani
Have you posted to the wrong list, Pavana?

Sent from my mobile phone

On 17 May 2011, at 14:24, Pavana Kumari pavan...@spanservices.com wrote:

 Hi
 
  
 
 I am running two nodes (say node1 and node2), which are in a cluster. One 
 node (node1) publishes a TextMessage on a TOPIC. Node1 and node2 both act as 
 subscribers on the same TOPIC, but node2 is unable to receive the subscribed 
 messages.
 
  
 
 Regards
 
 Pavana
 
  
 
  
 
 DISCLAIMER: This email message and all attachments are confidential and may 
 contain information that is privileged, confidential or exempt from 
 disclosure under applicable law.  If you are not the intended recipient, you 
 are notified that any dissemination, distribution or copying of this email is 
 strictly prohibited. If you have received this email in error, please notify 
 us immediately by return email or to mailad...@spanservices.com and destroy 
 the original message.  Opinions, conclusions and other information in this 
 message that do not relate to the official business of SPAN, shall be 
 understood to be neither given nor endorsed by SPAN.
 ___
 infinispan-dev mailing list
 infinispan-dev@lists.jboss.org
 https://lists.jboss.org/mailman/listinfo/infinispan-dev
___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev

[infinispan-dev] Grouping API (ISPN-312) WAS: Generated keys affected by rehash Was: https://issues.jboss.org/browse/ISPN-977

2011-05-17 Thread Manik Surtani
Erik,

Dan is correct that playing with hash codes is not the correct solution.  
ISPN-312 is the correct approach.  Pete has been working on a first-cut of this 
and it should make 5.0.0.CR3.  (Understood that release candidates aren't the 
place to add new features, but we're adding it as a preview, just to get 
feedback on the API and impl.)

Have a look at the proposed API on https://issues.jboss.org/browse/ISPN-312 and 
let us know if it works for you.
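
A rough sketch of the shape being proposed (illustrative only -- the JIRA 
has the actual API; the key class and group value here are invented):

import org.infinispan.distribution.group.Group;

class OrderKey {
   private final String customerId;
   private final long orderId;

   OrderKey(String customerId, long orderId) {
      this.customerId = customerId;
      this.orderId = orderId;
   }

   // Every key reporting the same group value is hashed to the same
   // node, so related entries stay collocated without hash-code tricks.
   @Group
   public String getCustomerId() {
      return customerId;
   }
}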

Cheers
Manik
  
On 13 May 2011, at 18:28, Erik Salter wrote:

 Hi Dan,
 
 I don't necessarily care about which server it's on, as long as the keys for 
 my set of caches all remain collocated.  I understand they will all end up in 
 the same bucket, but for one hash code, that's at most 200 keys.  I must 
 acquire a lock for a subset of them during a transaction -- so I make liberal 
 use of the "eagerLockSingleNode" option and redirect my calling 
 application to execute a transaction on the local node.  Acquiring 
 cluster-wide locks is an absolute throughput killer.
 
 I took a look at the KeyAffinityService a while ago (when it came out) and 
 quickly realized it would not be suitable for my purposes.  I was wondering 
 if ISPN-977 would allow me to use it.  But you're right.  What I ultimately 
 want is ISPN-312.
 
 Erik
 
 -Original Message-
 From: infinispan-dev-boun...@lists.jboss.org 
 [mailto:infinispan-dev-boun...@lists.jboss.org] On Behalf Of Dan Berindei
 Sent: Friday, May 13, 2011 12:58 PM
 To: infinispan -Dev List
 Subject: Re: [infinispan-dev] Generated keys affected by rehash Was: 
 https://issues.jboss.org/browse/ISPN-977
 
 On Fri, May 13, 2011 at 6:38 PM, Erik Salter esal...@bnivideo.com wrote:
 Yes, collocation of all keys is a large concern of my application(s).
 
 Currently, I can handle keys I'm in control of (like database-generated 
 keys), where I can play around with the hash code.   What I would love to do 
 is collocate that data with keys I can't control (like UUIDs) so that all 
 cache operations can be done in the same transaction on the data owner's 
 node.
 
 
 By "playing around with the hash code" do you mean you set the hashcode for 
 all the keys you want on a certain server to the same value? I imagine that 
 would degrade performance quite a bit, because we have HashMaps everywhere 
 and your keys will always end up in the same hash bucket.
 
 
 ISPN-312 seems to me like a much better fit for your use case than the 
 KeyAffinityService. Even if you added a listener to change your keys when the 
 topology changes, that would mean on a rehash the keys could get moved to the 
 new server and then back to the old server, whereas with ISPN-312 they will 
 either all stay on the old node or they will all move to the new node.
 
 Cheers
 Dan
 
 
 Erik
 
 -Original Message-
 From: infinispan-dev-boun...@lists.jboss.org
 [mailto:infinispan-dev-boun...@lists.jboss.org] On Behalf Of Manik
 Surtani
 Sent: Friday, May 13, 2011 5:25 AM
 To: infinispan -Dev List
 Subject: [infinispan-dev] Generated keys affected by rehash Was:
 https://issues.jboss.org/browse/ISPN-977
 
 
 On 11 May 2011, at 18:47, Erik Salter wrote:
 
 Wouldn't any rehash affect the locality of these generated keys, or am I 
 missing something?
 
 It would.  And hence ISPN-977, to address that.  Or is your concern keys 
 already generated before the rehash?  The latter would certainly be a 
 problem.  Perhaps, if this was important to the application, on detecting a 
 change in topology, re-generate keys and move data around?  For other apps, 
 move the session to the appropriate node?
 
 Cheers
 Manik
 --
 Manik Surtani
 ma...@jboss.org
 twitter.com/maniksurtani
 
 Lead, Infinispan
 http://www.infinispan.org
 
 
 
 
 ___
 infinispan-dev mailing list
 infinispan-dev@lists.jboss.org
 https://lists.jboss.org/mailman/listinfo/infinispan-dev
 
 The information contained in this message is legally privileged and 
 confidential, and is intended for the individual or entity to whom it is 
 addressed (or their designee). If this message is read by anyone other than 
 the intended recipient, please be advised that distribution of this message, 
 in any form, is strictly prohibited. If you have received this message in 
 error, please notify the sender immediately and delete or destroy all copies 
 of this message.
 
 ___
 infinispan-dev mailing list
 infinispan-dev@lists.jboss.org
 https://lists.jboss.org/mailman/listinfo/infinispan-dev
 
 
 ___
 infinispan-dev mailing list
 infinispan-dev@lists.jboss.org
 https://lists.jboss.org/mailman/listinfo/infinispan-dev
 
 The information contained in this message is legally privileged and 
 confidential, and is intended for the individual or entity to whom it is 
 addressed (or their designee). If this message is read by anyone other than 
 the intended recipient, please be advised that distribution of this message, 
 in any form, is strictly prohibited.

Re: [infinispan-dev] Local state transfer before going over network

2011-05-17 Thread Manik Surtani
Interesting discussions.  Another approach may be to version data using Lamport 
clocks or vector clocks.  Then at the start of a rehash, a digest of keys and 
versions can be pushed, and the receiver 'decides' which keys are out of date 
and need to be pulled from across the network.
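
A sketch of the receiver side of that digest exchange (the Version type 
and isNewerThan() are stand-ins; Lamport or vector clocks would both fit):

import java.util.HashMap;
import java.util.Map;
import java.util.Set;

class VersionDigest {
   interface Version { boolean isNewerThan(Version other); }

   /** Compare the pushed digest against local versions; whatever is
       missing or out of date locally must be pulled over the network. */
   static Set<Object> keysToPull(Map<Object, Version> pushedDigest,
                                 Map<Object, Version> localVersions) {
      Map<Object, Version> stale = new HashMap<Object, Version>();
      for (Map.Entry<Object, Version> e : pushedDigest.entrySet()) {
         Version local = localVersions.get(e.getKey());
         if (local == null || e.getValue().isNewerThan(local)) {
            stale.put(e.getKey(), e.getValue());
         }
      }
      return stale.keySet();
   }
}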

On 17 May 2011, at 12:06, Sanne Grinovero wrote:

 2011/5/17 Galder Zamarreño gal...@redhat.com:
 
 On May 16, 2011, at 1:18 PM, Sanne Grinovero wrote:
 
 2011/5/16 Galder Zamarreño gal...@redhat.com:
 Not sure if the idea has come up, but while at GeeCON last week I was 
 discussing state transfer improvements in replicated environments with 
 one of the attendees:
 
 The idea is that in a replicated environment, if a cache manager shuts 
 down, it would dump its memory contents to a cache store (e.g. a local 
 filesystem), and when it starts up, instead of going over the network to 
 do state transfer, it would load the state from the local filesystem, 
 which would be much quicker. Obviously, at times the cache manager would 
 crash or have some failure dumping the memory contents, so in that case 
 it would fall back on state transfer over the network. I think it's an 
 interesting idea since it could reduce the amount of state transfer to be 
 done. It's true though that there are other tricks if you're having issues 
 with state transfer, such as the use of a cluster cache loader.
 
 WDYT?
 
 Well if it's a shared cachestore, then we're using the network at some
 level anyway. If we're talking about a non-shared cachestore, how do
 you know which keys/values are still valid and which were not updated? And
 what about the new keys?
 
 I see this only being useful with a local cache store cos if you need to go 
 remote over the network, you might as well just do state transfer.
 
 +1
 
 Not sure if the timestamp of creation/update is available for all entries 
 (I'd need to check the code, but maybe immortals do not store it...), but 
 anyway, assuming that a timestamp was stored in the local cache store, on 
 startup the node could send this timestamp and the coordinator could send 
 anything new created/updated after that timestamp.
 
 This means we'll need an API on the cache stores to return the
 highest timestamp; some, like the JDBC cacheloader, could implement
 that with a single query.
 
 Not sure how you would handle deleted entries; the other
 nodes would need to keep a list of deleted keys with timestamps. Maybe
 there could be an option to never delete keys from a cacheloader, only
 values, and record the timestamp of the operation.
 
 
 This would be particularly efficient in situations where you have to quickly 
 restart a machine for whatever reason and so the deltas are very small, or 
 when the caches are big and state transfer would cost a lot from a bandwidth 
 perspective.
 
 super; this would be quite useful in the Lucene case, as I can
 actually figure out which keys should be deleted by inferring which ones
 are obsolete from the common metadata (a known key contains this
 information); and indeed startup time is a point I'd like to improve.
 
 
 
 I like the concept though, let's explore more in this direction.
 
 Sanne
 
 --
 Galder Zamarreño
 Sr. Software Engineer
 Infinispan, JBoss Cache
 
 
 ___
 infinispan-dev mailing list
 infinispan-dev@lists.jboss.org
 https://lists.jboss.org/mailman/listinfo/infinispan-dev
 
 
 ___
 infinispan-dev mailing list
 infinispan-dev@lists.jboss.org
 https://lists.jboss.org/mailman/listinfo/infinispan-dev
 
 --
 Galder Zamarreño
 Sr. Software Engineer
 Infinispan, JBoss Cache
 
 
 ___
 infinispan-dev mailing list
 infinispan-dev@lists.jboss.org
 https://lists.jboss.org/mailman/listinfo/infinispan-dev
 
 
 ___
 infinispan-dev mailing list
 infinispan-dev@lists.jboss.org
 https://lists.jboss.org/mailman/listinfo/infinispan-dev

--
Manik Surtani
ma...@jboss.org
twitter.com/maniksurtani

Lead, Infinispan
http://www.infinispan.org




___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev


Re: [infinispan-dev] Per-invocation flag wiki

2011-05-17 Thread Manik Surtani
+1!


On 16 May 2011, at 18:37, Sanne Grinovero wrote:

 good place to remind that if you don't want the return value of a
 write operation then you need to specify both flags:
 cache.withFlags(Flag.SKIP_REMOTE_LOOKUP, Flag.SKIP_CACHE_LOAD).put( .. )
 
 I guess that nobody knows that :)
 
 Sanne
 
 2011/5/16 Emmanuel Bernard emman...@hibernate.org:
 Yes I think something use case driven would make a nice portal.
 
 On 16 May 2011, at 17:22, Galder Zamarreño wrote:
 
 
 On May 16, 2011, at 2:23 PM, Emmanuel Bernard wrote:
 
 Your description explains a use case / pattern but w/o code showing how to 
 implement it properly.
 
 True and I think you have a point, though the use of putForExternalRead() 
 itself is something that should be documented either its javadoc or a 
 separate wiki.
 
 This wiki should be limited to explaining the actual flags.
 
 In this case what's the best way for me to verify that the new data has 
 indeed been pushed to the cache?
 - put and then immediate get
 - Put, wait, get
 - Put all entries, then get all entries, and loop till all entries 
   supposedly put are indeed present.
 - Same as above but with some kind of batch size instead of all the data set?
 - Or is there some kind of queue/log I can look for to get the reliable list 
   of failures?
 
 If you need immediate verification I would not use putForExternalRead() but 
 maybe a putAsync() with the flags you want which returns you a future and 
 allows you to verify the result in a less wacky way.
 
 The normal use case of PFER is:
 1. Check the cache to see whether a k/v is present
 2. If not present, go to db and call PFER with it.
 3. Use whatever you retrieved from db to do your job.
 ...
 N. Check the cache to see whether the k/v is present
 N+1. Oh, it's present, so just use it instead of going to DB.
 
 This could be a good FAQ, wdyt?
 
 
 Emmanuel
 
 
 On 16 May 2011, at 10:20, Galder Zamarreño gal...@redhat.com wrote:
 
 More wikis. I've just created http://community.jboss.org/docs/DOC-16803 
 which explains what Infinispan flags are, what they're used for...etc.
 
 Feedback appreciated
 --
 Galder Zamarreño
 Sr. Software Engineer
 Infinispan, JBoss Cache
 
 
 ___
 infinispan-dev mailing list
 infinispan-dev@lists.jboss.org
 https://lists.jboss.org/mailman/listinfo/infinispan-dev
 
 ___
 infinispan-dev mailing list
 infinispan-dev@lists.jboss.org
 https://lists.jboss.org/mailman/listinfo/infinispan-dev
 
 --
 Galder Zamarreño
 Sr. Software Engineer
 Infinispan, JBoss Cache
 
 
 ___
 infinispan-dev mailing list
 infinispan-dev@lists.jboss.org
 https://lists.jboss.org/mailman/listinfo/infinispan-dev
 
 
 ___
 infinispan-dev mailing list
 infinispan-dev@lists.jboss.org
 https://lists.jboss.org/mailman/listinfo/infinispan-dev
 
 
 ___
 infinispan-dev mailing list
 infinispan-dev@lists.jboss.org
 https://lists.jboss.org/mailman/listinfo/infinispan-dev

--
Manik Surtani
ma...@jboss.org
twitter.com/maniksurtani

Lead, Infinispan
http://www.infinispan.org




___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev


Re: [infinispan-dev] Grouping API (ISPN-312) WAS: Generated keys affected by rehash Was: https://issues.jboss.org/browse/ISPN-977

2011-05-17 Thread Erik Salter
Hi Manik,

I think we are in agreement that playing with hash codes was only a temporary 
measure.  In my case, having at most 200 entries with the same hash code was 
worth it for knowing that I could handle transactions locally and reap the 
benefits of increased throughput.  So I can now replace the hash code with 
@Group.  Cool.

The group generator interface looks interesting, since it most closely reflects 
my situation.  I now have requirements where an immutable key class will need 
to be saved within the same transaction as in the scenario above (obviously, 
hashing to the same node is a plus). A sketch of how such a generator might 
look is below.
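
Illustrative only -- the interface name and signature here are guesses at 
the "group generator" being discussed, not the final ISPN-312 API:

import java.util.UUID;

interface Grouper<T> {
   /** Return the group to collocate this key under. */
   String computeGroup(T key);
}

class UuidGrouper implements Grouper<UUID> {
   // Keys we can't annotate (e.g. UUIDs) get their group computed
   // externally, so they land on the same node as their related data.
   public String computeGroup(UUID key) {
      return lookupOwningAccount(key);
   }

   private String lookupOwningAccount(UUID key) {
      return "account-42";  // hypothetical placeholder for an app lookup
   }
}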

One thing isn't clear from the JIRA.  If I wanted to get Employee "SteveVai" 
from the cache, do I need to know the group context is "com.ibanez.SteveVai"?  
My calling application only knows the key value, not the value with the key 
context.

Erik

From: infinispan-dev-boun...@lists.jboss.org 
[mailto:infinispan-dev-boun...@lists.jboss.org] On Behalf Of Manik Surtani
Sent: Tuesday, May 17, 2011 1:34 PM
To: infinispan -Dev List
Subject: [infinispan-dev] Grouping API (ISPN-312) WAS: Generated keys affected 
by rehash Was: https://issues.jboss.org/browse/ISPN-977

Erik,

Dan is correct that playing with hash codes is not the correct solution.  
ISPN-312 is the correct approach.  Pete has been working on a first-cut of this 
and it should make 5.0.0.CR3.  (Understood that release candidates aren't the 
place to add new features, but we're adding it as a preview, just to get 
feedback on the API and impl.)

Have a look at the proposed API on https://issues.jboss.org/browse/ISPN-312 and 
let us know if it works for you.

Cheers
Manik

On 13 May 2011, at 18:28, Erik Salter wrote:


Hi Dan,

I don't necessarily care about which server it's on, as long as the keys for my 
set of caches all remain collocated.  I understand they will all end up in the 
same bucket, but for one hash code, that's at most 200 keys.  I must acquire a 
lock for a subset of them during a transaction -- so I make liberal use of the 
"eagerLockSingleNode" option and redirect my calling application to execute 
a transaction on the local node.  Acquiring cluster-wide locks is an absolute 
throughput killer.

I took a look at the KeyAffinityService a while ago (when it came out) and 
quickly realized it would not be suitable for my purposes.  I was wondering if 
ISPN-977 would allow me to use it.  But you're right.  What I ultimately want 
is ISPN-312.

Erik

-Original Message-
From: infinispan-dev-boun...@lists.jboss.org 
[mailto:infinispan-dev-boun...@lists.jboss.org] On Behalf Of Dan Berindei
Sent: Friday, May 13, 2011 12:58 PM
To: infinispan -Dev List
Subject: Re: [infinispan-dev] Generated keys affected by rehash Was: 
https://issues.jboss.org/browse/ISPN-977

On Fri, May 13, 2011 at 6:38 PM, Erik Salter esal...@bnivideo.com wrote:

Yes, collocation of all keys is a large concern of my application(s).

Currently, I can handle keys I'm in control of (like database-generated keys), 
where I can play around with the hash code.   What I would love to do is 
collocate that data with keys I can't control (like UUIDs) so that all cache 
operations can be done in the same transaction on the data owner's node.


By "playing around with the hash code" do you mean you set the hashcode for all 
the keys you want on a certain server to the same value? I imagine that would 
degrade performance quite a bit, because we have HashMaps everywhere and your 
keys will always end up in the same hash bucket.


ISPN-312 seems to me like a much better fit for your use case than the 
KeyAffinityService. Even if you added a listener to change your keys when the 
topology changes, that would mean on a rehash the keys could get moved to the 
new server and then back to the old server, whereas with ISPN-312 they will 
either all stay on the old node or they will all move to the new node.

Cheers
Dan



Erik

-Original Message-
From: infinispan-dev-boun...@lists.jboss.org 
[mailto:infinispan-dev-boun...@lists.jboss.org] On Behalf Of Manik
Surtani
Sent: Friday, May 13, 2011 5:25 AM
To: infinispan -Dev List
Subject: [infinispan-dev] Generated keys affected by rehash Was:
https://issues.jboss.org/browse/ISPN-977


On 11 May 2011, at 18:47, Erik Salter wrote:

Wouldn't any rehash affect the locality of these generated keys, or am I 
missing something?

It would.  And hence ISPN-977, to address that.  Or is your concern keys 
already generated before the rehash?  The latter would certainly be a problem.  
Perhaps, if this was important to the application, on detecting a change in 
topology, re-generate keys and move data around?  For other apps, move the 
session to the appropriate node?

Cheers
Manik
--
Manik Surtani
ma...@jboss.org
twitter.com/maniksurtani

Lead, Infinispan
http://www.infinispan.org





Re: [infinispan-dev] Local state transfer before going over network

2011-05-17 Thread Bela Ban
This is exactly what JGroups digests do.

On 05/17/2011 10:38 AM, Manik Surtani wrote:
 Interesting discussions.  Another approach may be to version data using 
 Lamport clocks or vector clocks.  Then at the start of a rehash, a digest of 
 keys and versions can be pushed, and the receiver 'decides' which keys are 
 out of date and need to be pulled from across the network.

 On 17 May 2011, at 12:06, Sanne Grinovero wrote:

 2011/5/17 Galder Zamarreño gal...@redhat.com:

 On May 16, 2011, at 1:18 PM, Sanne Grinovero wrote:

 2011/5/16 Galder Zamarreño gal...@redhat.com:
 Not sure if the idea has come up, but while at GeeCON last week I was 
 discussing state transfer improvements in replicated environments with 
 one of the attendees:

 The idea is that in a replicated environment, if a cache manager shuts 
 down, it would dump its memory contents to a cache store (e.g. a local 
 filesystem), and when it starts up, instead of going over the network to 
 do state transfer, it would load the state from the local filesystem, 
 which would be much quicker. Obviously, at times the cache manager would 
 crash or have some failure dumping the memory contents, so in that case 
 it would fall back on state transfer over the network. I think it's an 
 interesting idea since it could reduce the amount of state transfer to be 
 done. It's true though that there are other tricks if you're having issues 
 with state transfer, such as the use of a cluster cache loader.

 WDYT?

 Well if it's a shared cachestore, then we're using the network at some
 level anyway. If we're talking about a non-shared cachestore, how do
 you know which keys/values are still valid and which were not updated? And
 what about the new keys?

 I see this only being useful with a local cache store cos if you need to go 
 remote over the network, you might as well just do state transfer.

 +1

 Not sure if the timestamp of creation/update is available for all entries 
 (I'd need to check the code, but maybe immortals do not store it...), but 
 anyway, assuming that a timestamp was stored in the local cache store, on 
 startup the node could send this timestamp and the coordinator could send 
 anything new created/updated after that timestamp.

 This means we'll need an API on the cache stores to return the
 highest timestamp; some, like the JDBC cacheloader, could implement
 that with a single query.

 Not sure how you would handle deleted entries; the other
 nodes would need to keep a list of deleted keys with timestamps. Maybe
 there could be an option to never delete keys from a cacheloader, only
 values, and record the timestamp of the operation.


 This would be particularly efficient in situations where you have to 
 quickly restart a machine for whatever reason and so the deltas are very 
 small, or when the caches are big and state transfer would cost a lot from 
 a bandwidth perspective.

 super; this would be quite useful in the Lucene case, as I can
 actually figure out which keys should be deleted by inferring which ones
 are obsolete from the common metadata (a known key contains this
 information); and indeed startup time is a point I'd like to improve.



 I like the concept though, let's explore more in this direction.

 Sanne

 --
 Galder Zamarreño
 Sr. Software Engineer
 Infinispan, JBoss Cache


 ___
 infinispan-dev mailing list
 infinispan-dev@lists.jboss.org
 https://lists.jboss.org/mailman/listinfo/infinispan-dev


 ___
 infinispan-dev mailing list
 infinispan-dev@lists.jboss.org
 https://lists.jboss.org/mailman/listinfo/infinispan-dev

 --
 Galder Zamarreño
 Sr. Software Engineer
 Infinispan, JBoss Cache


 ___
 infinispan-dev mailing list
 infinispan-dev@lists.jboss.org
 https://lists.jboss.org/mailman/listinfo/infinispan-dev


 ___
 infinispan-dev mailing list
 infinispan-dev@lists.jboss.org
 https://lists.jboss.org/mailman/listinfo/infinispan-dev

 --
 Manik Surtani
 ma...@jboss.org
 twitter.com/maniksurtani

 Lead, Infinispan
 http://www.infinispan.org




 ___
 infinispan-dev mailing list
 infinispan-dev@lists.jboss.org
 https://lists.jboss.org/mailman/listinfo/infinispan-dev

-- 
Bela Ban
Lead JGroups / Clustering Team
JBoss
___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev