Yes. If you read the posts from a week or two ago on the JCS users list, there was a gentleman who was getting the wrong objects returned from gets when one of his lateral caches was restarted.
-Travis Savo

-----Original Message-----
From: Aaron Smuts [mailto:[EMAIL PROTECTED]
Sent: Wednesday, April 14, 2004 4:10 PM
To: 'Turbine JCS Developers List'
Subject: RE: outstanding issues?

I didn't think that the problem was with the get method. Was someone
having another problem with the lateral?

Aaron

> -----Original Message-----
> From: Aaron Smuts [mailto:[EMAIL PROTECTED]
> Sent: Wednesday, April 14, 2004 6:09 PM
> To: 'Turbine JCS Developers List'
> Subject: RE: outstanding issues?
>
> if (socket.getInputStream().available() > 0) {
>     socket.getInputStream().read(new byte[socket.getInputStream().available()]);
> }
>
> I noticed this addition before the write in the SendAndReceive method of
> the LateralTCPSender class. Is this all you added to the lateral?
>
> This method is only used by get requests to other laterals.
>
> There must be something I overlooked.
>
> Aaron
>
> Is this all you added?
>
> > -----Original Message-----
> > From: Travis Savo [mailto:[EMAIL PROTECTED]
> > Sent: Wednesday, April 14, 2004 2:20 PM
> > To: 'Turbine JCS Developers List'
> > Subject: RE: outstanding issues?
> >
> > Definitely check out the EHCache tests. They have several tests which
> > expose the flaws.
> >
> > The disk locking problem happens only under very heavy load. The
> > implementation taken from EHCache doesn't exhibit the same problems,
> > only requires one file per region, and can easily and reliably be made
> > persistent across restarts. Is there something wrong with using it?
> >
> > As covered in my prior posts with patches, the problem with lateral
> > distribution is that the LateralTCPService can get a socket stream with
> > dirty data from prior requests, leading to mismatched objects returned
> > on a lateral get. My patch demonstrates an obvious place to flush the
> > buffer.
> >
> > The memory leak in the memory cache happens when the linked list becomes
> > inconsistent with the map. The EHCache tests that expose this suggest
> > the problem is with removeAll, but I've seen it with just
> > get/put/remove. The LRU memory store implementation from EHCache uses a
> > java.util.LinkedHashMap, or, if that's not available, an
> > org.apache.commons.collections.map.LRUMap, thereby dramatically reducing
> > the complexity of the code and eliminating opportunities for the linked
> > list and map to get out of sync.
> >
> > It's also 79% faster for the 'insert 5 million typical CacheElements,
> > then get each of them by key' test, and 293% faster for the 'insert 5
> > million typical CacheElements and get each of them by key, then remove
> > them one at a time by key' test (statistics borrowed from
> > http://ehcache.sourceforge.net/documentation/index.html#physicalstores).
> > Again, is there something wrong with using it? Why go to all the work of
> > fixing a complex design when a simpler, faster one exists and is working
> > without problems?
> >
> > The other issue I'm aware of is with RemoteCache, where removes will
> > generate exponential remove requests. I'm working on a patch for it from
> > my code base, as well as the change from byte to integer for cache IDs,
> > which will cause problems with 2 or more remote caches under flaky
> > network conditions.
> >
> > -Travis Savo
> >
> > -----Original Message-----
> > From: Aaron Smuts [mailto:[EMAIL PROTECTED]
> > Sent: Wednesday, April 14, 2004 11:41 AM
> > To: 'Turbine JCS Developers List'
> > Subject: outstanding issues?
> >
> > I want to knock out any remaining issues.
> >
> > Can someone let me know if they can get the disk cache to lock? I can't
> > make it happen.
> >
> > Can someone explain the purported problem with the lateral distribution?
> >
> > Was the "memory leak" in the memory cache just the problem with expired
> > elements not getting removed? If so, it is solved. If not, can someone
> > explain it to me.
> >
> > If you are aware of any other issues, please let me know and I'll work
> > on them.
> >
> > Thanks,
> >
> > Aaron
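
For context, here is a minimal sketch of the kind of buffer flush being
discussed above: drain any bytes already sitting on the socket before a
lateral get is written, so a leftover reply from an earlier request (for
example, one interrupted by a lateral restart) cannot be read back as the
answer to the new request. The class and method names below are
illustrative only, not the actual LateralTCPSender API.

    import java.io.InputStream;
    import java.io.ObjectInputStream;
    import java.io.ObjectOutputStream;
    import java.net.Socket;

    // Illustrative sketch only -- not the real LateralTCPSender.
    public class LateralGetSketch
    {
        public Object sendAndReceive( Socket socket, Object request )
            throws Exception
        {
            InputStream in = socket.getInputStream();

            // Drain stale bytes left over from a prior request so the next
            // read cannot hand back a mismatched object.
            if ( in.available() > 0 )
            {
                in.read( new byte[ in.available() ] );
            }

            // Write the get request and read the reply that belongs to it.
            ObjectOutputStream out =
                new ObjectOutputStream( socket.getOutputStream() );
            out.writeObject( request );
            out.flush();

            return new ObjectInputStream( in ).readObject();
        }
    }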
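
And a minimal sketch of the LinkedHashMap-based LRU store Travis points at:
constructed with accessOrder = true, java.util.LinkedHashMap keeps entries in
least-recently-used order itself, and overriding removeEldestEntry handles
eviction, so there is no hand-maintained linked list to fall out of sync with
the map. The class name and capacity handling here are hypothetical, not
EHCache or JCS code.

    import java.util.LinkedHashMap;
    import java.util.Map;

    // Hypothetical sketch -- not the EHCache or JCS implementation.
    public class LinkedHashMapLRUStore extends LinkedHashMap
    {
        private final int maxObjects;

        public LinkedHashMapLRUStore( int maxObjects )
        {
            // accessOrder = true: iteration order is least recently
            // accessed first, which is exactly LRU order.
            super( 16, 0.75f, true );
            this.maxObjects = maxObjects;
        }

        // Called by LinkedHashMap after every put; returning true evicts
        // the least recently used entry once capacity is exceeded.
        protected boolean removeEldestEntry( Map.Entry eldest )
        {
            return size() > maxObjects;
        }
    }

On JVMs without LinkedHashMap, the same idea maps onto
org.apache.commons.collections.map.LRUMap, which bounds its size and drops
the least recently used entry internally.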
