James, I think it would help a little, but in my experience, hitting the DB and building the object from the result set is unfortunately about as expensive as deserializing it from disk, at least in Java!
 
:-(
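
For what it's worth, here is roughly how I'd time the deserialization half of that comparison. Just a sketch: the "results.ser" file name and the harness are made up, and you'd wrap the SELECT-plus-ResultSet path in the same timing calls to compare.

    import java.io.*;

    public class DeserTiming {
        public static void main(String[] args) throws Exception {
            // Time reading one previously serialized "result" object back
            // from disk ("results.ser" is a placeholder file name).
            long start = System.currentTimeMillis();
            ObjectInputStream in = new ObjectInputStream(
                    new BufferedInputStream(new FileInputStream("results.ser")));
            Object result = in.readObject();
            in.close();
            System.out.println("Deserialized " + result.getClass().getName()
                    + " in " + (System.currentTimeMillis() - start) + " ms");
            // For the DB half: run the SELECT, walk the ResultSet, and build
            // the same object, bracketed by the same two timestamps.
        }
    }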
 
Even something like XML serialization would have to be tuned carefully... I just discovered it takes 3-4 seconds to serialize 150 of my "result" objects (very small) into XML. I've got to look at JDOM, or pre-build the tree, or something(?) to make it faster.  Anyway, I just read about some J2EE products that use shared memory between machines, but those seem to work much better for some types of caching than for others.  I guess my requirements include the worst of both worlds: caching high volumes of medium-size objects that become stale quickly and need to be updated across the distributed environment.
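
In case it helps anyone else, the direction I'm planning to try with JDOM is to build the tree once per batch and reuse a single XMLOutputter, rather than setting up the serialization machinery once per object. Only a sketch; the ResultBean class and its getters are invented stand-ins for my "result" objects:

    import java.io.StringWriter;
    import java.util.Iterator;
    import java.util.List;

    import org.jdom.Document;
    import org.jdom.Element;
    import org.jdom.output.XMLOutputter;

    public class ResultXmlWriter {
        // One outputter reused across the whole batch; constructing the
        // serializer (or a Transformer) fresh for every object is a common
        // cause of multi-second times on small batches.
        private final XMLOutputter out = new XMLOutputter();

        public String write(List results) throws java.io.IOException {
            Element root = new Element("results");
            for (Iterator i = results.iterator(); i.hasNext();) {
                ResultBean r = (ResultBean) i.next(); // hypothetical bean
                Element e = new Element("result");
                e.setAttribute("id", r.getId());
                e.addContent(r.getValue());
                root.addContent(e);
            }
            StringWriter w = new StringWriter();
            out.output(new Document(root), w);
            return w.toString();
        }
    }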
 
Here is a good article excerpt from the
 
 
I'll try to think more about how this stuff applies to our custom system.  Maybe the distributed shared cache is not the way to go after all for improving performance on 20,000 medium-size objects:
 
If you want the application to scale to handle a large number of simultaneous users, it will be important to minimize the use of shared memory that may be updated (Java object instances that are read from/written to) so that client requests can run concurrently (without waiting for a synchronization lock). If your goal is the highest per-user performance, then you will want to cache data to minimize lookup time, although this can reduce scalability, as users have to wait for the synchronization lock on the shared cache.

A common alternative is to use a reader/writer lock that allows many requests to read the cache (all readers share one synchronization lock), at the expense of starving a writer, who will rarely gain access to the lock during peak usage. The reader/writer lock technique works well when there are more readers than writers. Although the reader/writer lock can be a middle ground between the opposing requirements, other tricks can be utilized to achieve both goals. If memory is abundant, you can cache data in memory that is not shared between users. Each user will have a cache to reduce the expense of data lookup, at the cost of additional per-user memory consumption. This technique is known as the "zero shared memory" optimization. The coding techniques that can be employed to achieve high performance are continuing to grow (see the J2EE Patterns Repository).
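
To make the reader/writer idea concrete, a minimal sketch of a cache guarded that way might look like the following. The class and method names are my own; I've written it against java.util.concurrent.locks, and on a pre-1.5 JVM the equivalent would be Doug Lea's util.concurrent package:

    import java.util.HashMap;
    import java.util.Map;
    import java.util.concurrent.locks.ReadWriteLock;
    import java.util.concurrent.locks.ReentrantReadWriteLock;

    // Many readers share the read lock; a writer takes the write lock
    // exclusively. SharedCache is an invented name, not a real product.
    public class SharedCache {
        private final Map cache = new HashMap();
        private final ReadWriteLock lock = new ReentrantReadWriteLock();

        public Object get(Object key) {
            lock.readLock().lock();      // readers do not block each other
            try {
                return cache.get(key);
            } finally {
                lock.readLock().unlock();
            }
        }

        public void put(Object key, Object value) {
            lock.writeLock().lock();     // blocks until all readers drain
            try {
                cache.put(key, value);
            } finally {
                lock.writeLock().unlock();
            }
        }
    }

The "zero shared memory" variant from the excerpt would just drop the locks and keep one unsynchronized map per user session, trading extra memory for zero lock contention.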

Greg

-----Original Message-----
From: James Stauffer [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, March 05, 2003 4:39 AM
To: jdjlist
Subject: [jdjlist] RE: Caching

Correct. The MultiCacheManager (as it is called) also allows the user to remove items from the cache on any of the machines.  For a distributed cache, would it work to have a separate cache machine, or a disk cache on a shared drive?

James Stauffer

---
You are currently subscribed to jdjlist as: [EMAIL PROTECTED]
To unsubscribe send a blank email to [EMAIL PROTECTED]
http://www.sys-con.com/fusetalk
