A few brief comments, not specifically related to caching.  Having just
one box in your ST is dangerous; you should have redundancy whenever
possible.  Also, I strongly recommend that you not use JBoss.  It will
take you to class loader hell, and it encourages you to write code that
can only run in the container.  Almost every component I've used from
JBoss has been a disaster: I can crash JBoss MQ at will with a bad
client close, the transaction manager in the last version was fatally
flawed, and the cache is fundamentally mis-designed.  JGroups is an
academic exercise that is not usable except in hobby-size applications;
the protocol stack API allows for mixing and matching, but the project
cannot cope with the apparent flexibility in the API.  I can go into
more detail if you like.  Just run in Tomcat.  You can use Spring as a
transaction manager.

Back to caching.  

The remote cache server is not set up to run in process, although we
could work on that.

You can do this: set up all the presentation tier (PT) caches to use
the lateral cache.  Configure them to issue a remove on put and to
filter removes by hash code.  Configure the service tier (ST) to
participate in the same lateral cluster, but tell the service tier
machine not to receive.  If you add another service tier box, have the
ST machines also use an additional lateral that both sends and
receives.  This will keep the ST boxes charged and in sync.

The two cache.ccf files below should accomplish this.  I haven't tried
this exact setup, but it should work.

The difficulty with any setup where you retrieve data from the ST and
the ST is hooked into the cluster is that the ST will update the other
PT caches.  PTn1 gets data from the ST.  The ST gets it from the
database and puts it in the cache.  This results in the item either (a)
being sent to all the PT boxes, or (b) a remove message being sent, if
you have configured the cache to issue a remove on put.  With (a), you
have sent the data to PTn1 twice: PTn1 fetched it, and then the cache
update sent it again.  That's somewhat wasteful, but not tragic.  With
(b), STn1 will send a remove to all the PT boxes: PTn1 gets the data
from STn1, puts it in its local cache, and then STn1 tells PTn1 to
remove the item.  (There is a way out of this dilemma, described
below.)  You might think that if you want to share data across tiers,
you should go with (a).  Option (a) results in all the data being
pushed around; option (b) results in the data being only on the ST
boxes.  But if the data is only on the ST boxes, you might as well just
cache on the ST boxes and avoid all the needless removes.
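To make the option (b) problem concrete, here is a minimal plain-Java
simulation of the self-defeating remove.  It uses no JCS classes; the
map and all names are made up purely for illustration.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of option (b) without hashcode filtering.
public class RemoveOnPutDemo {
    static Map<String, Object> ptCache = new HashMap<>();

    public static void main(String[] args) {
        // PTn1 fetches the item from STn1 and caches it locally.
        ptCache.put("myKey", "row-42");

        // STn1 also cached the item and, because it issues a remove on
        // put, broadcasts a remove for the same key to every PT box.
        ptCache.remove("myKey");

        // PTn1 has now lost the item it just fetched.
        System.out.println(ptCache.containsKey("myKey")); // prints false
    }
}
```

The local put and the broadcast remove race each other; the net effect
is that the PT box ends up without the data it just paid to fetch.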

Thankfully, there is an alternative to sending all the data all the
time or keeping the data in just one place: you can set up remove
filtering by hash code.  Basically, you take the setup in (b) and have
the ST issue a remove on put, but the remove message also includes the
hash code of the item.  If the item exists in the client cache, i.e. on
PTn1, the client checks whether this hash code matches the hash code of
the local copy.  If it does, the item is not removed.  This is the best
alternative, and I set up the configuration files below with it in
mind.  The drawback is that two objects that are not equal can still
have the same hash code.  This is more likely for very different
objects, and since you cache related objects in a region, the
likelihood is slim.
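The receiving side of this filtering can be sketched in plain Java.
This is not the JCS implementation; the class and method names below
are hypothetical, chosen only to show the hashcode comparison the
FilterRemoveByHashCode setting implies.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of hashcode-filtered removes; not JCS API.
public class HashCodeFilteringCache {
    private final Map<String, Object> local = new HashMap<>();

    public void put(String key, Object value) { local.put(key, value); }

    public Object get(String key) { return local.get(key); }

    // Called when a remove message arrives carrying the sender's
    // hashcode for the item.  Keep the local copy if the hashcodes
    // match, assuming the two copies are the same item.
    public void onRemove(String key, int senderHashCode) {
        Object copy = local.get(key);
        if (copy != null && copy.hashCode() == senderHashCode) {
            return; // same hashcode: treat as the same item, keep it
        }
        local.remove(key);
    }

    public static void main(String[] args) {
        HashCodeFilteringCache pt = new HashCodeFilteringCache();
        pt.put("myKey", "myValue");

        // Remove carrying the matching hashcode: the copy survives.
        pt.onRemove("myKey", "myValue".hashCode());
        System.out.println(pt.get("myKey")); // prints myValue

        // Remove carrying a different hashcode: the copy is dropped.
        pt.onRemove("myKey", "otherValue".hashCode());
        System.out.println(pt.get("myKey")); // prints null
    }
}
```

The false-keep case mentioned above corresponds to two unequal objects
whose hashCode() values happen to collide; within a single region of
related objects that collision is unlikely.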


##############################################################
################## PRESENTATION TIER  cache.ccf ##############
# sets the default aux value for any non configured caches
jcs.default=DC,PTLTCP
jcs.default.cacheattributes=org.apache.jcs.engine.CompositeCacheAttributes
jcs.default.cacheattributes.MaxObjects=200001
jcs.default.cacheattributes.MemoryCacheName=org.apache.jcs.engine.memory.lru.LRUMemoryCache
jcs.default.cacheattributes.UseMemoryShrinker=true
jcs.default.cacheattributes.MaxMemoryIdleTimeSeconds=3600
jcs.default.cacheattributes.ShrinkerIntervalSeconds=60
jcs.default.elementattributes=org.apache.jcs.engine.ElementAttributes
jcs.default.elementattributes.IsEternal=false
jcs.default.elementattributes.MaxLifeSeconds=700
jcs.default.elementattributes.IdleTime=1800
jcs.default.elementattributes.IsSpool=true
jcs.default.elementattributes.IsRemote=true
jcs.default.elementattributes.IsLateral=true

# DC
jcs.auxiliary.DC=org.apache.jcs.auxiliary.disk.indexed.IndexedDiskCacheFactory
jcs.auxiliary.DC.attributes=org.apache.jcs.auxiliary.disk.indexed.IndexedDiskCacheAttributes
jcs.auxiliary.DC.attributes.DiskPath=target/test-sandbox/raf
jcs.auxiliary.DC.attributes.MaxPurgatorySize=10000000
jcs.auxiliary.DC.attributes.MaxKeySize=1000000
jcs.auxiliary.DC.attributes.MaxRecycleBinSize=5000
jcs.auxiliary.DC.attributes.OptimizeAtRemoveCount=300000
jcs.auxiliary.DC.attributes.ShutdownSpoolTimeLimit=60

# PTLTCP -- presentation tier lateral, removes on put
jcs.auxiliary.PTLTCP=org.apache.jcs.auxiliary.lateral.socket.tcp.LateralTCPCacheFactory
jcs.auxiliary.PTLTCP.attributes=org.apache.jcs.auxiliary.lateral.socket.tcp.TCPLateralCacheAttributes
jcs.auxiliary.PTLTCP.attributes.TcpListenerPort=1118
jcs.auxiliary.PTLTCP.attributes.UdpDiscoveryAddr=228.5.6.8
jcs.auxiliary.PTLTCP.attributes.UdpDiscoveryPort=6666
jcs.auxiliary.PTLTCP.attributes.UdpDiscoveryEnabled=true
jcs.auxiliary.PTLTCP.attributes.Receive=true
jcs.auxiliary.PTLTCP.attributes.AllowGet=false
jcs.auxiliary.PTLTCP.attributes.IssueRemoveOnPut=true
jcs.auxiliary.PTLTCP.attributes.FilterRemoveByHashCode=true


##############################################################
################## SERVICE  TIER  cache.ccf ##################
# sets the default aux value for any non configured caches
jcs.default=DC,PTLTCP,STLTCP
jcs.default.cacheattributes=org.apache.jcs.engine.CompositeCacheAttributes
jcs.default.cacheattributes.MaxObjects=200001
jcs.default.cacheattributes.MemoryCacheName=org.apache.jcs.engine.memory.lru.LRUMemoryCache
jcs.default.cacheattributes.UseMemoryShrinker=true
jcs.default.cacheattributes.MaxMemoryIdleTimeSeconds=3600
jcs.default.cacheattributes.ShrinkerIntervalSeconds=60
jcs.default.elementattributes=org.apache.jcs.engine.ElementAttributes
jcs.default.elementattributes.IsEternal=false
jcs.default.elementattributes.MaxLifeSeconds=700
jcs.default.elementattributes.IdleTime=1800
jcs.default.elementattributes.IsSpool=true
jcs.default.elementattributes.IsRemote=true
jcs.default.elementattributes.IsLateral=true

# DC
jcs.auxiliary.DC=org.apache.jcs.auxiliary.disk.indexed.IndexedDiskCacheFactory
jcs.auxiliary.DC.attributes=org.apache.jcs.auxiliary.disk.indexed.IndexedDiskCacheAttributes
jcs.auxiliary.DC.attributes.DiskPath=target/test-sandbox/raf
jcs.auxiliary.DC.attributes.MaxPurgatorySize=10000000
jcs.auxiliary.DC.attributes.MaxKeySize=1000000
jcs.auxiliary.DC.attributes.MaxRecycleBinSize=5000
jcs.auxiliary.DC.attributes.OptimizeAtRemoveCount=300000
jcs.auxiliary.DC.attributes.ShutdownSpoolTimeLimit=60

# PTLTCP -- presentation tier lateral, removes on put; does not
# receive from the PT, only sends to it
jcs.auxiliary.PTLTCP=org.apache.jcs.auxiliary.lateral.socket.tcp.LateralTCPCacheFactory
jcs.auxiliary.PTLTCP.attributes=org.apache.jcs.auxiliary.lateral.socket.tcp.TCPLateralCacheAttributes
jcs.auxiliary.PTLTCP.attributes.TcpListenerPort=1118
jcs.auxiliary.PTLTCP.attributes.UdpDiscoveryAddr=228.5.6.8
jcs.auxiliary.PTLTCP.attributes.UdpDiscoveryPort=6666
jcs.auxiliary.PTLTCP.attributes.UdpDiscoveryEnabled=true
jcs.auxiliary.PTLTCP.attributes.Receive=false
jcs.auxiliary.PTLTCP.attributes.AllowGet=false
jcs.auxiliary.PTLTCP.attributes.IssueRemoveOnPut=true
jcs.auxiliary.PTLTCP.attributes.FilterRemoveByHashCode=true

# STLTCP -- this is for inter service tier communication
jcs.auxiliary.STLTCP=org.apache.jcs.auxiliary.lateral.socket.tcp.LateralTCPCacheFactory
jcs.auxiliary.STLTCP.attributes=org.apache.jcs.auxiliary.lateral.socket.tcp.TCPLateralCacheAttributes
jcs.auxiliary.STLTCP.attributes.TcpListenerPort=1118
jcs.auxiliary.STLTCP.attributes.UdpDiscoveryAddr=228.5.6.8
jcs.auxiliary.STLTCP.attributes.UdpDiscoveryPort=7777
jcs.auxiliary.STLTCP.attributes.UdpDiscoveryEnabled=true
jcs.auxiliary.STLTCP.attributes.Receive=true
jcs.auxiliary.STLTCP.attributes.AllowGet=false
jcs.auxiliary.STLTCP.attributes.IssueRemoveOnPut=false


> -----Original Message-----
> From: Niall Gallagher [mailto:[EMAIL PROTECTED]
> Sent: Thursday, February 23, 2006 2:30 PM
> To: JCS Users List
> Subject: RE: JCS as both local and remote cache?
> 
> Thanks for your detailed answer.
> 
> Regarding object compatibility, the value objects which we plan to
> cache are developed in releases and change infrequently. Most
> development work in the company affects how data is processed and
> routed - the structure and fields of data objects themselves tend to
> remain quite static. We are breaking the rules of object oriented
> design by separating behaviour from data, I know, but at least some
> code remains static this way.
> 
> Some background info: Right now, we have about 10 client machines. It
> is not the number of client boxes that causes load for us though,
> it's the volume of data they feed into our systems. We are a message
> routing company. Right now, load is extremely high on our databases
> and routing applications, largely because we repeatedly perform
> expensive DB operations because we employ little caching in general.
> Where we currently employ caching, it is in an ad-hoc
> per-application/per-server manner which means multiple applications
> can cache the same database data separately, get out of sync and need
> to be restarted individually when the underlying database data is
> changed. We are trying to move towards a more coordinated caching
> strategy.
> 
> Most of the data which we would like to cache is common data needed
> across all applications and servers - lookup data (changes
> infrequently and is expensive to load) and account data (is just
> expensive to load).
> 
> We are using JBoss application server to manage DB transactions.
> Since the cache layer will need to be kept in sync with DB as much as
> possible, I want changes to the cache to be tied to the success or
> failure of DB transactions. It makes sense for me to have application
> logic on JBoss machines decide when data should be cached and when it
> should be expired. If a DB stored procedure or application not linked
> to the cache alters DB data, cache data can be updated centrally by
> central applications.
> 
> The architecture for this project right now will be close to what you
> mention: PT*10 -> ST*1 -> DB*4.
> 
> We will likely expand the service tier (ST) layer by adding more
> machines as a cluster, but that's some time away - this project will
> serve as proof of concept for that.
> 
> We will need to install JBoss on every machine in the cluster, so why
> not have the application running in JBoss provide a data caching
> service? This seems like a clean solution to me because we won't need
> to set up standalone remote caches in addition. I think performance
> should be slightly better also, because whenever a JBoss machine
> wants to add something to the cache it will not need to make an RMI
> connection to add it to the remote cache, because the remote cache is
> in-process. As you suggest, we could use lateral replication to
> expand ST cache capacity. This would fit neatly with JBoss clustering
> & distributed transactions.
> 
> I want to cache at the presentation tier (PT) to avoid calling the ST
> over and over, yes. There will be regions defined in all caches (ST &
> PT) which hold company-wide shared data. On PT clients, these regions
> will be read-only (allowPut=false?). There will be some other regions
> configured in PT caches also, which will be used for data specific to
> each PT application. These regions will be read-write and act as
> standalone caches independent of the central server.
> 
> Is it possible to configure JCS to run in-process in an application
> server, as both a local cache to the app server and a remote cache to
> clients? Or are these functions mutually exclusive?
> 
> Many thanks,
> 
> Niall
> 
> 
> 
> On Thu, 2006-02-23 at 11:53 -0600, Smuts, Aaron wrote:
> 
> > How many clients do you have?  How heavy is the load?
> >
> > I don't exactly recommend using JBoss or any other app server, but
> > that is beside the point.  I'll just call your JBoss layer the
> > middle tier or your service tier (ST).  Ideally, you could break
> > this up into a bunch of independent services that would be
> > responsible for some group of data . . . For now let's say that you
> > have a service layer tier that provides data retrieval and storage
> > service to your presentation layer.  If your presentation layer is
> > just a web service layer for a client above it, it doesn't matter.
> > I'll just call it the presentation tier (PT) for now.
> >
> > I suppose you have something like this:
> >
> > PT*10 --> ST*4 --> Database
> >
> > 10 presentation tier boxes that talk to 4 service tier boxes that
> > sit on top of your database.
> >
> > If your data changes a lot, then one thing you can do is just
> > cache at the service tier level. Or just cache the regions that
> > change a lot in the service tier.
> >
> > Then you can just link the mid tier boxes together.  This will
> > allow you to expand with cache replication for some time.  If you
> > expect to need hundreds of mid tier boxes any time soon, then you
> > will need to come up with some other strategy.
> >
> > There are two options here.  First, you can just hook up the
> > lateral cache between the ST boxes.  This is very simple, since
> > with UDP discovery, you don't have to do much at all to get them
> > talking.  They will all replicate their data to each other.  The
> > second major strategy is to use a remote cache server.  You can
> > hook the 4 ST boxes up to a centralized remote cache and share
> > data that way.  You can configure them to issue removes on put, so
> > that only the cache that created the data and the remote server
> > have the data.  Getting the remote cache running properly is a bit
> > tricky, but I'm trying to improve the scripts and documentation
> > right now.
> >
> > I assume that you want to cache at the presentation tier to avoid
> > having to call the service tier over and over for the same data.
> > I would avoid distributing cache data between tiers.  It seems
> > unclean to separate the application and then share the data.  It
> > would also make it more difficult to release one tier without
> > changing the other, since you might make your objects
> > incompatible.  (A good reason to decouple tiers and link them with
> > just XML. . . )
> >
> > If you were to avoid the tier coupling, you could use either of
> > the two strategies used on the ST on the PT.  That is, if you
> > don't have too many PT boxes and you are not pushing thousands of
> > new items into the cache a second, then you could link them with
> > the lateral cache.  If you are relatively low put, then you can
> > scale to more boxes.  You could also put a remote cache server in
> > place for your PT.
> >
> > The flow with the two tiered cache would be like this.
> >
> > PTn1 checks its local cache.  If it is not local or on disk, it
> > could check the PT remote cache server.  If it doesn't have the
> > item, then PTn1 calls STn1.  STn1 would go through the same
> > procedure, except it would go to the database for the item.  If it
> > got the item, it would then update the ST remote cache, or just
> > broadcast the item out to the other Service Tier members (STn1 -->
> > STn2-4, this is done asynchronously).  PTn1 gets the data back
> > from STn1.  PTn1 then puts the item in the cache.  This will
> > result in the item being sent to the PT remote cache server or
> > being sent to the other PT boxes directly if you use the lateral.
> >
> > If you want to share cached data between the tiers, then run a
> > remote server in the middle.  Your firewall configuration will
> > determine where this needs to go.  Make sure to define a
> > serialVersionUID on all of your objects. . . .
> >
> > If you run a remote server in the middle, then make sure that the
> > PT clients do not put into the cache, only get.  You don't want
> > PTn1 putting into the remote cache data it got from STn1, since
> > STn1 just put it in the remote cache.
> >
> > Cheers,
> >
> > Aaron Smuts
> >
> >
> >
> > > -----Original Message-----
> > > From: Niall Gallagher [mailto:[EMAIL PROTECTED]
> > > Sent: Thursday, February 23, 2006 12:15 PM
> > > To: [email protected]
> > > Subject: JCS as both local and remote cache?
> > >
> > > Hi,
> > >
> > > Can anybody answer the following for me?
> > >
> > > I have a JBoss server which I want to use as a central point of
> > > access to a database. Client machines will only be able to write
> > > to the database by calling EJB methods on the JBoss machine.
> > > Client machines will not hold direct connections to the database
> > > themselves.
> > >
> > > I want the JBoss server to use JCS as a caching layer to sit
> > > between all clients and the database. JCS will run inside JBoss.
> > > Also, I want client machines to run JCS locally as a local
> > > cache, configured to use the JBoss central machine as a remote
> > > cache.
> > >
> > > Client machines will not write to the central cache directly. If
> > > they call EJBs on the central JBoss machine to update the
> > > database, the central EJBs will update the central cache with
> > > the new data at the same time. If clients then try to access
> > > data from their local cache, since it is not cached locally, the
> > > client-side JCS cache will download it from the central (remote)
> > > cache automatically. If clients find that required data is not
> > > available locally or centrally, they will call methods on the
> > > JBoss machine to have it loaded into the cache.
> > >
> > > If the JBoss server updates or removes data in the central
> > > cache, JCS will automatically issue 'remove' commands to all
> > > client caches.
> > >
> > > So I think this approach will ensure all caches remain in sync
> > > almost all of the time (ignoring the intricacies of asynchronous
> > > queueing!).
> > >
> > > My question is: How do I configure the central cache?
> > >
> > > If a line of code on the central server reads jcs.put("myKey",
> > > "myValue"); will the central JCS cache automatically issue
> > > remove commands to clients for "myKey"?
> > >
> > > The central server needs to be configured as a local cache for
> > > its own use, but it should also know that it is a remote cache
> > > to clients, and therefore needs to issue these remove commands.
> > >
> > > Any suggestions?
> > >
> > >
> > > By the way, I think JCS is an extremely well written piece of
> > > software.  Hopefully the recent additions to the JCS website
> > > will help it get some more recognition for this, which I think
> > > it deserves.
> >
> >

---------------------------------------------------------------------
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]
