Author: asmuts
Date: Fri Jul 14 10:44:36 2006
New Revision: 421961
URL: http://svn.apache.org/viewvc?rev=421961&view=rev
Log:
adding an xdoc describing two new mysql disk cache configuration options
Added:
jakarta/jcs/trunk/xdocs/MySQLDiskCacheProperties.xml
Modified:
jakarta/jcs/trunk/xdocs/RemoteAuxCache.xml
jakarta/jcs/trunk/xdocs/navigation.xml
Added: jakarta/jcs/trunk/xdocs/MySQLDiskCacheProperties.xml
URL:
http://svn.apache.org/viewvc/jakarta/jcs/trunk/xdocs/MySQLDiskCacheProperties.xml?rev=421961&view=auto
==============================================================================
--- jakarta/jcs/trunk/xdocs/MySQLDiskCacheProperties.xml (added)
+++ jakarta/jcs/trunk/xdocs/MySQLDiskCacheProperties.xml Fri Jul 14 10:44:36
2006
@@ -0,0 +1,177 @@
+<?xml version="1.0"?>
+
+<document>
+ <properties>
+ <title>MySQL Disk Cache Configuration</title>
+ <author email="[EMAIL PROTECTED]">Aaron Smuts</author>
+ </properties>
+
+ <body>
+ <section name="MySQL Disk Auxiliary Cache Configuration">
+
+			<p>
+				The MySQL Disk Cache uses all of the JDBC Disk Cache
+				properties, and adds a few of its own. The following
+				properties apply only to the MySQL Disk Cache plugin.
+			</p>
+
+ <subsection name="MySQL Disk Configuration Properties">
+ <table>
+ <tr>
+ <th>Property</th>
+ <th>Description</th>
+ <th>Required</th>
+ <th>Default Value</th>
+ </tr>
+				<tr>
+					<td>optimizationSchedule</td>
+					<td>
+						For now this is a simple comma-delimited list
+						of HH:MM:SS times at which to optimize the
+						table. If none is supplied, then no
+						optimizations will be performed.
+
+						In the future we can add a cron-like scheduling
+						system. This was created to meet a pressing
+						need to optimize fragmented MyISAM tables. When
+						the table becomes fragmented, the shrinker that
+						deletes expired elements starts to take a long
+						time to run.
+
+						Setting the value to "03:01,15:00" will cause
+						the optimizer to run at 3 am and at 3 pm.
+					</td>
+					<td>N</td>
+					<td>null</td>
+				</tr>
+
+				<tr>
+					<td>balkDuringOptimization</td>
+					<td>
+						If this is true, then while JCS is optimizing
+						the table it will return null from get requests
+						and do nothing for put requests.
+
+						If you are using the remote cache and have a
+						failover server configured in a remote cache
+						cluster, and you allow clustered gets, the
+						primary server will act as a proxy to the
+						failover. This way, optimization should have no
+						impact on clients of the remote cache.
+					</td>
+					<td>N</td>
+					<td>true</td>
+				</tr>
+
+ </table>
+ </subsection>
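As a clarifying sketch of the optimizationSchedule format described above: the value is a plain comma-delimited list of times of day. The parser below is a hypothetical illustration using only the JDK, not the code JCS actually uses; the class and method names are invented for this example.

```java
import java.time.LocalTime;
import java.time.format.DateTimeFormatter;
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of parsing a comma-delimited optimizationSchedule
// value such as "03:01,15:00" or "12:34:56,02:34:54". Not JCS internals.
public class OptimizationScheduleSketch {
    public static List<LocalTime> parseSchedule(String schedule) {
        List<LocalTime> times = new ArrayList<>();
        if (schedule == null || schedule.trim().isEmpty()) {
            return times; // no schedule configured: no optimizations run
        }
        for (String entry : schedule.split(",")) {
            String t = entry.trim();
            // Accept both HH:mm and HH:mm:ss entries.
            DateTimeFormatter fmt = t.length() > 5
                    ? DateTimeFormatter.ofPattern("HH:mm:ss")
                    : DateTimeFormatter.ofPattern("HH:mm");
            times.add(LocalTime.parse(t, fmt));
        }
        return times;
    }
}
```

So "03:01,15:00" yields two run times, 3:01 am and 3:00 pm, matching the description in the table.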
+
+ <subsection name="Example Configuration">
+ <source>
+ <![CDATA[
+##############################################################
+################## AUXILIARY CACHES AVAILABLE ################
+# MYSQL disk cache
+jcs.auxiliary.MYSQL=org.apache.jcs.auxiliary.disk.jdbc.mysql.MySQLDiskCacheFactory
+jcs.auxiliary.MYSQL.attributes=org.apache.jcs.auxiliary.disk.jdbc.mysql.MySQLDiskCacheAttributes
+jcs.auxiliary.MYSQL.attributes.userName=sa
+jcs.auxiliary.MYSQL.attributes.password=
+jcs.auxiliary.MYSQL.attributes.url=jdbc:hsqldb:target/cache_hsql_db
+jcs.auxiliary.MYSQL.attributes.driverClassName=org.hsqldb.jdbcDriver
+jcs.auxiliary.MYSQL.attributes.tableName=JCS_STORE_MYSQL
+jcs.auxiliary.MYSQL.attributes.testBeforeInsert=false
+jcs.auxiliary.MYSQL.attributes.maxActive=15
+jcs.auxiliary.MYSQL.attributes.allowRemoveAll=true
+jcs.auxiliary.MYSQL.attributes.MaxPurgatorySize=10000000
+jcs.auxiliary.MYSQL.attributes.optimizationSchedule=12:34:56,02:34:54
+jcs.auxiliary.MYSQL.attributes.balkDuringOptimization=true
+ ]]>
+ </source>
+ </subsection>
+
+ <subsection name="MySQL Disk Event Queue Configuration">
+
+ <table>
+ <tr>
+ <th>Property</th>
+ <th>Description</th>
+ <th>Required</th>
+ <th>Default Value</th>
+ </tr>
+				<tr>
+					<td>EventQueueType</td>
+					<td>
+						This should be either SINGLE or POOLED. By
+						default the single style is used. The single
+						style uses one thread per event queue. That
+						thread is killed whenever the queue is inactive
+						for 30 seconds. Since the disk cache uses an
+						event queue for every region, if you have many
+						regions and they are all active, you will be
+						using many threads. To limit the number of
+						threads, you can configure the disk cache to
+						use the pooled event queue. Using more threads
+						than regions will not add any benefit for the
+						indexed disk cache, since only one thread can
+						read or write at a time for a single region.
+					</td>
+					<td>N</td>
+					<td>SINGLE</td>
+				</tr>
+				<tr>
+					<td>EventQueuePoolName</td>
+					<td>
+						This is the name of the pool to use. It is
+						required if you choose the POOLED event queue
+						type; otherwise it is ignored.
+					</td>
+					<td>Y</td>
+					<td>n/a</td>
+				</tr>
+ </table>
+ </subsection>
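The SINGLE versus POOLED trade-off above can be sketched with plain JDK executors. This is an illustration of the threading idea, not JCS internals: SINGLE gives every region queue its own worker thread, while POOLED has all region queues share one bounded pool, capping the total thread count.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

// Illustrative sketch of SINGLE vs POOLED event queue styles.
public class EventQueueStyleSketch {
    public static int runEvents(int regions, boolean pooled) {
        // POOLED: one shared, bounded pool for all region queues.
        ExecutorService shared = pooled ? Executors.newFixedThreadPool(2) : null;
        List<ExecutorService> queues = new ArrayList<>();
        AtomicInteger processed = new AtomicInteger();
        for (int r = 0; r < regions; r++) {
            // SINGLE: a dedicated thread per region queue.
            ExecutorService queue =
                    pooled ? shared : Executors.newSingleThreadExecutor();
            queues.add(queue);
            queue.submit(processed::incrementAndGet); // a queued disk event
        }
        try {
            if (pooled) {
                shared.shutdown();
                shared.awaitTermination(5, TimeUnit.SECONDS);
            } else {
                for (ExecutorService q : queues) {
                    q.shutdown();
                    q.awaitTermination(5, TimeUnit.SECONDS);
                }
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return processed.get();
    }
}
```

With four active regions, SINGLE uses four worker threads where POOLED uses only two, which is the point of the pooled style.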
+
+		<subsection name="Example Configuration Using Thread Pool">
+ <source>
+ <![CDATA[
+##############################################################
+################## AUXILIARY CACHES AVAILABLE ################
+# MYSQL disk cache
+jcs.auxiliary.MYSQL=org.apache.jcs.auxiliary.disk.jdbc.mysql.MySQLDiskCacheFactory
+jcs.auxiliary.MYSQL.attributes=org.apache.jcs.auxiliary.disk.jdbc.mysql.MySQLDiskCacheAttributes
+jcs.auxiliary.MYSQL.attributes.userName=sa
+jcs.auxiliary.MYSQL.attributes.password=
+jcs.auxiliary.MYSQL.attributes.url=jdbc:hsqldb:target/cache_hsql_db
+jcs.auxiliary.MYSQL.attributes.driverClassName=org.hsqldb.jdbcDriver
+jcs.auxiliary.MYSQL.attributes.tableName=JCS_STORE_MYSQL
+jcs.auxiliary.MYSQL.attributes.testBeforeInsert=false
+jcs.auxiliary.MYSQL.attributes.maxActive=15
+jcs.auxiliary.MYSQL.attributes.allowRemoveAll=true
+jcs.auxiliary.MYSQL.attributes.MaxPurgatorySize=10000000
+jcs.auxiliary.MYSQL.attributes.optimizationSchedule=12:34:56,02:34:54
+jcs.auxiliary.MYSQL.attributes.balkDuringOptimization=true
+jcs.auxiliary.MYSQL.attributes.EventQueueType=POOLED
+jcs.auxiliary.MYSQL.attributes.EventQueuePoolName=disk_cache_event_queue
+
+##############################################################
+################## OPTIONAL THREAD POOL CONFIGURATION #########
+# Disk Cache pool
+thread_pool.disk_cache_event_queue.useBoundary=false
+thread_pool.disk_cache_event_queue.boundarySize=500
+thread_pool.disk_cache_event_queue.maximumPoolSize=15
+thread_pool.disk_cache_event_queue.minimumPoolSize=10
+thread_pool.disk_cache_event_queue.keepAliveTime=3500
+thread_pool.disk_cache_event_queue.whenBlockedPolicy=RUN
+thread_pool.disk_cache_event_queue.startUpSize=10
+ ]]>
+ </source>
+ </subsection>
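Since a .ccf file is an ordinary java.util.Properties file, the thread-pool settings in the example above can be read back by key prefix. The reader class below is a hypothetical illustration, not part of JCS; only the key naming scheme comes from the example.

```java
import java.io.IOException;
import java.io.StringReader;
import java.util.Properties;

// Hypothetical sketch: reading a thread_pool.<name>.<attribute> setting
// from JCS-style properties text. Not a JCS class.
public class ThreadPoolConfigSketch {
    public static int readInt(String ccf, String pool, String attr) {
        Properties props = new Properties();
        try {
            props.load(new StringReader(ccf)); // .ccf is plain Properties syntax
        } catch (IOException e) {
            throw new IllegalArgumentException(e); // cannot happen for in-memory input
        }
        return Integer.parseInt(
                props.getProperty("thread_pool." + pool + "." + attr));
    }
}
```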
+
+ </section>
+ </body>
+</document>
\ No newline at end of file
Modified: jakarta/jcs/trunk/xdocs/RemoteAuxCache.xml
URL:
http://svn.apache.org/viewvc/jakarta/jcs/trunk/xdocs/RemoteAuxCache.xml?rev=421961&r1=421960&r2=421961&view=diff
==============================================================================
--- jakarta/jcs/trunk/xdocs/RemoteAuxCache.xml (original)
+++ jakarta/jcs/trunk/xdocs/RemoteAuxCache.xml Fri Jul 14 10:44:36 2006
@@ -1,113 +1,135 @@
<?xml version="1.0"?>
<document>
- <properties>
- <title>Remote Auxiliary Cache Client / Server</title>
- <author email="[EMAIL PROTECTED]">Pete Kazmier</author>
- <author email="[EMAIL PROTECTED]">Aaron Smuts</author>
- </properties>
+ <properties>
+ <title>Remote Auxiliary Cache Client / Server</title>
+ <author email="[EMAIL PROTECTED]">Pete Kazmier</author>
+ <author email="[EMAIL PROTECTED]">Aaron Smuts</author>
+ </properties>
- <body>
- <section name="Remote Auxiliary Cache Client / Server">
- <p>
- The Remote Auxiliary Cache is an optional plug in for JCS. It
- is intended for use in multi-tiered systems to maintain cache
- consistency. It uses a highly reliable RMI client server
- framework that currently allows for any number of clients. Using a
- listener id allows multiple clients running on the same machine
- to connect to the remote cache server. All cache regions on one
- client share a listener per auxiliary, but register separately.
- This minimizes the number of connections necessary and still
- avoids unnecessary updates for regions that are not configured
- to use the remote cache.
- </p>
- <p>
- Local remote cache clients connect to the remote cache on a
- configurable port and register a listener to receive cache
- update callbacks at a configurable port.
- </p>
- <p>
- If there is an error connecting to the remote server or if an
- error occurs in transmission, the client will retry for a
- configurable number of tires before moving into a
- failover-recovery mode. If failover servers are configured the
- remote cache clients will try to register with other failover
- servers in a sequential order. If a connection is made, the
- client will broadcast all relevant cache updates to the failover
- server while trying periodically to reconnect with the primary
- server. If there are no failovers configured the client will
- move into a zombie mode while it tries to re-establish the
- connection. By default, the cache clients run in an optimistic
- mode and the failure of the communication channel is detected by
- an attempted update to the server. A pessimistic mode is
- configurable so that the clients will engage in active status
- checks.
- </p>
- <p>
- The remote cache server broadcasts updates to listeners other
- than the originating source. If the remote cache fails to
- propagate an update to a client, it will retry for a
- configurable number of tries before de-registering the client.
- </p>
- <p>
- The cache hub communicates with a facade that implements a
- zombie pattern (balking facade) to prevent blocking. Puts and
- removals are queued and occur asynchronously in the background.
- Get requests are synchronous and can potentially block if there
- is a communication problem.
- </p>
- <p>
- By default client updates are light weight. The client
- listeners are configured to remove elements form the local cache
- when there is a put order from the remote. This allows the
- client memory store to control the memory size algorithm from
- local usage, rather than having the usage patterns dictated by
- the usage patterns in the system at large.
- </p>
- <p>
- When using a remote cache the local cache hub will propagate
- elements in regions configured for the remote cache if the
- element attributes specify that the item to be cached can be
- sent remotely. By default there are no remote restrictions on
- elements and the region will dictate the behavior. The order of
- auxiliary requests is dictated by the order in the configuration
- file. The examples are configured to look in memory, then disk,
- then remote caches. Most elements will only be retrieved from
- the remote cache once, when they are not in memory or disk and
- are first requested, or after they have been invalidated.
- </p>
- <subsection name="Client Configuration">
- <p>
- The configuration is fairly straightforward and is done in the
- auxiliary cache section of the <code>cache.ccf</code>
- configuration file. In the example below, I created a Remote
- Auxiliary Cache Client referenced by <code>RFailover</code>.
- </p>
- <p>
- This auxiliary cache will use <code>localhost:1102</code> as
- its primary remote cache server and will attempt to failover
- to <code>localhost:1103</code> if the primary is down.
- </p>
- <p>
- Setting <code>RemoveUponRemotePut</code> to <code>false</code>
- would cause remote puts to be translated into put requests to
- the client region. By default it is <code>true</code>,
- causing remote put requests to be issued as removes at the
- client level. For groups the put request functions slightly
- differently: the item will be removed, since it is no longer
- valid in its current form, but the list of group elements will
- be updated. This way the client can maintain the complete
- list of group elements without the burden of storing all of
- the referenced elements. Session distribution works in this
- half-lazy replication mode.
- </p>
- <p>
- Setting <code>GetOnly</code> to <code>true</code> would cause
- the remote cache client to stop propagating updates to the
- remote server, while continuing to get items from the remote
- store.
- </p>
- <source><![CDATA[
+ <body>
+ <section name="Remote Auxiliary Cache Client / Server">
+			<p>
+				The Remote Auxiliary Cache is an optional plug-in for
+				JCS. It is intended for use in multi-tiered systems to
+				maintain cache consistency. It uses a highly reliable
+				RMI client-server framework that currently allows for
+				any number of clients. Using a listener id allows
+				multiple clients running on the same machine to
+				connect to the remote cache server. All cache regions
+				on one client share a listener per auxiliary, but
+				register separately. This minimizes the number of
+				connections necessary and still avoids unnecessary
+				updates for regions that are not configured to use the
+				remote cache.
+			</p>
+			<p>
+				Local remote cache clients connect to the remote cache
+				on a configurable port and register a listener to
+				receive cache update callbacks at a configurable port.
+			</p>
+			<p>
+				If there is an error connecting to the remote server
+				or if an error occurs in transmission, the client will
+				retry for a configurable number of tries before moving
+				into a failover-recovery mode. If failover servers are
+				configured, the remote cache clients will try to
+				register with the failover servers in sequential
+				order. If a connection is made, the client will
+				broadcast all relevant cache updates to the failover
+				server while trying periodically to reconnect with the
+				primary server. If there are no failovers configured,
+				the client will move into a zombie mode while it tries
+				to re-establish the connection. By default, the cache
+				clients run in an optimistic mode and the failure of
+				the communication channel is detected by an attempted
+				update to the server. A pessimistic mode is
+				configurable so that the clients will engage in active
+				status checks.
+			</p>
+			<p>
+				The remote cache server broadcasts updates to
+				listeners other than the originating source. If the
+				remote cache fails to propagate an update to a client,
+				it will retry for a configurable number of tries
+				before de-registering the client.
+			</p>
+			<p>
+				The cache hub communicates with a facade that
+				implements a zombie pattern (balking facade) to
+				prevent blocking. Puts and removals are queued and
+				occur asynchronously in the background. Get requests
+				are synchronous and can potentially block if there is
+				a communication problem.
+			</p>
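The balking-facade behavior described above can be sketched in a few lines. This is an assumed shape for illustration only, not JCS's actual facade classes: while the channel is down, puts are queued instead of blocking and gets balk by returning null.

```java
import java.util.Map;
import java.util.Queue;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentLinkedQueue;

// Minimal balking-facade sketch (hypothetical, not a JCS class).
public class BalkingFacadeSketch {
    private final Map<String, String> remote = new ConcurrentHashMap<>();
    private final Queue<String[]> pending = new ConcurrentLinkedQueue<>();
    private volatile boolean connected = true;

    public void setConnected(boolean up) {
        connected = up;
        if (up) { // replay updates queued while the channel was down
            String[] op;
            while ((op = pending.poll()) != null) {
                remote.put(op[0], op[1]);
            }
        }
    }

    public void put(String key, String value) {
        if (connected) {
            remote.put(key, value);
        } else {
            pending.add(new String[] { key, value }); // queue, don't block
        }
    }

    public String get(String key) {
        return connected ? remote.get(key) : null; // balk when down
    }
}
```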
+			<p>
+				By default, client updates are lightweight. The client
+				listeners are configured to remove elements from the
+				local cache when there is a put order from the remote.
+				This allows the client memory store to control the
+				memory size algorithm from local usage, rather than
+				having it dictated by the usage patterns of the system
+				at large.
+			</p>
+			<p>
+				When using a remote cache, the local cache hub will
+				propagate elements in regions configured for the
+				remote cache if the element attributes specify that
+				the item to be cached can be sent remotely. By default
+				there are no remote restrictions on elements and the
+				region will dictate the behavior. The order of
+				auxiliary requests is dictated by the order in the
+				configuration file. The examples are configured to
+				look in memory, then disk, then remote caches. Most
+				elements will only be retrieved from the remote cache
+				once: when they are not in memory or on disk and are
+				first requested, or after they have been invalidated.
+			</p>
+ <subsection name="Client Configuration">
+				<p>
+					The configuration is fairly straightforward and is
+					done in the auxiliary cache section of the
+					<code>cache.ccf</code> configuration file. In the
+					example below, I created a Remote Auxiliary Cache
+					Client referenced by <code>RFailover</code>.
+				</p>
+				<p>
+					This auxiliary cache will use
+					<code>localhost:1102</code> as its primary remote
+					cache server and will attempt to failover to
+					<code>localhost:1103</code> if the primary is
+					down.
+				</p>
+				<p>
+					Setting <code>RemoveUponRemotePut</code> to
+					<code>false</code> would cause remote puts to be
+					translated into put requests to the client region.
+					By default it is <code>true</code>, causing remote
+					put requests to be issued as removes at the client
+					level. For groups the put request functions
+					slightly differently: the item will be removed,
+					since it is no longer valid in its current form,
+					but the list of group elements will be updated.
+					This way the client can maintain the complete list
+					of group elements without the burden of storing
+					all of the referenced elements. Session
+					distribution works in this half-lazy replication
+					mode.
+				</p>
+				<p>
+					Setting <code>GetOnly</code> to <code>true</code>
+					would cause the remote cache client to stop
+					propagating updates to the remote server, while
+					continuing to get items from the remote store.
+				</p>
+ <source>
+ <![CDATA[
# Remote RMI Cache set up to failover
jcs.auxiliary.RFailover=
org.apache.jcs.auxiliary.remote.RemoteCacheFactory
@@ -117,12 +139,14 @@
localhost:1102,localhost:1103
jcs.auxiliary.RC.attributes.RemoveUponRemotePut=true
jcs.auxiliary.RFailover.attributes.GetOnly=false
- ]]></source>
- <p>
- This cache region is setup to use a disk cache and the remote
- cache configured above:
- </p>
- <source><![CDATA[
+ ]]>
+ </source>
+				<p>
+					This cache region is set up to use a disk cache
+					and the remote cache configured above:
+				</p>
+ <source>
+ <![CDATA[
#Regions preconfigured for caching
jcs.region.testCache1=DC,RFailover
jcs.region.testCache1.cacheattributes=
@@ -130,27 +154,37 @@
jcs.region.testCache1.cacheattributes.MaxObjects=1000
jcs.region.testCache1.cacheattributes.MemoryCacheName=
org.apache.jcs.engine.memory.lru.LRUMemoryCache
- ]]></source>
- </subsection>
- <subsection name="Server Configuration">
- <p>
- The remote cache configuration is growing. For now, the
- configuration is done at the top of the
- <code>remote.cache.ccf</code> file. The
- <code>startRemoteCache</code> script passes the configuration
- file name to the server when it starts up. The configuration
- parameters below will create a remote cache server that
- listens to port <code>1102</code> and performs call backs on
- the <code>remote.cache.service.port</code>, also specified as
- port <code>1102</code>.
- </p>
- <p>
- The tomcat configuration section is evolving. If
- <code>remote.tomcat.on</code> is set to <code>true</code> an
- embedded tomcat server will run within the remote cache,
- allowing the use of management servlets.
- </p>
- <source><![CDATA[
+ ]]>
+ </source>
+ </subsection>
+ <subsection name="Server Configuration">
+				<p>
+					The remote cache configuration is growing. For
+					now, the configuration is done at the top of the
+					<code>remote.cache.ccf</code> file. The
+					<code>startRemoteCache</code> script passes the
+					configuration file name to the server when it
+					starts up. The configuration parameters below will
+					create a remote cache server that listens to port
+					<code>1102</code> and performs callbacks on the
+					<code>remote.cache.service.port</code>, also
+					specified as port <code>1102</code>.
+				</p>
+				<p>
+					The Tomcat configuration section is evolving. If
+					<code>remote.tomcat.on</code> is set to
+					<code>true</code>, an embedded Tomcat server will
+					run within the remote cache, allowing the use of
+					management servlets.
+				</p>
+ <source>
+ <![CDATA[
# Registry used to register and provide the
# IRemoteCacheService service.
registry.host=localhost
@@ -159,26 +193,32 @@
remote.cache.service.port=1102
# cluster setting
remote.cluster.LocalClusterConsistency=true
- ]]></source>
- <p>
- Remote servers can be chainied (or clustered). This allows
- gets from local caches to be distributed between multiple
- remote servers. Since gets are the most common operation for
- caches, remote server chaining can help scale a caching solution.
- </p>
- <p>
- The <code>LocalClusterConsistency</code>
- setting tells the remote cache server if it should broadcast
- updates received from other cluster servers to registered
- local caches.
- </p>
- <p>
- To use remote server clustering, the remote cache will have to
- be told what regions to cluster. The configuration below will
- cluster all non-preconfigured regions with
- <code>RCluster1</code>.
- </p>
- <source><![CDATA[
+ ]]>
+ </source>
+				<p>
+					Remote servers can be chained (or clustered). This
+					allows gets from local caches to be distributed
+					between multiple remote servers. Since gets are
+					the most common operation for caches, remote
+					server chaining can help scale a caching solution.
+				</p>
+ </p>
+				<p>
+					The <code>LocalClusterConsistency</code> setting
+					tells the remote cache server whether it should
+					broadcast updates received from other cluster
+					servers to registered local caches.
+				</p>
+				<p>
+					To use remote server clustering, the remote cache
+					will have to be told what regions to cluster. The
+					configuration below will cluster all
+					non-preconfigured regions with
+					<code>RCluster1</code>.
+				</p>
+ <source>
+ <![CDATA[
# sets the default aux value for any non configured caches
jcs.default=DC,RCluster1
jcs.default.cacheattributes=
@@ -193,38 +233,48 @@
jcs.auxiliary.RCluster1.attributes.RemoveUponRemotePut=false
jcs.auxiliary.RCluster1.attributes.ClusterServers=localhost:1103
jcs.auxiliary.RCluster1.attributes.GetOnly=false
- ]]></source>
- <p>
- RCluster1 is configured to talk to
- a remote server at <code>localhost:1103</code>. Additional
- servers can be added in a comma separated list.
- </p>
- <p>
- If we startup another remote server listening to port 1103,
- (ServerB) then we can have that server talk to the server we have
- been configuring, listening at 1102 (ServerA). This would allow us
- to set some local caches to talk to ServerA and some to talk
- to ServerB. The two remote servers will broadcast
- all puts and removes between themselves, and the get requests
- from local caches could be divided. The local caches do not
- need to know anything about the server chaining configuration,
- unless you want to use a standby, or failover server.
- </p>
- <p>
- We could also use ServerB as a hot standby. This can be done in
- two ways. You could have all local caches point to ServerA as
- a primary and ServerB as a secondary. Alternatively, you can
- set ServerA as the primary for some local caches and ServerB for
- the primary for some others.
- </p>
- <p>
- The local cache configuration below uses ServerA as a primary and
- ServerB as a backup. More than one backup can be defined, but
- only one will be used at a time. If the cache is connected
- to any server except the primary, it will try to restore the
- primary connection indefinitely, at 20 second intervals.
- </p>
- <source><![CDATA[
+ ]]>
+ </source>
+				<p>
+					RCluster1 is configured to talk to a remote server
+					at <code>localhost:1103</code>. Additional servers
+					can be added in a comma-separated list.
+				</p>
+				<p>
+					If we start up another remote server listening on
+					port 1103 (ServerB), then we can have that server
+					talk to the server we have been configuring,
+					listening on 1102 (ServerA). This would allow us
+					to set some local caches to talk to ServerA and
+					some to talk to ServerB. The two remote servers
+					will broadcast all puts and removes between
+					themselves, and the get requests from local caches
+					could be divided. The local caches do not need to
+					know anything about the server chaining
+					configuration, unless you want to use a standby,
+					or failover, server.
+				</p>
+				<p>
+					We could also use ServerB as a hot standby. This
+					can be done in two ways. You could have all local
+					caches point to ServerA as a primary and ServerB
+					as a secondary. Alternatively, you can set ServerA
+					as the primary for some local caches and ServerB
+					as the primary for the others.
+				</p>
+				<p>
+					The local cache configuration below uses ServerA
+					as a primary and ServerB as a backup. More than
+					one backup can be defined, but only one will be
+					used at a time. If the cache is connected to any
+					server except the primary, it will try to restore
+					the primary connection indefinitely, at 20 second
+					intervals.
+				</p>
+ <source>
+ <![CDATA[
# Remote RMI Cache set up to failover
jcs.auxiliary.RFailover=
org.apache.jcs.auxiliary.remote.RemoteCacheFactory
@@ -234,15 +284,9 @@
localhost:1102,localhost:1103
jcs.auxiliary.RC.attributes.RemoveUponRemotePut=true
jcs.auxiliary.RFailover.attributes.GetOnly=false
- ]]></source>
- <p>
- Note: Since, as of now, the remote cluster servers do not attempt to get items
- from each other, when the primary server comes up, if it does not
- have a disk store, it will be cold. When clustered gets are enable
- or when we have a load all on startup option, this problem
- will be solved.
- </p>
- </subsection>
- </section>
- </body>
+ ]]>
+ </source>
+ </subsection>
+ </section>
+ </body>
</document>
Modified: jakarta/jcs/trunk/xdocs/navigation.xml
URL:
http://svn.apache.org/viewvc/jakarta/jcs/trunk/xdocs/navigation.xml?rev=421961&r1=421960&r2=421961&view=diff
==============================================================================
--- jakarta/jcs/trunk/xdocs/navigation.xml (original)
+++ jakarta/jcs/trunk/xdocs/navigation.xml Fri Jul 14 10:44:36 2006
@@ -49,6 +49,8 @@
href="/JDBCDiskCache.html" />
<item name="JDBC Disk Properties"
href="/JDBCDiskCacheProperties.html" />
+ <item name="MySQL Disk Properties"
+ href="/MySQLDiskCacheProperties.html" />
<item name="Remote Cache"
href="/RemoteAuxCache.html" />
<item name="Remote Cache Properties"
href="/RemoteCacheProperties.html" />