Looks interesting. Is the class you wrote an extension of the existing classes/interfaces? If so, contributing is simple: post a copy of your code to the commons-dev mailing list.
I or another committer will review and commit the code. (I assume you have some JUnit tests showing off all the nice features.)


All contributed code must be licensed under the Apache License (please add the ASF header to new files) with no other copyrights (remove your own copyright notice from the copy you are submitting).
The version you submit is then owned by the ASF (licensed under the Apache License 2.0), and you will be listed as a contributor on the cache website.


Cheers
Dirk


Matthias Nott wrote:

Y'all,

I have here some Cache class code that I've written and used over
the last two years and that I'd like to contribute. The features are
listed below. I wonder whether this might fit into commons cache;
if so, drop me a note.


Thanks and keep up the good work,

M

/**
 * This class provides the user with an object cache.
 * <p>
 * This cache has the general structure of an optimized queue (FIFO). The
 * following rules apply:
 * <ul>
 * <li>When an entry is added to the cache, it is added to the head of the
 *     queue.</li>
 * <li>If the maximum number of entries in the queue is reached, adding an
 *     entry to the head at the same time moves out one object from the tail.
 *     The cache size can be controlled by the setCacheSize function. The
 *     default cache size is 1000 entries.</li>
 * <li>Hitting any entry in the cache moves it back to the head of the queue.
 *     This way, more frequently accessed entries are more likely to remain
 *     in the cache.</li>
 * <li>When a maximum number of cache modifications has been reached, the
 *     entire cache is flushed. This can be controlled by the setResetAfter
 *     function. The default threshold is 1000000 structural cache
 *     modifications.</li>
 * <li>Each cache entry gets a standard lifetime, which is "forever" unless
 *     you use the setLifetime function. If you use that function, or choose
 *     a particular setting for one given entry using the more detailed
 *     version of the setEntry function, expired entries will be cleaned up
 *     either by a cleaner thread that is started automatically when you
 *     instantiate the cache, or by the fact that you access an expired
 *     item before the next run of the cleaner thread. The default interval
 *     for the cleaner is 1 minute. This can be controlled by the
 *     setUpdateInterval function.</li>
 * <li>Each cache entry is encapsulated in a SoftReference. This means that
 *     when running into an out-of-memory situation, some entries may be
 *     moved out of the cache by the garbage collector. As you have no
 *     control whatsoever over which entries are removed, it is important
 *     to understand that <b>you must not rely on an entry being in the
 *     cache only because you had just put it there.</b> Each time you
 *     access the cache using the getEntry function, you must check whether
 *     you received a null object; if so, you have to recreate the object
 *     from scratch yourself - and add it back to the cache if you
 *     want.</li>
 * <li>It is possible to compress the cache content if you have huge but
 *     redundant objects to store. To do so, use the setCompressed function
 *     to activate this setting for the entire cache, or activate it only
 *     for single calls to setEntry in its detailed form. When getting an
 *     object from the cache, the cache takes care of decompressing the
 *     entry, so you do not have to worry about whether an entry had been
 *     compressed previously. Note that the compression takes place in
 *     memory and may put a severe load on the CPU.</li>
 * <li>When compressing, it is possible to activate a double buffer for
 *     single entries or for the entire cache using the setDoublebuffer
 *     function. When doing so, storing and hitting a cache entry will
 *     automatically try to maintain a SoftReference to the uncompressed
 *     form, considerably speeding up subsequent hits to the same entry.
 *     The downside, however, is that this decompressed entry now competes
 *     for memory with all other cache entries, whether compressed or not.
 *     So make sure to monitor, e.g. using the getCacheHit vs. getCacheMiss
 *     functions, whether the setting renders the entire cache inactive,
 *     such that hitting a huge cache entry forces all other cache entries
 *     out of the cache immediately. The cache statistics can be reset by
 *     the resetStatistics function.</li>
 * <li>It is very probable that using the double buffer and the compression
 *     as an overall cache setting does not make sense. All tests undertaken
 *     so far show that the double buffer takes up at least as much space
 *     as, if not more than, the uncompressed objects would take, thus
 *     moving objects out of the cache that should have been kept there.
 *     On the other hand, double buffering and compression make a lot of
 *     sense on isolated cache entries. Think of getting a list of values
 *     that <i>may</i> be bigger than a given threshold that you define;
 *     in this case, you use the detailed version of the setEntry function
 *     and store only this entry in compressed form, activating the double
 *     buffer at the same time. If the object is accessed frequently, the
 *     double buffer will eliminate subsequent decompressions and trade
 *     them for space; if the object is accessed rarely, the double buffer
 *     is more likely to be moved out of the cache by the garbage
 *     collector, as it can be considered to take more space than the
 *     "other" entries.</li>
 * <li>It is possible to persist a cache on disk using the save function.
 *     The cache is compressed when doing so. See the documentation for
 *     that function vs. the load function for considerations affecting
 *     the compression, double buffer and lifetime settings of a given
 *     cache entry if these are not in line with the overall settings of
 *     the cache. Basically, saving and restoring the cache applies the
 *     overall settings to all objects in the cache; in addition, due to
 *     the many factors that may influence the content of the cache, you
 *     cannot rely on the assumption that loading a cache will lead to
 *     exactly the same content as it had when it was saved. Loading the
 *     cache adds the entries back as fresh entries, with the cache-wide
 *     compression and double buffering settings, so already while loading
 *     the cache, the garbage collector may step in and remove objects
 *     from the cache when running into an out-of-memory situation.</li>
 * <li>Particular emphasis was placed on making this cache thread safe.
 *     Every structural modification is synchronized. Since some
 *     modifications are detectable only when accessing the cache, the
 *     accessors are synchronized as well: when they detect an invalid
 *     SoftReference pointer, they immediately move that SoftReference
 *     object out of the cache.</li>
 * </ul>
 * @y.author Copyright (c) 2003 Matthias Nott
 */
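The head/tail queue behavior described in the first three bullets (insert at the head, evict from the tail, move an entry back to the head on every hit) is essentially access-ordered LRU eviction. A minimal, self-contained sketch of that policy - the class name QueueCacheSketch is my own, and this is not the internals of the contributed class, which are not shown here - could look like:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch of the queue behavior described above: accessOrder=true makes
// every hit move the entry to the most-recently-used position (the
// "head"), and removeEldestEntry evicts the least-recently-used entry
// (the "tail") once maxSize is exceeded.
public class QueueCacheSketch<K, V> extends LinkedHashMap<K, V> {
    private final int maxSize;

    public QueueCacheSketch(int maxSize) {
        super(16, 0.75f, true); // true = iterate in access order, not insertion order
        this.maxSize = maxSize;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > maxSize; // called on each put; evicts the "tail" entry
    }
}
```

For example, with a capacity of 2: putting a and b, then hitting a, then putting c evicts b, because the hit moved a back to the head of the queue.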
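The null-check contract the Javadoc imposes on getEntry callers (a SoftReference may be cleared by the garbage collector at any time, so null means "rebuild from scratch") follows the standard java.lang.ref pattern. A sketch of that get-or-recreate idiom, assuming nothing about the contributed class itself (SoftCacheSketch and getOrRecreate are hypothetical names of mine):

```java
import java.lang.ref.SoftReference;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Supplier;

// Sketch of the get-or-recreate pattern the Javadoc requires of callers:
// a SoftReference may be cleared under memory pressure, so a null result
// means the value must be rebuilt - and re-cached, if desired.
public class SoftCacheSketch {
    private final Map<String, SoftReference<String>> map = new ConcurrentHashMap<>();

    public void setEntry(String key, String value) {
        map.put(key, new SoftReference<>(value));
    }

    public String getOrRecreate(String key, Supplier<String> recreate) {
        SoftReference<String> ref = map.get(key);
        String value = (ref == null) ? null : ref.get();
        if (value == null) {          // never cached, or cleared by the GC
            value = recreate.get();   // rebuild from scratch ...
            setEntry(key, value);     // ... and add it back to the cache
        }
        return value;
    }
}
```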
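The in-memory compression described for setCompressed (compress on store, transparently decompress on retrieval, trading CPU for space) can be illustrated with the JDK's own gzip streams; this is only a sketch of the technique, not the contributed class's code, and GzipSketch is a name I made up:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.util.zip.GZIPInputStream;
import java.util.zip.GZIPOutputStream;

// Sketch of in-memory entry compression: the payload is gzipped on store
// and gunzipped on retrieval, entirely in memory - which is exactly why
// the Javadoc warns about CPU load for large entries.
public final class GzipSketch {
    static byte[] compress(byte[] data) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (GZIPOutputStream gz = new GZIPOutputStream(bos)) {
            gz.write(data);
        }
        return bos.toByteArray();
    }

    static byte[] decompress(byte[] packed) throws IOException {
        try (GZIPInputStream gz = new GZIPInputStream(new ByteArrayInputStream(packed))) {
            return gz.readAllBytes(); // Java 9+
        }
    }
}
```

This only pays off for the "huge but redundant" objects the Javadoc mentions: highly repetitive data compresses well, while random data may even grow.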





