At 6:04 PM -0500 12/14/01, Perrin Harkins wrote:
>That's actually a bit different.  That would fail to notice updates between
>processes until the in-memory cache was cleared.  Still very useful for
>read-only data or data that can be out of sync for some period though.


The primary problem with everything mentioned thus far is that it is 
almost entirely built around the concept of a single server. 
Caching schemes that are both fast and work across multiple servers 
and process instances are very hard to find.  After reading the eToys 
article, I decided that BerkeleyDB was worth a look.  We discovered 
that it was very fast and would make a great cache for some of our 
commonly used database queries.  The problem was that we are running 
on 5 different servers, load balanced by Big/IP.  Again, taking my 
cue from the eToys article, I began working on a data exchange system 
(using Perl instead of C) that multicasts data packets to the 
"listening" servers.  So far, our only problems have been deadlocking 
(solved by running BerkeleyDB's db_deadlock utility) and a few corrupt records.
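In case anyone is curious what that looks like, here's a rough sketch of the
sender side, assuming the BerkeleyDB and IO::Socket::Multicast modules from
CPAN.  The multicast group, port, cache path, and packet layout are made up
for illustration and are not the actual code we're running:

  #!/usr/bin/perl
  # Sketch: cache a query result in BerkeleyDB, then announce the update
  # to the other servers over UDP multicast.  Names are hypothetical.
  use strict;
  use BerkeleyDB;
  use IO::Socket::Multicast;
  use Storable qw(nfreeze);

  my $GROUP = '226.1.1.2';   # assumed multicast group
  my $PORT  = 2000;          # assumed port

  # Open (or create) the shared cache environment and database.
  my $env = BerkeleyDB::Env->new(
      -Home  => '/var/cache/bdb',
      -Flags => DB_CREATE | DB_INIT_MPOOL | DB_INIT_LOCK,
  ) or die "env: $BerkeleyDB::Error";

  my $db = BerkeleyDB::Hash->new(
      -Filename => 'query_cache.db',
      -Env      => $env,
      -Flags    => DB_CREATE,
  ) or die "db: $BerkeleyDB::Error";

  sub cache_and_broadcast {
      my ($key, $rows) = @_;        # $rows: arrayref of query result rows
      my $frozen = nfreeze($rows);  # serialize for storage and transport

      # Write to the local cache first.
      $db->db_put($key, $frozen) == 0
          or warn "db_put failed: $BerkeleyDB::Error";

      # Then tell the listeners on the other servers about it.
      # Note: the whole packet has to fit in one UDP datagram, and the
      # key must not contain a NUL byte since it's used as the separator.
      my $sock = IO::Socket::Multicast->new(Proto => 'udp')
          or die "socket: $!";
      $sock->mcast_send(join("\0", $key, $frozen), "$GROUP:$PORT");
  }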

I'm considering uploading the caching code I've written to CPAN, but I 
haven't had the time to make it generic enough.  I also haven't 
performed any benchmarks other than the "wow, that's a lot faster" 
benchmark.  One limitation of what I've written is that the daemons 
(listeners) are non-threaded and non-forking, and the whole thing is 
based on UDP.
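
For completeness, here's roughly what one of those non-forking listeners
looks like: a single blocking recv loop that writes each packet straight
into the local BerkeleyDB file.  Again, a simplified sketch with the same
made-up group, port, and packet layout as above, not the actual daemon:

  #!/usr/bin/perl
  # Sketch: single-process (non-threaded, non-forking) listener that
  # joins the multicast group and applies each update to the local cache.
  use strict;
  use BerkeleyDB;
  use IO::Socket::Multicast;

  my $GROUP = '226.1.1.2';
  my $PORT  = 2000;

  my $env = BerkeleyDB::Env->new(
      -Home  => '/var/cache/bdb',
      -Flags => DB_CREATE | DB_INIT_MPOOL | DB_INIT_LOCK,
  ) or die "env: $BerkeleyDB::Error";

  my $db = BerkeleyDB::Hash->new(
      -Filename => 'query_cache.db',
      -Env      => $env,
      -Flags    => DB_CREATE,
  ) or die "db: $BerkeleyDB::Error";

  my $sock = IO::Socket::Multicast->new(LocalPort => $PORT, ReuseAddr => 1)
      or die "socket: $!";
  $sock->mcast_add($GROUP) or die "mcast_add: $!";

  # One blocking recv at a time -- no threads, no forks -- so a slow
  # db_put delays the next packet (one of the limitations noted above).
  while (1) {
      my $msg;
      next unless defined $sock->recv($msg, 65536);
      my ($key, $frozen) = split /\0/, $msg, 2;   # key must not contain NUL
      next unless defined $frozen;
      $db->db_put($key, $frozen) == 0
          or warn "db_put failed for $key: $BerkeleyDB::Error";
  }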


Rob

--
When I used a Mac, they laughed because I had no command prompt. When 
I used Linux, they laughed because I had no GUI.  