Ciao!

I'm on the modperl digest list, so pardon the delayed response to your
input.

>>>>> "Aaron" == Aaron E Ross <[EMAIL PROTECTED]> writes:

    Aaron>   I've been working with a mod_perl based XML-RPC/SOAP
    Aaron>   service for a few months now and I thought I'd share some
    Aaron>   quick thoughts.

Thanks!  I'll take 'em.
 
    Aaron>   As long as you are _sure_ that you won't be writing data,
    Aaron>   in-memory will be fast and easy to code.  if you use
    Aaron>   objects you can pretty easily build an interface
    Aaron>   encouraging programmers to avoid modifying shared data.

Yes, I refuse to do *any* data arbitration between the localized
portions of the data (if any?) and the remote or official copies of it.
Copying it to a local store is meant to reduce access latency and
improve the availability of the data, although I have not yet determined
there to be latency issues.  There are, however, availability issues
surrounding these databases.  I agree, an OO interface will facilitate
its use and certainly simplify the data format changes that are almost
certain to take place over the life of the service itself.
 
    Aaron>   Aim for what you may need later, 5+ will be easy as long as
    Aaron>   you have some memory.

I'm lobbying for excessive memory as we speak!  Good point.
 
    Aaron>   I have found the Cache::Cache modules really easy to use,
    Aaron>   well written, documented and supported.  MLDBM::Sync
    Aaron>   provides some locking, but if you really need concurrent
    Aaron>   access I would highly recommend BerkeleyDB,
    Aaron>   http://sleepycat.com/, nb this is _not_ DB_File.

Mr. Turner also mentioned the Cache::* modules for this.  I'm not really
interested in locking the data in any way whatsoever.  It's strictly
read-only so I see no reason to have to manage read locks.  Therefore
concurrent access to an in-memory data store between all the modperl
processes should be no problem, correct?
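Right -- for strictly read-only data, the usual mod_perl approach is to load
everything once in the parent process (from startup.pl) before Apache forks,
so every child shares the pages copy-on-write with no locking at all.  A
minimal sketch of the shape I have in mind -- the MyApp::Data package name
and the fields are invented for illustration:

```perl
# MyApp/Data.pm -- hypothetical package; `use` it from startup.pl so the
# data is read into the parent Apache process before the children fork.
package MyApp::Data;

use strict;
use warnings;

# Read-only, in-memory store.  As long as no child ever writes to it,
# the forked children share these pages copy-on-write, so concurrent
# reads across all the mod_perl processes cost nothing extra.
my %STORE;

# Populate the store once, at server startup.
sub load {
    my ($class, %initial) = @_;
    %STORE = %initial;
    return scalar keys %STORE;
}

# Lookup only -- deliberately no setter, to discourage modification.
sub get {
    my ($class, $key) = @_;
    return $STORE{$key};
}

1;
```

In startup.pl you'd call MyApp::Data->load(...) once; the handlers then just
call MyApp::Data->get($key).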

    Aaron>   You haven't explained the cacheing logic at all.. perhaps
    Aaron>   you don't need a cache? maybe just an object that reads
    Aaron>   from the data stores on startup?

This is a good idea.  It would eliminate the interim step of loading the
data onto the local machines prior to bringing it into memory.  I'm
working on data access issues this week and want to be able to describe
the data better as well as gain query access to it.  Then I should be
able to benchmark access and availability, my two primary concerns with
the data.
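For the access benchmarking, Perl's core Benchmark module makes the
comparison cheap to set up.  A rough sketch, with two hypothetical stubs
(local_get and remote_get are stand-ins for the real access paths, and the
simulated latency is made up):

```perl
use strict;
use warnings;
use Benchmark qw(cmpthese);

# Stand-ins for the real access paths -- purely illustrative.
my %local = (key => 'value');

sub local_get  { return $local{key} }    # in-memory hash lookup

sub remote_get {                         # simulated network round-trip
    select(undef, undef, undef, 0.0001); # 4-arg select = sub-second sleep
    return 'value';
}

# Run each sub 1000 times; cmpthese prints a rate-comparison table.
cmpthese(1000, {
    local  => \&local_get,
    remote => \&remote_get,
});
```

Swapping the stubs for the real local store and the remote database should
give hard numbers for the latency question.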

    Aaron>   If you do need a cacheing layer that updates on some event
    Aaron>   or expiration, remember to separate the cacheing logic and
    Aaron>   the storage as much as possible to be able to tune and to
    Aaron>   scale up later on.

I'll keep this in mind.
 
    Aaron>   Why not write a simple object? I try to avoid tie's, b/c
    Aaron>   they are too "magical" for my taste, but i don't think
    Aaron>   there is any inherent overhead.

The object description of the data seems to be a good way to go.  I
believe this will end up being the API I present to the handler for data
access.  I agree.
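Agreed -- a plain accessor object keeps the tie magic out entirely.  A
minimal sketch of the shape such an API might take (the DataRecord class
name and fields are invented for illustration):

```perl
package DataRecord;  # hypothetical class name

use strict;
use warnings;
use Carp qw(croak);

sub new {
    my ($class, %fields) = @_;
    # Copy the fields so callers can't alias into our internals.
    my $self = { %fields };
    return bless $self, $class;
}

# One generic read accessor; no set() on purpose -- the data is
# read-only, so the interface shouldn't invite modification.
sub get {
    my ($self, $field) = @_;
    croak "no such field: $field" unless exists $self->{$field};
    return $self->{$field};
}

1;
```

Usage is just my $rec = DataRecord->new(id => 42); $rec->get('id'); -- no
overhead beyond an ordinary method call, and nothing magical to trip over.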

    Aaron>   I would recommend using SOAP::Lite for both XML-RPC and
    Aaron>   SOAP. While the code is unreadable, the author is
    Aaron>   responsive and helpful, and the switch from XML-RPC to SOAP
    Aaron>   couldn't be easier.

You got the unreadable part right (c:  Sorry.  I've had occasion to use
Randy Ray's RPC::XML module in the past and it functioned very well.  He
describes it as a reference implementation of the XML-RPC specification
and does not attest to its efficiency or speed.  I'll do some comparison
shopping here with SOAP::Lite and see what comes out.

    Aaron>   Hope this helps, Aaron

Thanks for your insight into this matter.  I really appreciate your
input.

Peace.
