Re: Cache::Cache locking
On Sun, 23 Dec 2001 04:23:47 +0900 Tatsuhiko Miyagawa <[EMAIL PROTECTED]> wrote:

> > Apache::Singleton::Server got me thinking about Cache::Cache
> > and locking again. if i'm going to have a server-global
> > object, i am going to need to protect against multiple
> > processes updating it simultaneously, right?
>
> Right. Which reminds me that the current Apache::Singleton::Server is
> completely broken :( it doesn't write changed attributes back to the
> shared data in IPC ... will fix.

Apache::Singleton 0.04 is now on its way to CPAN, without the Server
subclass, which was broken and which, in fact, I don't need for my
production environment ;) If you want a Server implementation (with
sufficient speed and robust locking), I'm always open to patches!

0.04  Sun Dec 23 21:20:04 JST 2001
    - Fixed docs
    - Pulled the Server subclass: it was completely broken

The URL

    http://bulknews.net/lib/archives/Apache-Singleton-0.04.tar.gz

has entered CPAN as

  file: $CPAN/authors/id/M/MI/MIYAGAWA/Apache-Singleton-0.04.tar.gz
  size: 2675 bytes
   md5: 3865399d4d8a9b970fd71e2f048de8e3

--
Tatsuhiko
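[A rough sketch, not part of the original message, of what a Server-scoped singleton with the missing write-back step might look like. State lives in a SysV shared-memory segment via IPC::ShareLite, is thawed on every instance() call, and an explicit save() freezes the changed attributes back under an exclusive lock. The class name, the IPC key, and the save() method are assumptions made purely for illustration.]

package My::ServerSingleton;

use strict;
use IPC::ShareLite;
use Storable qw(freeze thaw);
use Fcntl qw(:flock);

# One shared-memory segment for all httpd children (example key only).
my $share = IPC::ShareLite->new(
    -key     => 1971,
    -create  => 'yes',
    -destroy => 'no',
) or die "cannot attach shared memory: $!";

# Always thaw the freshest copy out of shared memory.
sub instance {
    my $class  = shift;
    my $frozen = $share->fetch;
    my $state  = $frozen ? thaw($frozen) : {};
    return bless $state, $class;
}

# The step the broken Server subclass skipped: freeze the changed
# attributes and store them back, guarded by an exclusive lock.
sub save {
    my $self = shift;
    $share->lock(LOCK_EX) or die "lock failed: $!";
    $share->store(freeze({ %$self }));
    $share->unlock;
}

1;

package main;

my $counter = My::ServerSingleton->instance;
$counter->{requests}++;
$counter->save;    # without this, other processes never see the change

[Note that this only restores write-back: the read in instance() and the write in save() do not share one lock, so concurrent updates are still last-one-wins unless the caller holds a lock across the whole read-modify-write, which is the problem the rest of this thread is about.]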
Re: Cache::Cache locking
At 10:55 PM 12/22/2001, brian moseley wrote:

> Apache::Singleton::Server got me thinking about Cache::Cache
> and locking again. if i'm going to have a server-global
> object, i am going to need to protect against multiple
> processes updating it simultaneously, right?
>
> we've already talked about this in regards to sessions. most
> folks seem to feel that "last one wins" is sufficient for
> session data. but what about for objects for which this
> policy is not good enough?
>
> if locking is necessary in some instances, even if we can
> only contrive theoretical examples right now, how might it
> be done in a performant way, especially for objects that can
> be modified multiple times while handling a single request?
> seems like if you synchronized write access to the object
> and caused each process to update its local copy after each
> modification, you'd have a hell of a lot of serialization
> and deserialization going on in each request.
>
> thoughts?

Well, I think it depends on the situation.

In Extropia::Session what we did was set up policies. The default policy is
similar to Apache::Session, but we allow stronger policies if another
application requiring more stringent care of the session data shares the
user session handle and underlying data store.

We ended up separating the concept into two separate policies: a cache
policy and a lock policy. Cache policies are things like no cache, cache
reads, and cache reads and writes (so nothing gets written until the object
is destroyed or flushed manually). Lock policies include no locking (last
one wins), data store locking (the whole cache is locked because attributes
may depend on each other), and attribute-level locking (integrity is
maintained only at the level of individual attribute writes).

These "policies" determine the general way Extropia::Session behaves. Of
course, there are more sophisticated ways of designing an API than an
arbitrary policy. In some cases, locking is something that should be
settable directly. I mentioned that some attributes may depend on each
other: for example, say a session stores one attribute for your savings
account and another for your checking account. Obviously, to perform a funds
transfer within your session you'd want to wrap both attribute changes
inside a lock.

Of course, this sort of lock can be kept separate from the session cache.
But ideally, in order to interact well with previously set session policies,
the automatic locking should work the same way as the explicit locking.

I think if I had to do it over, I would probably not have implemented my own
Session and would instead have reused one of the newer caching mechanisms.
One of the reasons I didn't go with Apache::Session is that I needed more
sophistication than it provided, but I liked it enough that we wrap around
it and provide the extra session features I wanted.

Later,
Gunther
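[To make the funds-transfer example concrete, here is a hypothetical sketch. The $session object and its id/get/set/flush methods are invented for illustration and are not the real Extropia::Session or Apache::Session API; the point is only that both dependent attribute writes, and the flush that persists them, happen inside one exclusive lock.]

use strict;
use Fcntl qw(:flock);

sub transfer_funds {
    my ($session, $amount) = @_;

    # One advisory lock file per session id, shared by all httpd children.
    my $lockfile = '/tmp/session-' . $session->id . '.lock';
    open my $guard, '>', $lockfile or die "cannot open $lockfile: $!";
    flock $guard, LOCK_EX or die "flock: $!";

    # Both attribute changes are applied and written back while the lock
    # is held, so no other process can see the half-finished transfer.
    my $savings  = $session->get('savings_balance');
    my $checking = $session->get('checking_balance');
    die "insufficient funds\n" if $savings < $amount;

    $session->set(savings_balance  => $savings - $amount);
    $session->set(checking_balance => $checking + $amount);
    $session->flush;    # force write-back now, not at object destruction

    close $guard;       # releases the lock
}

[Under a "cache reads and writes" policy the explicit flush is what pushes the data out while the lock is still held, which is one way the automatic and the explicit locking can be made to behave the same way.]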
Re: Cache::Cache locking
On Sat, 22 Dec 2001 06:55:15 -0800 (PST) brian moseley <[EMAIL PROTECTED]> wrote:

> Apache::Singleton::Server got me thinking about Cache::Cache
> and locking again. if i'm going to have a server-global
> object, i am going to need to protect against multiple
> processes updating it simultaneously, right?

Right. Which reminds me that the current Apache::Singleton::Server is
completely broken :( it doesn't write changed attributes back to the
shared data in IPC ... will fix.

--
Tatsuhiko Miyagawa
Re: Cache::Cache locking
At Sat, 22 Dec 2001 06:55:15 -0800 (PST), brian moseley <[EMAIL PROTECTED]> wrote:

> if locking is necessary in some instances, even if we can
> only contrive theoretical examples right now, how might it
> be done in a performant way, especially for objects that can
> be modified multiple times while handling a single request?

I see two basic ways the locking can be done.

One is to create a get_for_update() method that holds an exclusive lock. You
could still call get() for read-only access, but you would not automatically
pick up changes from other processes and thus should not write any data that
you read with a get(). This is dependent on people being careful to always
call get_for_update() if they intend to update data. There is no extra
serialization, and you could do record-level locking with most storage
backends.

The other way is to do optimistic locking with version numbers and throw an
exception if someone else has updated the record you want to update. This is
only useful for a certain class of apps, like data entry screens where
collisions are rare and you can just tell the user to look at the new data
and enter her changes again.

- Perrin
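[A rough sketch of the second approach, written for this digest rather than taken from Perrin's code: a small optimistic store layered over Cache::FileCache, where each record carries a version number and update() dies if another process has bumped the version since the read. The OptimisticStore package, the namespace, and the lock file path are assumptions made for illustration.]

package OptimisticStore;

use strict;
use Cache::FileCache;
use Fcntl qw(:flock);

sub new {
    my ($class, %args) = @_;
    return bless {
        cache    => Cache::FileCache->new({ namespace => $args{namespace} }),
        lockfile => $args{lockfile},
    }, $class;
}

# Return the data plus the version we read, so the caller can commit later.
sub get {
    my ($self, $key) = @_;
    my $record = $self->{cache}->get($key) || { version => 0, data => {} };
    return ($record->{data}, $record->{version});
}

# Commit only if nobody else has bumped the version since our read;
# otherwise throw an exception and let the caller re-read and retry.
sub update {
    my ($self, $key, $data, $version_read) = @_;

    open my $guard, '>', $self->{lockfile} or die "cannot open lock: $!";
    flock $guard, LOCK_EX or die "flock: $!";

    my $current = $self->{cache}->get($key);
    my $current_version = $current ? $current->{version} : 0;
    die "conflict: $key changed since it was read\n"
        if $current_version != $version_read;

    $self->{cache}->set($key, { version => $version_read + 1, data => $data });
    close $guard;    # releases the lock
    return 1;
}

1;

package main;

my $store = OptimisticStore->new(
    namespace => 'Orders',
    lockfile  => '/tmp/orders.lock',
);

my ($order, $version) = $store->get('order:42');
$order->{status} = 'shipped';

# Dies with a conflict error if another process updated order:42 in between.
$store->update('order:42', $order, $version);

[The first approach, get_for_update(), would look like the same flock pattern with the read moved inside the locked region and the lock held until the write completes.]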
Cache::Cache locking
Apache::Singleton::Server got me thinking about Cache::Cache and locking
again. if i'm going to have a server-global object, i am going to need to
protect against multiple processes updating it simultaneously, right?

we've already talked about this in regards to sessions. most folks seem to
feel that "last one wins" is sufficient for session data. but what about
objects for which this policy is not good enough?

if locking is necessary in some instances, even if we can only contrive
theoretical examples right now, how might it be done in a performant way,
especially for objects that can be modified multiple times while handling a
single request? seems like if you synchronized write access to the object
and caused each process to update its local copy after each modification,
you'd have a hell of a lot of serialization and deserialization going on in
each request.

thoughts?
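[For concreteness, a minimal sketch, not from the original thread, of the "synchronize every write" pattern described above, using Cache::FileCache from Cache::Cache and a flock()ed sentinel file as the exclusive lock. Every mutation pays for a full read-back, modify, and re-store of the whole structure (Cache::FileCache serializes complex data via Storable on each set()), which is exactly the per-request serialization cost in question. The namespace, keys, and lock path are made-up examples.]

use strict;
use Cache::FileCache;
use Fcntl qw(:flock);

my $cache = Cache::FileCache->new({ namespace => 'GlobalStats' });

sub locked_update {
    my ($key, $code) = @_;

    # Serialize writers across all httpd children with an exclusive flock.
    open my $guard, '>', '/tmp/global-stats.lock' or die "cannot open lock: $!";
    flock $guard, LOCK_EX or die "flock: $!";

    # Re-read the latest copy, mutate it, and write the whole thing back.
    my $object = $cache->get($key) || {};
    $code->($object);
    $cache->set($key, $object);

    close $guard;    # releases the lock
    return $object;
}

# Three separate updates in one request mean three full
# deserialize/mutate/reserialize round trips through the shared store.
locked_update('hit_stats', sub { $_[0]->{hits}++ });
locked_update('hit_stats', sub { $_[0]->{last_uri} = '/index.html' });
locked_update('hit_stats', sub { $_[0]->{bytes} += 1024 });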