On Jul 28, 2007, at 2:25 PM, MuTPu4 wrote:

> I want to have a global preloaded tree structure for fast access
> (frequent reads and infrequent modifications) that lives for the
> application's lifetime, with the ability to fetch Items as needed. I
> wanted to use some of the nice automation features of SA's ORM (lazy
> loads across relations, automatic DB updates, etc.), so I have come up
> with the following "solution":
>

> [code]
>   #application init
>   session = self.db.create_session( )
>   self.root = session.query( Category ).selectone(
>       Category.c.parent_category_id == None )
>   session.close( )
> [/code]
> [code]
>   #hypothetical code in one of the threads
>   session = self.db.create_session( )
>   new_obj = Category( 'test', directory.root )
>   session.save( new_obj )
>   session.flush( )
>   print new_obj.parent.name
>   len( new_obj.children )
>   subitems = session.query( Item ).select_by(
>       parent_category_id = directory.root.category_id )
>   session.close( )
> [/code]
> Now I see that this approach is not viable as-is.

there's nothing at all wrong with the above; the only adjustments I'd
make are to a. leave a "global" session open for your directory.root,
and b. remove all "cascade" rules from the "parent" relation (in this
case using backref=backref('parent', cascade=None)), so that when you
attach the out-of-session Category as a 'parent' to your new
Category, the child category can get a parent_id value but the parent
Category object is otherwise unaffected.
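
for illustration, a minimal sketch of the mapper setup that advice
assumes (the column names and the self-referential primaryjoin /
remote_side details are guesses based on your code, and the exact
keywords have shifted between SA versions, so adjust as needed); the
key part is cascade=None on the 'parent' backref:

from sqlalchemy.orm import mapper, relation, backref

# assumed schema: category_table with category_id / parent_category_id
mapper(Category, category_table, properties={
    'children': relation(Category,
        primaryjoin=category_table.c.parent_category_id ==
                    category_table.c.category_id,
        backref=backref('parent', cascade=None,
                        remote_side=[category_table.c.category_id]))
})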

If that produces problems, the more "official" way to go about it is  
to merge() the parent Category into your current session:

parent = session.merge(directory.root)

new_obj = Category('test', parent)
...

the merge() operation will create a copy of "parent" within the local  
session.  It will usually issue a fetch operation from the database  
in order to get the most recent data for the particular Category.

After the flush operation, you might also want to expire your
directory.root in order for the new data to be visible:

global_session.expire(directory.root)

Since in this case directory.root is being used as a global cache,  
the global session is the control point for that cache; the expire()  
operation "invalidates" the cache to notify that something has changed.

I should note that if you truly have extremely few write operations,
and you don't mind other requests waiting for a write operation to
finish, it *is* possible to implement application-level
synchronization, typically using a threading.Condition, which allows
any number of simultaneous readers but locks everyone out during a
single write operation.  There is code available in the Beaker utility
module used by Pylons which accomplishes this.  If that were the case,
then you could truly use just one global Session shared among threads.
One reason this comes to mind is that a cache system generally needs
to build synchronization of this nature (Beaker is a caching system).
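
just as an illustration (this is *not* Beaker's actual code), a
reader/writer lock of that nature built on threading.Condition might
look roughly like this:

import threading

class ReadWriteLock(object):
    """any number of concurrent readers, but writes are exclusive."""

    def __init__(self):
        self.cond = threading.Condition()
        self.readers = 0
        self.writing = False

    def acquire_read(self):
        self.cond.acquire()
        try:
            while self.writing:
                self.cond.wait()
            self.readers += 1
        finally:
            self.cond.release()

    def release_read(self):
        self.cond.acquire()
        try:
            self.readers -= 1
            self.cond.notifyAll()
        finally:
            self.cond.release()

    def acquire_write(self):
        self.cond.acquire()
        try:
            while self.writing or self.readers:
                self.cond.wait()
            self.writing = True
        finally:
            self.cond.release()

    def release_write(self):
        self.cond.acquire()
        try:
            self.writing = False
            self.cond.notifyAll()
        finally:
            self.cond.release()

readers wrap their access in acquire_read()/release_read() and the
occasional writer uses acquire_write()/release_write(); note that a
naive version like this can starve writers under constant read load,
which is the kind of detail worth stealing from Beaker rather than
rolling yourself.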

> Maybe there is a way
> to adapt it using modules like SessionContext and assignmapper?

SessionContext and assignmapper are strictly typing-reducers; they  
grant no extra capabilities.
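
i.e. roughly this (from memory, so double-check the exact imports
against your SA version):

from sqlalchemy import create_session
from sqlalchemy.ext.sessioncontext import SessionContext
from sqlalchemy.ext.assignmapper import assign_mapper

ctx = SessionContext(create_session)          # thread-local session registry
assign_mapper(ctx, Category, category_table)  # in place of plain mapper()

# Category.select_by(...), obj.save(), obj.flush() etc. now work without
# an explicit session variable, but the session semantics - including
# merge() and expire() - are exactly the same as described above.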

> 1. Just don't use permanent objects, and fetch an appropriate portion
> of information at every request, relying on the DB's cache and loopback
> network optimizations.

this is what SA is currently more tailored towards: being able to  
load data fresh from the DB with as few queries as possible, thereby  
avoiding the "data freshness" issues that come when second-level  
caches are introduced.  What you're really trying to do here is  
dual-purpose the Session as a second-level cache, which is possible  
as I outlined using merge(), but it's not completely ideal.  SA will  
eventually implement a "real" second-level caching system; it will  
involve producing new object instances on the fly which are bound to  
a particular session; just the actual data for those objects would  
be cached instead of being fetched from the DB (and it also will use  
synchronization similar to that outlined above).


