On May 4, 2010, at 1:59 PM, Chris Withers wrote:

>>> So putting non-expunged objects in something like a beaker cache would be a 
>>> no-no, correct? (would .close() or .remove() fix the problem if the objects 
>>> are already in the cache by the time the .close() or .remove() is called?)
>> In most cases it's actually fine. The file, memcached, database, and dbm 
>> backends all serialize the given object, which means you're only storing a 
>> copy. If the cache is storing things locally to the current session (see 
>> examples/beaker_cache/local_session_caching.py), then you don't want to 
>> expunge the object, since you'd like it to be in the session at the same 
>> time. Only in-memory, non-session-scoped caches have this limitation, such 
>> as when you're using a "memory" backend with beaker.
> 
> OK, thanks. Where can I find good examples of the various ways Beaker can be 
> used with a multi-threaded WSGI app?

those *are* the examples: see examples/beaker_cache.   "multithreaded" doesn't 
change any of the code.
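
To give a sense of the basic shape those examples take (just a sketch, not 
copied from any one file: the get_user/User names, cache settings, and choice 
of the file backend are made up for illustration), it's the usual Beaker 
pattern of pickling a copy into the cache and merging it back out:

from beaker.cache import CacheManager
from beaker.util import parse_cache_config_options

# a file backend: cached values are pickled, so the cache only ever holds
# detached copies, never the live session-bound objects
cache_manager = CacheManager(**parse_cache_config_options({
    'cache.type': 'file',
    'cache.data_dir': '/tmp/cache/data',
    'cache.lock_dir': '/tmp/cache/lock',
}))
user_cache = cache_manager.get_cache('users', expire=300)

def get_user(session, user_id):
    def create():
        # runs only on a cache miss; the result is serialized into the cache
        return session.query(User).get(user_id)
    cached = user_cache.get(key=user_id, createfunc=create)
    # re-attach the detached copy to the current session without a DB hit
    return session.merge(cached, load=False)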

> 
>>> Does the ORM check if the attributes of the cached object are correct or 
>>> would you end up in a situation where you do a query but end up using the 
>>> cached attributes rather than the ones just returned from the db?
>> That all depends on how you get the object from a cache back into your 
>> session. Usually not, since having to hit the DB to verify attributes would 
>> defeat the purpose of a cache. Pretty much only if you used merge() with 
>> load=True.
> 
> I wasn't quite clear; let me try again. So, I've merged an object with 
> load=False. I then do a session.query(ThatObjectsClass).all(), which should 
> include that object. Will the object have the correct attributes or the stale 
> cached ones?

it will have whatever you merged in from the outside (yes, the cached state).  
merge(load=False) copies the incoming attributes unconditionally.  whatever 
attributes aren't present on the incoming object will be loaded from the DB 
when accessed.
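
To spell that out (User and cached_user are placeholders here; cached_user 
stands for a detached copy pulled out of a cache):

# the detached cached copy goes back into the session, no DB round trip
merged = session.merge(cached_user, load=False)

# a later query returns that same merged instance via the identity map...
users = session.query(User).all()
assert merged in users

# ...and its already-loaded attributes keep the cached values rather than
# being overwritten by the row just fetched; anything that was never loaded
# on the cached copy is lazy-loaded from the DB on first access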

> 
>>> - They don't explain what happens to transactions and connections. The 
>>> points for both remove() and close() say "all of its 
>>> transactional/connection resources are closed out"; does this mean database 
>>> connections are closed or just returned to the pool? (I hope the latter!)
>> "closed out" means rolled back and returned to thoe pool.
>>> 
>>> - The point for .commit() states "The full state of the session is expired, 
>>> so that when the next web request is started, all data will be reloaded" 
>>> but your last reply implied this wouldn't always be the case.
>> The instantiated objects that are in the session still stay around as long 
>> as they are referenced externally, but all their attributes are gone, and 
>> the "new" and "deleted" collections are empty.   so all data will be 
>> reloaded.
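
Concretely, that's the default expire_on_commit behavior (User is just a 
placeholder class):

user = session.query(User).first()
session.commit()    # expires all instance state; 'user' itself stays alive
                    # because we still hold a reference to it here
print(user.name)    # touching an attribute issues a fresh SELECT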
>>> 
>>> Also, is it fair game to assume that session.close() rolls back any open 
>>> database transaction? Is there any difference between that rollback and 
>>> calling session.rollback() explicitly?
>> I think the rollback that close() performs might get to the point more 
>> directly internally... but from a connection point of view there's no 
>> difference.
> 
> Thanks for the clarification :-)
> 
>>> Finally, in nosing around session.py, I notice that SessionTransactions can 
>>> be used as context managers. Where can I find good examples of this?
>> you'd be saying "with session.begin():"
> 
> ...and then the session would be committed or rolled back depending on 
> whether an exception was raised in the "with" block or not?

well, it wouldn't be very useful if it didn't check for an exception, so yes, 
it does what you'd expect: commit on a clean exit, rollback on an exception.
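
Roughly like this (a sketch only; it assumes a Session configured with 
autocommit=True, where an explicit begin() is the usual pattern, plus a 
placeholder User class):

# commits on a clean exit from the block
with session.begin():
    session.add(User(name='chris'))

# rolls back (and re-raises) if the block raises
try:
    with session.begin():
        session.add(User(name='oops'))
        raise RuntimeError("something went wrong")
except RuntimeError:
    pass  # that User was not persisted; the transaction was rolled back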

