Michael Bayer wrote:
> You're on the right track.  The reflection methods are always called  
> with a Connection that is not shared among any other thread  
> (connections aren't considered to be threadsafe in any case) so  
> threadsafety is not a concern.
> 
> I think you should look at the mysql dialect sooner rather than later,  
> as its going to present a challenge to the overall notion here.  It  
> has a much more elaborate design already and you're going to want to  
> cache the results of SHOW CREATE TABLE somewhere, and that would be  
> better to be *not* cached on the Connection itself, since that is  
> something which can change over the lifespan of a single connection.   
> I think for the PG _get_table_oid method that needs similar treatment,  
> i.e. that its called only once per reflection operation.

Thanks.  I will look at MySQL next.  Thus far, I've refactored PG and 
MSSQL.  I was able to factor out a good bit of similar code between the 
two, and all unit tests are passing, though MSSQL coverage looks a bit 
thin, so I need to do some more checks there.

> 
> We have a nice decorator that can be used in some cases called  
> @connection_memoize(), but for per-table information I think caching  
> info on the Connection is too long-lived.  I had the notion that the  
> Inspector() object itself might be passed to the more granular  
> reflection methods, where each method could store whatever state it  
> needed into a dictionary like inspector.info_cache - or, perhaps we  
> just pass a dictionary to each method that isn't necessarily tied to  
> anything.    This because I think each dialect has a very different  
> way of piecing together the various elements of Table and Column  
> objects, we'd like to issue as little SQL as possible, and we are  
> looking to build a fine-grained interface.   The methods can therefore  
> do whatever queries they need and store any kind of structure per  
> table in the cache, and then retrieve the information when the new  
> Dialect methods are called.   This means the call for foreign_keys,  
> primary_keys, and columns when using the mysql dialect would issue one  
> SHOW CREATE TABLE, cache all the data, and return what's requested for  
> each call.

I think I follow you.  I haven't started on the public API (Inspector) 
yet, but I think I can implement this strategy in the refactoring: 
reflecttable will create a cache (a dict, info_cache) and pass that cache 
into each of the more granular reflection methods.  Those methods can 
look for, use, and store cached data that will exist only for the 
duration of the reflecttable call.  Then, when I implement Inspector, it 
will do the same and have full control over the cache's lifespan, so 
I'll have no need to set state on the connection.
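To make sure I'm describing the same thing you are, here's a rough sketch of the cache plumbing I have in mind.  The class and method names below are provisional illustrations, not SQLAlchemy's actual dialect API:

```python
# Provisional sketch: reflecttable owns the cache, granular methods share it.
# Names (SketchDialect, _table_info, etc.) are illustrative only.

class SketchDialect:
    def __init__(self):
        self.queries_issued = 0  # counter just to demonstrate the cache working

    def reflecttable(self, connection, table_name):
        # The cache lives only for the duration of this call.
        info_cache = {}
        columns = self.get_columns(connection, table_name, info_cache)
        pkeys = self.get_primary_keys(connection, table_name, info_cache)
        return columns, pkeys

    def _table_info(self, connection, table_name, info_cache):
        # One expensive query per table (think SHOW CREATE TABLE on
        # MySQL), issued once and its parsed result shared by every
        # granular method that needs it.
        key = ('table_info', table_name)
        if key not in info_cache:
            self.queries_issued += 1
            # Stand-in for the real per-dialect query and parse step.
            info_cache[key] = {'columns': ['id', 'name'],
                               'primary_keys': ['id']}
        return info_cache[key]

    def get_columns(self, connection, table_name, info_cache):
        return self._table_info(connection, table_name, info_cache)['columns']

    def get_primary_keys(self, connection, table_name, info_cache):
        return self._table_info(connection, table_name, info_cache)['primary_keys']
```

The point being that get_columns and get_primary_keys both hit the cache, so only one query is issued per table no matter how many granular methods are called.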


> 
> With the caching approach, multiple queries with the same  
> reflection.Inspector object can be made to work nearly as efficiently  
> as the coarse-grained reflecttable() methods do now.

Certainly.  There should be no reason for reduced performance.


Just to make sure we're considering the same plan: I don't plan to make 
any API changes that would cause breakage.  All changes are additions, 
including the public API and some new dialect methods (get_views, 
get_indexes, ...).  Most of the work, as I see it, is in refactoring and 
testing.
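Concretely, the additive shape I'm picturing for the public side is something like the following.  Everything here beyond get_views/get_indexes is a guess on my part, not a committed API:

```python
# Hypothetical sketch of the Inspector facade over the new granular
# dialect methods.  DummyDialect stands in for a real dialect.

class DummyDialect:
    # Stand-in implementations of the proposed new dialect methods.
    def get_views(self, connection, info_cache):
        return ['v_orders']

    def get_indexes(self, connection, table_name, info_cache):
        return [{'name': 'ix_name', 'columns': ['name']}]


class Inspector:
    def __init__(self, connection, dialect):
        self.connection = connection
        self.dialect = dialect
        # The Inspector owns the cache, so it fully controls its lifespan;
        # nothing is stored on the connection itself.
        self.info_cache = {}

    def get_views(self):
        return self.dialect.get_views(self.connection, self.info_cache)

    def get_indexes(self, table_name):
        return self.dialect.get_indexes(
            self.connection, table_name, self.info_cache)
```

Since the existing reflecttable entry points stay untouched and only delegate to the same granular methods, nothing should break for current users.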

--Randall

