On Nov 7, 2008, at 11:13 PM, Randall Smith wrote:

>
> reflecttable is very monolithic.  Breaking it down into smaller
> components will make it easier to test.  I follow you on adding
> methods to the Dialects and that seems like a good place for them.

> I don't know
> what kind of API sqlalchemy.engine.reflection would have. Is this
> something you could sketch out?

Well, it would start with reflection.py being fairly simple, just
presenting a straightforward way of calling into the dialect for the
information needed, such as dialect.table_names() and so on.  It would
include ways to make calls into these functions using a specific
Connection instance, or in an "autoconnecting" style, which suggests
that calls to the API might look like
"reflection.Inspector(connection=some_connection_or_None).table_names".
Giving it some state like that would also allow it to cache the
results of calls.
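To make that concrete, here's a rough sketch of the shape such an Inspector might take.  All names here (Inspector, FakeDialect, table_names) are hypothetical, illustrating the idea rather than any real API; the fake dialect just counts calls so the caching is visible:

```python
class FakeDialect(object):
    """Stand-in for a real dialect; counts calls so we can see caching."""
    def __init__(self):
        self.calls = 0

    def table_names(self, connection):
        self.calls += 1
        return ['users', 'addresses']


class Inspector(object):
    """Facade over dialect-level reflection calls, optionally bound to a
    Connection.  A connection of None means 'autoconnect' per call."""
    def __init__(self, dialect, connection=None):
        self.dialect = dialect
        self.connection = connection
        self._cache = {}

    def table_names(self):
        # a given Inspector assumes all data can be cached
        if 'table_names' not in self._cache:
            self._cache['table_names'] = self.dialect.table_names(
                self.connection)
        return self._cache['table_names']


dialect = FakeDialect()
insp = Inspector(dialect)
print(insp.table_names())   # ['users', 'addresses']
insp.table_names()          # second call hits the cache
print(dialect.calls)        # 1
```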

The API would progress to a point where it acts somewhat like what
information schema originally promised, although it would be
considerably more coarse-grained.  It would need to return records
that represent tables, columns, indexes, constraints, and everything
else.  It might be nice if it supported a "generative" style like
Query does (I'm really thinking almost a little like jQuery, in fact),
allowing phrases like
"Inspector(conn).table('sometable').constraints(type='foreignkey',
references='foocol')".  Internally, filtering operations like
"references='foocol'" might be performed by fetching all foreign keys
for table 'sometable' and doing an in-Python filter for the specific
record requested - if you look at the extremely minimal options MySQL
gives us for getting foreign key info (namely, we have to regexp the
output of SHOW CREATE TABLE), you'll see why this is necessary.  I
think a given Inspector can assume that all data can be cached.
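A sketch of that generative, fetch-everything-then-filter-in-Python style, under the same caveat that every name here is hypothetical and the fake dialect stands in for whatever SHOW CREATE TABLE parsing would really produce:

```python
class FakeDialect(object):
    """Pretend these records came from parsing SHOW CREATE TABLE."""
    def get_constraints(self, connection, table_name):
        return [
            {'type': 'foreignkey', 'columns': ['foo_id'],
             'references': ['foocol']},
            {'type': 'primarykey', 'columns': ['id'], 'references': []},
        ]


class TableInspector(object):
    def __init__(self, dialect, connection, name):
        self.dialect = dialect
        self.connection = connection
        self.name = name

    def constraints(self, type=None, references=None):
        # fetch *all* constraint records, then filter in Python
        recs = self.dialect.get_constraints(self.connection, self.name)
        if type is not None:
            recs = [r for r in recs if r['type'] == type]
        if references is not None:
            recs = [r for r in recs if references in r['references']]
        return recs


class Inspector(object):
    def __init__(self, dialect, connection=None):
        self.dialect = dialect
        self.connection = connection

    def table(self, name):
        # each generative step returns a new, more specific object
        return TableInspector(self.dialect, self.connection, name)


insp = Inspector(FakeDialect())
fks = insp.table('sometable').constraints(type='foreignkey',
                                          references='foocol')
print(len(fks))   # 1
```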

It's a little up in the air what kinds of records the Inspector would
return - either sqlalchemy.schema.* constructs, or some more
lightweight "Record" objects - but I think it would have to be the
latter (i.e. lightweight Record objects).  Consider that if we
returned sqlalchemy.schema.* objects, these currently assume a
"composed" structure, i.e. tables have columns, foreign keys have a
column, columns have a parent table and foreign keys, etc., and using
them would imply that the reflection API is more like a "mapped
structure" where you get back a Table, then Table.columns loads in the
column objects, etc.  Which is interesting in that it's almost like
Table, Column, etc. become mapped objects, but I think this would be
reaching too far for now... also, the API should introduce minimal
performance overhead.
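By "lightweight Record objects" I'm picturing something as flat as a namedtuple - no parent table, no composed column collection, just the reflected facts.  These particular record types and field names are purely illustrative:

```python
from collections import namedtuple

# Flat, immutable rows of reflection data, as opposed to composed
# schema.Table / schema.Column objects.  Names are hypothetical.
ColumnRecord = namedtuple('ColumnRecord',
                          ['name', 'type', 'nullable', 'default'])
ForeignKeyRecord = namedtuple('ForeignKeyRecord',
                              ['constrained_columns', 'referred_table',
                               'referred_columns'])

col = ColumnRecord(name='id', type='INTEGER', nullable=False, default=None)
fk = ForeignKeyRecord(constrained_columns=['foo_id'],
                      referred_table='foo',
                      referred_columns=['foocol'])

print(col.name)           # id
print(fk.referred_table)  # foo
```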

The structure of this API would drive the current reflection code to
become more componentized.  What we see as a need on the "public
reflection API" side would drive the currently monolithic "reflection"
methods to be changed.  The current reflecttable() methods in turn
would ideally move *out* of the dialects, and the Table(autoload=True)
functionality would simply call up an Inspector and use that.  So the
ultimate outcome is that the individual "reflecttable" methods go
away, and dialects implement a larger set of simpler methods.
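That "larger set of simpler methods" might look something like the interface below - one small method per kind of reflected thing, with a toy in-memory dialect showing the shape of the contract.  Method names here are my guesses, not a settled design:

```python
class ReflectionDialect(object):
    """Hypothetical base interface a dialect would fill in, replacing
    the single monolithic reflecttable()."""
    def get_table_names(self, connection, schema=None):
        raise NotImplementedError

    def get_columns(self, connection, table_name, schema=None):
        raise NotImplementedError

    def get_foreign_keys(self, connection, table_name, schema=None):
        raise NotImplementedError


class ToyDialect(ReflectionDialect):
    """Trivial in-memory 'dialect' demonstrating the contract."""
    def get_table_names(self, connection, schema=None):
        return ['sometable']

    def get_columns(self, connection, table_name, schema=None):
        return [{'name': 'id', 'type': 'INTEGER'},
                {'name': 'foo_id', 'type': 'INTEGER'}]

    def get_foreign_keys(self, connection, table_name, schema=None):
        return [{'constrained_columns': ['foo_id'],
                 'referred_table': 'foo'}]


d = ToyDialect()
print(d.get_table_names(None))   # ['sometable']
```

Table(autoload=True) would then be just another consumer of these methods, via an Inspector, rather than each dialect owning the whole reflection process.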

It's a big job, and to do it really well would take a lot of effort
and iteration.  But it would totally rock.

>
> A project like SA needs access to all of the supported database
> systems for testing.  Currently, it seems that developers are hampered
> they don't have access to these systems.  So I was thinking of ways to
> improve the situation and the idea I like the best is a VPN consisting
> of donated database setups.  There would be a central VPN host and
> donors would connect their donated database servers to this host.
> Developers could then connect to the VPN host and access all of the
> database systems.  With proper firewalling (iptables rules on the
> tun/tap devices) it should be safe for all parties.

we already have a solution for this issue, it's just in need of more
volunteers to contribute working test environments - it's the Python
buildbot.  The current master is here:

http://pylonshq.com:8012/

As you can see on the far right, someone at some point contributed an
MS-SQL bot, but they never got it working (and oddly enough we don't
even know whose server that is - Ben Bangert administers the build
master and he said he didn't know whose it was).  The unit tests need
lots of tweaks so that they all pass for MS-SQL as well as for Oracle,
including marking tests as unsupported, adding ORDER BYs where needed,
ensuring all tables have Sequences available, etc.  Michael Trier has
been working on the MS-SQL side, but he could certainly use more help.

So again, it's something we really need, and there's a clear path, but
it's just a ton of work.

