[sqlalchemy] emulating spreadsheet in db

2008-11-08 Thread Faheem Mitha


Hi,

I've got some genome data, and I'm trying to move it into a db.

The data looks like

Patient   FOOSNP   BARSNP   ...
Tom       AA       AT
John      AT       AA
...

These columns correspond to SNPs
(http://en.wikipedia.org/wiki/Single_nucleotide_polymorphism). Note there
are a lot of columns. The question is what the best approach is to emulate
such a setup. I could not find much information about this scenario. The
approach I went for was to have the row info and column info in separate
tables, and then have a linker table containing foreign keys pointing to a
row, a column, and a SNP value.

Then this will look something like

PATIENT   SNP      SNPVAL
John      Foosnp   AA
Tom       Barsnp   AT

This essentially maps the cartesian product of patient and snp to snpval.

function: PATIENT x SNP -> SNPVAL

Is this a reasonable way to approach this? If so, is there some way to 
tell the mapper what kind of relationship this table is trying to define?

 Thanks in advance, Faheem.

*

from sqlalchemy import *
from sqlalchemy.orm import *

metadata = MetaData('sqlite:///data.sqlite')

patient_table = Table(
    'patient', metadata,
    Column('id', String(100), primary_key=True, index=True),
    Column('sex', String(1)),
    )

snp_table = Table(
    'snp', metadata,
    Column('name', String(20), nullable=False, primary_key=True),
    )

snpval_table = Table(
    'snpval', metadata,
    Column('id', Integer, primary_key=True),
    Column('val', String(2), nullable=False),
    )

cell_table = Table(
    'patient_snpval', metadata,
    # (patient, snp) as composite primary key: the mapper requires a
    # primary key, and each (patient, snp) cell occurs at most once
    Column('patient', None,
           ForeignKey('patient.id', onupdate='CASCADE', ondelete='CASCADE'),
           primary_key=True, index=True, nullable=False),
    Column('snp', None,
           ForeignKey('snp.name', onupdate='CASCADE', ondelete='CASCADE'),
           primary_key=True, index=True, nullable=False),
    Column('snpval', None,
           ForeignKey('snpval.id', onupdate='CASCADE', ondelete='CASCADE'),
           index=True, nullable=False),
    )

metadata.create_all()

class Patient(object): pass
class Snp(object): pass
class Snpval(object): pass
class Cell(object): pass

mapper(Patient, patient_table)
mapper(Snp, snp_table)
mapper(Snpval, snpval_table)
mapper(Cell, cell_table)

Session = sessionmaker()
session = Session()
session.commit()
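The schema above can also be exercised end-to-end with plain SQL; a minimal sketch using Python's stdlib sqlite3 (in-memory database, made-up sample rows) showing the PATIENT x SNP -> SNPVAL lookup:

```python
import sqlite3

# In-memory stand-in for the schema above: the linker table holds one
# "spreadsheet cell" per (patient, snp) pair, pointing at a snpval.
conn = sqlite3.connect(':memory:')
conn.executescript("""
    CREATE TABLE patient (id TEXT PRIMARY KEY, sex TEXT);
    CREATE TABLE snp (name TEXT PRIMARY KEY);
    CREATE TABLE snpval (id INTEGER PRIMARY KEY, val TEXT NOT NULL);
    CREATE TABLE patient_snpval (
        patient TEXT NOT NULL REFERENCES patient(id),
        snp TEXT NOT NULL REFERENCES snp(name),
        snpval INTEGER NOT NULL REFERENCES snpval(id),
        PRIMARY KEY (patient, snp)
    );
""")
conn.executemany("INSERT INTO patient VALUES (?, ?)",
                 [('Tom', 'M'), ('John', 'M')])
conn.executemany("INSERT INTO snp VALUES (?)", [('Foosnp',), ('Barsnp',)])
conn.executemany("INSERT INTO snpval VALUES (?, ?)",
                 [(1, 'AA'), (2, 'AT')])
conn.executemany("INSERT INTO patient_snpval VALUES (?, ?, ?)",
                 [('Tom', 'Foosnp', 1), ('Tom', 'Barsnp', 2),
                  ('John', 'Foosnp', 2), ('John', 'Barsnp', 1)])

# Look up one cell of the "spreadsheet": PATIENT x SNP -> SNPVAL
row = conn.execute("""
    SELECT sv.val FROM patient_snpval c
    JOIN snpval sv ON sv.id = c.snpval
    WHERE c.patient = ? AND c.snp = ?
""", ('John', 'Foosnp')).fetchone()
print(row[0])  # AT
```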

--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalchemy@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en
-~--~~~~--~~--~--~---



[sqlalchemy] Using deferred with declarative

2008-11-08 Thread Allen Bierbaum

We have been using declarative successfully in our codebase for a
couple of months now with 0.4.x, but we have just run into a problem.

We have a table we want to map using declarative, but we want one of
the columns to be deferred because it contains binary data.
Unfortunately we can't figure out how to do this.

We tried something like this:

class Person(Base):
    __tablename__ = 'people'
    id = Column('id', Integer, primary_key=True)
    data = Column('data', Binary)
    __mapper_args__ = {'properties': {'data': sa.orm.deferred(data)}}

The problem is that when the declarative metaclass assembles the
arguments to pass to the mapper creation method, this causes two
arguments of name 'properties' to be passed into the call.  (see
DeclarativeMeta.__init__)

- Is this a bug in the implementation of declarative?  (perhaps the
code in DeclarativeMeta.__init__ should look for
__mapper_args__['properties'] and merge it with the internally created
properties)
- Or is there some other way to use deferred columns with declarative?

Thanks,
Allen




[sqlalchemy] Re: schema inspection api

2008-11-08 Thread Michael Bayer


On Nov 7, 2008, at 11:13 PM, Randall Smith wrote:


 reflecttable is very monolithic.  Breaking it down into smaller
 components will make it easier to test.  I follow you on adding
 methods to the Dialects and that seems like a good place for them.

 I don't know what kind of API sqlalchemy.engine.reflection would
 have.  Is this something you could sketch out?

well it would start by reflection.py being fairly simple and just
presenting a straightforward way of calling into the dialect for
information needed, such as dialect.table_names() and such.  It would
include ways to make calls into these functions using a specific
Connection instance, or in an autoconnecting style, which suggests
that calls to the API might look like
reflection.Inspector(connection=some_connection_or_None).table_names.
Giving it some state like that would also allow it to cache the
results of calls.

The API would progress to a point where it does act somewhat like
what information schema originally promised, although it would be
considerably more coarse grained.  It would need to return records
that represent tables, columns, indexes, constraints, and everything
else.  It might be nice if it supported a generative style like
Query does (I'm really thinking almost a little like jQuery, in fact),
allowing phrases like
Inspector(conn).table('sometable').constraints(type='foreignkey',
references='foocol').  Internally, filtering operations like
references='foocol' might be performed by fetching all foreign keys
for table 'sometable' and doing an in-python filter for the specific
record requested - if you look at the extremely minimal options MySQL
gives us for getting foreign key info (namely, we have to regexp the
output of SHOW CREATE TABLE), you'll see how this is necessary.  I
think a given Inspector can assume that all data can be cached.

It's a little up in the air what kinds of records the Inspector would
return, either sqlalchemy.schema.* constructs or some more
lightweight Record objects, but I think it would have to be the
latter (i.e. lightweight Record objects).  Consider if we returned
sqlalchemy.schema.* objects: these currently assume a composed
structure, i.e. tables have columns, foreign keys have a column,
columns have a parent table and foreign keys, etc., and using them
would imply that the reflection API is more like a mapped structure
where you get back a Table, then Table.columns loads in the column
objects, etc.  Which is interesting in that it's almost like Table,
Column etc. become mapped objects, but I think this would be reaching
too far for now...also the API should introduce minimal performance
overhead.

The structure of the API would drive the current reflection API to
become more componentized.  What we see as a need on the public
reflection API side would drive the currently monolithic reflection
methods to be changed.  The current reflecttable() methods in turn
would ideally move *out* of the dialects, and the Table(autoload=True)
functionality would simply call up an Inspector and use that.  So the
ultimate outcome is that the individual reflecttable methods go
away, and dialects implement a larger set of simpler methods.

It's a big job, and to do it really well would take a lot of effort
and iteration.  But it would totally rock.
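A toy sketch of the generative, caching Inspector idea described above (all class and method names here are hypothetical - no such API existed at the time - with a stub standing in for a real dialect):

```python
class Inspector(object):
    """Hypothetical generative inspector: caches raw dialect calls,
    then filters records in Python (per the MySQL SHOW CREATE TABLE
    point above)."""

    def __init__(self, dialect, table_name=None):
        self.dialect = dialect
        self.table_name = table_name
        self._cache = {}

    def table(self, name):
        # generative: return a new Inspector scoped to one table,
        # sharing the cache so repeated calls hit the dialect once
        insp = Inspector(self.dialect, name)
        insp._cache = self._cache
        return insp

    def constraints(self, **filters):
        # fetch all constraint records for the table once, then
        # narrow down in Python
        key = ('constraints', self.table_name)
        if key not in self._cache:
            self._cache[key] = self.dialect.get_constraints(self.table_name)
        recs = self._cache[key]
        for attr, val in filters.items():
            recs = [r for r in recs if r.get(attr) == val]
        return recs


class StubDialect(object):
    """Stands in for a real dialect; returns canned records and
    counts how often it is actually queried."""
    def __init__(self):
        self.calls = 0
    def get_constraints(self, table):
        self.calls += 1
        return [
            {'type': 'primarykey', 'columns': ['id']},
            {'type': 'foreignkey', 'references': 'foocol'},
        ]


dialect = StubDialect()
insp = Inspector(dialect)
fks = insp.table('sometable').constraints(type='foreignkey',
                                          references='foocol')
fks2 = insp.table('sometable').constraints(type='foreignkey')
print(len(fks), dialect.calls)  # one matching record; dialect hit only once
```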


 A project like SA needs access to all of the supported database  
 systems
 for testing.  Currently, it seems that developers are hampered because
 they don't have access to these systems.  So I was thinking of ways to
 improve the situation and the idea I like the best is a VPN consisting
 of donated database setups.  There would be a central VPN host and
 donors would connect their donated database servers to this host.
 Developers could then connect to the VPN host and access all of the
 database systems.  With proper firewalling (iptables rules on the
 tun/tap devices) it should be safe for all parties.

we already have a solution for this issue, its just in need of more  
volunteers to contribute working test environments - its the Python  
buildbot.  The current master is here:

http://pylonshq.com:8012/

As you can see on the far right, someone at some point contributed an  
MS-SQL bot but they never got it working (and oddly enough we don't  
even know whose server that is - Ben Bangert administers the build 
master and he said he didn't know whose it was).   The unit tests need  
lots of tweaks so that they all pass for MS-SQL as well as for Oracle,  
including marking tests as unsupported, adding ORDER BYs where needed,  
ensuring all tables have Sequences available, etc.   Michael Trier has  
been working on the MS-SQL side but he certainly could use more help.

So again it's something we really need, and there's a clear path, but
it's just a ton of work.



[sqlalchemy] Re: SQLAlchemy 0.5.0rc3 Released

2008-11-08 Thread Michael Bayer


On Nov 8, 2008, at 1:44 AM, Eric Ongerth wrote:


 Mike.  You have absolutely perfect spelling.  Better than 99% of the
 population.  But there is just this one, and only one, English word
 that you spell strangely.

there was another, which was the ensure/insure confusion.  J Ellis  
slapped me down a long time ago on that one and I've since stopped  
selling insurance.

 You consistently spell propagate as
 propigate.   Is there any way we can get an i/a switch in there?

ah sure, someone just needs to do a search n' replace and we can commit.





[sqlalchemy] Re: Using deferred with declarative

2008-11-08 Thread Michael Bayer


On Nov 8, 2008, at 12:32 PM, Allen Bierbaum wrote:


 We have been using the declarative successfully in our codebase for a
 couple months now with 0.4.x, but we have just run into a problem.

 We have a table we we want to map using declarative but we want to
 have one of the columns be deferred because it contains binary data.
 Unfortunately we can't figure out how to do this.

 We tried something like this:

 class Person(Base):
__tablename__ = 'people'
id = Column('id', Integer, primary_key=True)
data = Column('data', Binary)
__mapper_args__ = {'properties' : {'data' : sa.orm.deferred(data)}

 The problem is that when the declarative metaclass assembles the
 arguments to pass to the mapper creation method, this causes two
 arguments of name 'properties' to be passed into the call.  (see
 DeclarativeMeta.__init__)

 - Is this a bug in the implementation of declarative?  (perhaps the
 code in DeclarativeMeta.__init__ should look for
 __mapper_args__['properties'] and merge it with the internally created
 properties)
 - Or is there some other way to use deferred columns with declarative?

The names you declare in your Person class *are* the 'properties' that
get sent to the mapper.  A little bit of inspection on them also
derives the Columns used for the Table.  I use deferred all the time
with declarative and it's a joy, just say:

class Person(Base):
    data = deferred(Column(Binary))

note the Column name isn't needed either; declarative assigns those
for you.  I'm using 0.5 but the above should work with 0.4 as well.
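Conceptually, a deferred column just means the value is loaded on first attribute access rather than up front with the row; a toy plain-Python descriptor (illustrative only, not SQLAlchemy's actual implementation) behaves like this:

```python
class Deferred(object):
    """Toy non-data-descriptor analogue of a deferred column: the
    loader runs on first access, then the value is cached on the
    instance so later accesses are plain attribute lookups."""

    def __init__(self, name, loader):
        self.name = name
        self.loader = loader

    def __get__(self, obj, objtype=None):
        if obj is None:
            return self
        value = self.loader(obj)          # hit the "database" lazily
        obj.__dict__[self.name] = value   # shadow the descriptor
        return value


loads = []

def load_blob(person):
    loads.append(person.id)               # record each simulated SELECT
    return b'...big binary payload...'

class Person(object):
    data = Deferred('data', load_blob)
    def __init__(self, id):
        self.id = id

p = Person(1)
assert loads == []        # constructing the object issues no load
_ = p.data                # first access triggers the loader
_ = p.data                # second access hits the cached value
print(loads)              # the loader ran exactly once
```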






[sqlalchemy] How to handle using master and number of slaves

2008-11-08 Thread Mike Bernson

I am looking at how to have 2 engines in a session.

I want to be able to switch between the engine used based on whether
any writes will happen to the data being read. I can tell whether any
data will be modified by whether the transaction was started by
issuing a begin.

I want to set up a number of servers doing replication. I need to
have a master and a number of slaves.

In the code I always start a transaction that will be doing writes
with a begin. I would like the session to have 2 engines. The first
engine is used outside of any transaction started with a begin. This
would allow all reads to use a slave mysql server and keep the records
in the session identity map. The second engine would be used for
writes. This engine is selected in transactions started with a begin.
The rows are read with select for update. These rows also would be in
the session identity map. When flushing, modified data needs to be
flushed to the master.









[sqlalchemy] problem with join, count on 0.5.0rc3

2008-11-08 Thread Greg


This following request works fine and produces the result I was
expecting: session.query(UserRss, Rss, Item).join([Rss, Item]).  But
count doesn't work.  Is it a bug, or did I miss something?

str(session.query(UserRss, Rss, Item).join([Rss, Item]).count())

Traceback (most recent call last):
  File "<console>", line 1, in <module>
  File "/opt/local/lib/python2.5/site-packages/SQLAlchemy-0.5.0rc3-py2.5.egg/sqlalchemy/orm/query.py", line 1251, in count
    return self._col_aggregate(sql.literal_column('1'), sql.func.count,
        nested_cols=list(self._only_mapper_zero().primary_key))
  File "/opt/local/lib/python2.5/site-packages/SQLAlchemy-0.5.0rc3-py2.5.egg/sqlalchemy/orm/query.py", line 241, in _only_mapper_zero
    raise sa_exc.InvalidRequestError("This operation requires a Query against a single mapper.")
InvalidRequestError: This operation requires a Query against a single
mapper.

Thanks.
Greg





[sqlalchemy] Re: How to handle using master and number of slaves

2008-11-08 Thread Michael Bayer


On Nov 8, 2008, at 3:07 PM, Mike Bernson wrote:


 I am looking at how to have 2 engines a session.

 I want to be able to switch between engine used based on
 if any writes will be happening to the data being read. I
 can tell if any data will be modify by if the transaction
 started by issues a begin.

 I want to setup a number of server doing Replication. I need
 to have a master and a number of slaves.

 In the code I always start transaction that will be doing write with  
 begin. I would like to
 have the session have 2 engines. The first engine is used outside of  
 any transaction that
 have not started with a begin. This would allow all reads to use a  
 slave mysql server and
 keep the records in the session identify map. The second engine  
 would be used for writes.
 This engine is selected in the transaction started with a begin. The  
 rows are read with
 select for update. These rows also would be session identify map.  
 When flushing modified
 data it need to be flushed to master.


set session.bind = any engine as needed.  I recommend a decorator
for the general use case, such as

@decorator
def with_master(fn, *arg, **kw):
    sess = Session()
    sess.bind = the_master
    try:
        r = fn(*arg, **kw)
    finally:
        sess.bind = not_the_master
    return r
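(@decorator above refers to the third-party decorator package; a stdlib-only sketch of the same bind-swapping idea, with placeholder objects standing in for the real Session and engines, might look like this:)

```python
import functools

# Placeholders for the two engines and a session holding a .bind;
# in real code these would be SQLAlchemy Engine and Session objects.
the_master = object()
not_the_master = object()

class FakeSession(object):
    def __init__(self):
        self.bind = not_the_master

sess = FakeSession()

def with_master(fn):
    """Run fn with the session bound to the master engine, then
    restore the slave bind even if fn raises."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        sess.bind = the_master
        try:
            return fn(*args, **kwargs)
        finally:
            sess.bind = not_the_master
    return wrapper

@with_master
def do_write():
    # inside the decorated function, the session points at the master
    return sess.bind is the_master

wrote_on_master = do_write()
print(wrote_on_master, sess.bind is not_the_master)  # master inside, slave after
```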






[sqlalchemy] Re: problem with join, count on 0.5.0rc3

2008-11-08 Thread Michael Bayer

it doesn't know which class on the left side you'd like to join
from, so it's looking for:

sess.query(A, B, C).join((B, A.bs), (C, B.cs))

alternatively, instead of A.bs etc. you can spell out the join
condition, such as A.b_id==B.id, in each tuple.


On Nov 8, 2008, at 3:03 PM, Greg wrote:



 This following request works fine and produce the result I was
 expecting session.query(UserRss, Rss, Item).join([Rss, Item]). But
 count doesn't work. Is it a bug, or did I miss something ?

 str(session.query(UserRss, Rss, Item).join([Rss, Item]).count())

 Traceback (most recent call last):
  File console, line 1, in module
  File /opt/local/lib/python2.5/site-packages/SQLAlchemy-0.5.0rc3-
 py2.5.egg/sqlalchemy/orm/query.py, line 1251, in count
return self._col_aggregate(sql.literal_column('1'),
 sql.func.count,
 nested_cols=list(self._only_mapper_zero().primary_key))
  File /opt/local/lib/python2.5/site-packages/SQLAlchemy-0.5.0rc3-
 py2.5.egg/sqlalchemy/orm/query.py, line 241, in _only_mapper_zero
raise sa_exc.InvalidRequestError(This operation requires a Query
 against a single mapper.)
 InvalidRequestError: This operation requires a Query against a single
 mapper.

 Thanks.
 Greg


 





[sqlalchemy] Re: problem with join, count on 0.5.0rc3

2008-11-08 Thread Michael Bayer

oh sorry, also count() is meant to count instances of a single kind of
object.  So in fact you should be saying:

session.query(UserRss).join(Rss, Item).count()


On Nov 8, 2008, at 3:03 PM, Greg wrote:



 This following request works fine and produce the result I was
 expecting session.query(UserRss, Rss, Item).join([Rss, Item]). But
 count doesn't work. Is it a bug, or did I miss something ?

 str(session.query(UserRss, Rss, Item).join([Rss, Item]).count())

 Traceback (most recent call last):
  File console, line 1, in module
  File /opt/local/lib/python2.5/site-packages/SQLAlchemy-0.5.0rc3-
 py2.5.egg/sqlalchemy/orm/query.py, line 1251, in count
return self._col_aggregate(sql.literal_column('1'),
 sql.func.count,
 nested_cols=list(self._only_mapper_zero().primary_key))
  File /opt/local/lib/python2.5/site-packages/SQLAlchemy-0.5.0rc3-
 py2.5.egg/sqlalchemy/orm/query.py, line 241, in _only_mapper_zero
raise sa_exc.InvalidRequestError(This operation requires a Query
 against a single mapper.)
 InvalidRequestError: This operation requires a Query against a single
 mapper.

 Thanks.
 Greg


 





[sqlalchemy] Re: problem with join, count on 0.5.0rc3

2008-11-08 Thread Michael Bayer

here's the new error message in the latest trunk:

InvalidRequestError: Can't issue count() for multiple types of objects  
or columns.  Construct the Query against a single element as the thing  
to be counted, or for an actual row count use  
Query(func.count(somecolumn)) or query.values(func.count(somecolumn))  
instead.



On Nov 8, 2008, at 4:03 PM, Michael Bayer wrote:


 oh sorry, also count() is meant to count instances of a single kind of
 object.  So in fact you should be saying:

 session.query(UserRss).join(Rss, item).count()


 On Nov 8, 2008, at 3:03 PM, Greg wrote:



 This following request works fine and produce the result I was
 expecting session.query(UserRss, Rss, Item).join([Rss, Item]). But
 count doesn't work. Is it a bug, or did I miss something ?

 str(session.query(UserRss, Rss, Item).join([Rss, Item]).count())

 Traceback (most recent call last):
 File console, line 1, in module
 File /opt/local/lib/python2.5/site-packages/SQLAlchemy-0.5.0rc3-
 py2.5.egg/sqlalchemy/orm/query.py, line 1251, in count
   return self._col_aggregate(sql.literal_column('1'),
 sql.func.count,
 nested_cols=list(self._only_mapper_zero().primary_key))
 File /opt/local/lib/python2.5/site-packages/SQLAlchemy-0.5.0rc3-
 py2.5.egg/sqlalchemy/orm/query.py, line 241, in _only_mapper_zero
   raise sa_exc.InvalidRequestError(This operation requires a Query
 against a single mapper.)
 InvalidRequestError: This operation requires a Query against a single
 mapper.

 Thanks.
 Greg





 





[sqlalchemy] Re: How to handle using master and number of slaves

2008-11-08 Thread Mike Bernson

Glad to know that it's that simple.

Thanks

Michael Bayer wrote:
 
 On Nov 8, 2008, at 3:07 PM, Mike Bernson wrote:
 
 I am looking at how to have 2 engines a session.

 I want to be able to switch between engine used based on
 if any writes will be happening to the data being read. I
 can tell if any data will be modify by if the transaction
 started by issues a begin.

 I want to setup a number of server doing Replication. I need
 to have a master and a number of slaves.

 In the code I always start transaction that will be doing write with  
 begin. I would like to
 have the session have 2 engines. The first engine is used outside of  
 any transaction that
 have not started with a begin. This would allow all reads to use a  
 slave mysql server and
 keep the records in the session identify map. The second engine  
 would be used for writes.
 This engine is selected in the transaction started with a begin. The  
 rows are read with
 select for update. These rows also would be session identify map.  
 When flushing modified
 data it need to be flushed to master.
 
 
 set session.bind = any engine as needed.I recommend a decorator  
 for the general use case, such as
 
 @decorator
 def with_master(fn, *arg, **kw):
  sess = Session()
  sess.bind = the_master
  try:
  r = fn(*arg, **kw)
  finally:
  sess.bind = not_the_master
  return r
 
 
 
  




[sqlalchemy] Re: schema inspection api

2008-11-08 Thread Randall Smith
Michael Bayer wrote:

 The structure of the API would drive the current reflection API to  
 become more componentized.  What we see as a need on the public  
 refleciton API side would drive the currently monolithic reflection  
 methods to be changed.   The current reflecttable() methods in turn  
 would ideally move *out* of the dialects, and the Table(autoload=True)  
 functionality would simply call up an Inspector and use that.  So the  
 ultimate outcome is that the individual reflecttable methods go  
 away, and dialects implement a larger set of simpler methods.
 

I started by factoring out components for reflecttable.  So now there 
are methods get_columns, get_primary_keys, get_foreign_keys, get_indexes 
and get_views.  get_columns, get_primary_keys, and get_foreign_keys are 
the methods used by reflecttable (and the ones I actually implemented).

There is one thing I did that will slightly degrade performance.  I 
created a method in postgres.py, __get_table_oid, that several other 
methods use.  This value could be cached at the beginning of 
reflecttable, but I didn't want to address threading issues.

I made a local branch called reflection and have attached the files and 
diffs for engine/base.py and databases/postgres.py at revision 5262.  I 
did a few real-world tests, but not the automated tests.  I don't know 
how difficult they are to set up.  That'll be the next thing I do. 
Mainly I just want to get feedback on the API and whether I'm going 
about the implementation right.
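For illustration, a standalone get_columns in this componentized spirit can be sketched for SQLite on top of PRAGMA table_info (the record format below is made up for illustration, not the one in the attached patch):

```python
import sqlite3

def get_columns(conn, table_name):
    """Return lightweight column records for one table, in the spirit
    of the componentized reflection methods discussed in this thread."""
    # PRAGMA statements cannot take bound parameters, so the table
    # name is interpolated directly (trusted input only).
    rows = conn.execute("PRAGMA table_info(%s)" % table_name).fetchall()
    # table_info rows are (cid, name, type, notnull, dflt_value, pk)
    return [
        {'name': r[1], 'type': r[2],
         'nullable': not r[3], 'primary_key': bool(r[5])}
        for r in rows
    ]

conn = sqlite3.connect(':memory:')
conn.execute("CREATE TABLE person (id INTEGER PRIMARY KEY, name TEXT NOT NULL)")
cols = get_columns(conn, 'person')
print([c['name'] for c in cols])  # ['id', 'name']
```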

--Randall




base.py.diff.gz
Description: GNU Zip compressed data


base.py.gz
Description: GNU Zip compressed data


postgres.py.diff.gz
Description: GNU Zip compressed data


postgres.py.gz
Description: GNU Zip compressed data


[sqlalchemy] Re: SQLAlchemy 0.5.0rc3 Released

2008-11-08 Thread empty

 there was another, which was the ensure/insure confusion.  J Ellis  
 slapped me down a long time ago on that one and I've since stopped  
 selling insurance.

Found a few more and corrected.

  You consistently spell propagate as
  propigate.   Is there any way we can get an i/a switch in there?

Cleanup in r5267.

Michael



[sqlalchemy] Re: mssql reflection fk issue

2008-11-08 Thread empty

  When reflecting a MSSQL table with a foreign key, the referenced table
  fails to load with the error:

  sqlalchemy.exc.NoSuchTableError: [referenced_table]


Give the latest trunk a try and let me know.  It should have been
corrected in r5266.

Thanks,

Michael