Re: [sqlalchemy] Automap sometimes does not create the collection for table classes

2014-10-24 Thread Serrano Pereira
Michael,

Thank you for the explanation. And thank you for this awesome tool!
Calling sqlalchemy.orm.configure_mappers() indeed solves the problem. I
am surprised this function isn't highlighted in the Mapper
Configuration section of the documentation, or even mentioned on the
Automap page. I wouldn't have figured this out just by reading the
documentation. Or perhaps I missed something.

Regards,
Serrano

On 10/20/2014 07:12 PM, Michael Bayer wrote:
 ...
 
 When taxa_collection is the backref, Photo.taxa_collection does not
 exist until the mappers configure themselves. So after prepare() I’d
 advise calling sqlalchemy.orm.configure_mappers(), which allows this
 example to work in all cases.
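A minimal sketch of that advice (the table and column names here are invented for illustration, not taken from the original thread):

```python
import sqlalchemy as sa
from sqlalchemy.ext.automap import automap_base
from sqlalchemy.orm import configure_mappers

engine = sa.create_engine("sqlite://")
with engine.begin() as conn:
    conn.execute(sa.text("CREATE TABLE taxa (id INTEGER PRIMARY KEY)"))
    conn.execute(sa.text(
        "CREATE TABLE photos (id INTEGER PRIMARY KEY, "
        "taxon_id INTEGER REFERENCES taxa (id))"))

Base = automap_base()
Base.prepare(autoload_with=engine)  # reflect tables, generate classes
configure_mappers()                 # force backref attributes to exist now

# the collection side of the backref is present on the generated class
print(hasattr(Base.classes.taxa, "photos_collection"))
```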

-- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at http://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.


Re: [sqlalchemy] Serializing sqlalchemy declarative instances with yaml

2014-10-24 Thread Jonathan Vanasco

Usually for this sort of stuff, I serialize the object's data into a JSON 
dict (object columns to a JSON dict; object relations to a dict, a list of 
dicts, or a reference to another object). (Custom dump/load is needed to 
handle Timestamps, Floats, etc.) You might be able to iterate over the data 
in YAML and not require custom encoding/decoding. When I need to treat the 
JSON data as objects, I'll load them into a custom dict class that treats 
attributes as keys.

The downside of this is that you don't have all the SQLAlchemy relational 
stuff or any ancillary methods (though they can be bridged in with more 
work). The benefit, though, is that you get nearly 1:1 parity for the core 
needs without much more work. In a read-only context, you can flip between 
SQLAlchemy objects and dicts. If you need to use the SQLAlchemy model 
itself, you could load the column/relationship data into it manually.
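A rough sketch of the columns-to-dict approach (the model and field names are invented; relationships would need their own handling, as noted above):

```python
import json

import sqlalchemy as sa
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class User(Base):
    __tablename__ = "users"
    id = sa.Column(sa.Integer, primary_key=True)
    name = sa.Column(sa.String)

def to_dict(obj):
    # dump column attributes only; custom encoders for Timestamps,
    # Decimals, etc. would go here
    return {attr.key: getattr(obj, attr.key)
            for attr in sa.inspect(obj).mapper.column_attrs}

u = User(id=1, name="alice")
print(json.dumps(to_dict(u)))  # {"id": 1, "name": "alice"}
```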



Re: [sqlalchemy] Serializing sqlalchemy declarative instances with yaml

2014-10-24 Thread Peter Waller
Well I was hoping to just use yaml since yaml understands when two
objects refer to the same underlying object. That means you don't have to
write any logic to de-duplicate objects through relationships, etc.

Since json doesn't have a notion of referencing, that doesn't seem
straightforward there.
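That aliasing behaviour can be seen directly (assuming PyYAML): two keys referencing the same object are dumped once with an anchor, and loading restores the shared identity.

```python
import yaml

shared = {"name": "shared"}
doc = {"a": shared, "b": shared}

text = yaml.safe_dump(doc)   # emits an &id001 anchor and a *id001 alias
loaded = yaml.safe_load(text)
print(loaded["a"] is loaded["b"])  # identity survives the round trip
```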

I was also hoping to just use yaml to avoid writing custom dumping code,
since it seems in general like a useful capability. So I may yet try and
find the underlying bug and fix it.

On 24 October 2014 15:29, Jonathan Vanasco jvana...@gmail.com wrote:


 ...




[sqlalchemy] Re: azure sqlalchemy

2014-10-24 Thread Andrew Kittredge
I have, I wrote a bit about my experience at 
https://social.msdn.microsoft.com/Forums/en-US/64517130-40ce-4912-80a2-844d3c3ce8b9/sqlalchemy-working-with-azure-sql-database?forum=ssdsgetstarted.

Andrew

On Wednesday, October 30, 2013 11:41:59 AM UTC-4, Lorenzo Lee wrote:

 Curious if anyone has done this yet? I have a need. Thanks!

 Lorenzo

 On Monday, May 21, 2012 8:09:12 AM UTC-5, Damian wrote:

 Hi, 

 Has anyone used sqlalchemy and azure at any point?  I may need to work 
 with azure shortly... 

 Thanks! 

 Damian 





Re: [sqlalchemy] Serializing sqlalchemy declarative instances with yaml

2014-10-24 Thread Jonathan Vanasco

On Friday, October 24, 2014 10:39:43 AM UTC-4, Peter Waller wrote:

 I was also hoping to just use yaml to avoid writing custom dumping code, 
 since it seems in general like a useful capability. So I may yet try and 
 find the underlying bug and fix it.


It might not be a bug, but rather the effect of an implementation detail of 
SQLAlchemy.  I tried (naively) playing around with your example, and 
thought back to how SQLAlchemy accomplishes much of its magic by creating 
custom comparators (and other private methods) on the classes and columns.  

Playing around with it, the problem seems to be with the SQLAlchemy 
object's __reduce_ex__ method. If you simply use __reduce__ in yaml, it 
works.  I couldn't figure out what Foo inherits __reduce_ex__ from, or 
whether any of the columns have it.
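One way to check that last point is to walk the MRO for the first class that defines __reduce_ex__ in its own __dict__ (a plain class stands in here for the mapped Foo in the thread):

```python
class Foo(object):  # stand-in for the mapped class from the thread
    pass

origin = next(cls for cls in Foo.__mro__
              if "__reduce_ex__" in cls.__dict__)
print(origin)  # a plain class inherits it from object
```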



Re: [sqlalchemy] Serializing sqlalchemy declarative instances with yaml

2014-10-24 Thread Peter Waller
The oddity is that calling `__reduce_ex__` on the instance is fine, but on
the class it is not. When serialising a declarative class, yaml finds itself
serialising the class type, which fails. This actually fails for plain
`object`, too (see below).

So I think what's happening is that serialisation fails because
`_sa_instance_state` (somewhere inside it) contains a class. This is
probably a yaml bug, then.

In [1]: object().__reduce_ex__(2)
Out[1]: (<function copy_reg.__newobj__>, (object,), None, None, None)

In [2]: object.__reduce_ex__(2)
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-1-eebec0cadfee> in <module>()
----> 1 object.__reduce_ex__(2)

/usr/lib/python2.7/copy_reg.pyc in _reduce_ex(self, proto)
     68         else:
     69             if base is self.__class__:
---> 70                 raise TypeError, "can't pickle %s objects" % base.__name__
     71             state = base(self)
     72         args = (self.__class__, base, state)

TypeError: can't pickle int objects


On 24 October 2014 17:55, Jonathan Vanasco jvana...@gmail.com wrote:


 ...




Re: [sqlalchemy] Simplified Pyodbc Connect Questions

2014-10-24 Thread Lycovian
Thanks for the reply Mike.  I've been tracking SQLAlchemy for years now and 
it's probably some of the best code, docs, and support I've ever seen.

Yes, I have verified that pyodbc and the pooling flag work:
In [1]: import pyodbc
In [2]: pyodbc.pooling = False
In [3]: conn = pyodbc.connect('DSN=tdprod')
In [4]: conn.execute('select current_timestamp').fetchall()
Out[4]: [(datetime.datetime(2014, 10, 24, 9, 22, 34, 27), )]
In [5]: pyodbc.pooling = True
In [6]: conn.execute('select current_timestamp').fetchall()
Out[6]: [(datetime.datetime(2014, 10, 24, 9, 22, 46, 50), )]

Without pooling (and with core dump):
In [1]: import pyodbc
In [2]: conn = pyodbc.connect('DSN=tdprod')
Fatal Python error: Unable to set SQL_ATTR_CONNECTION_POOLING attribute.
Aborted (core dumped)


Since a generic pyodbc connection is out, can you comment on how I could 
set the pyodbc.pooling flag as part of a dialect?  To get me off the ground 
please see below for my bootstrap base.py and pyodbc.py for a Teradata 
dialect that could use pyodbc.  I have been unable to set pyodbc.pooling 
appropriately.  Feel free to comment additionally on the best way to create 
a minimal functional dialect that would pass a basic test harness.

base.py
import operator
import re

from sqlalchemy.sql import compiler, expression, text, bindparam
from sqlalchemy.engine import default, base, reflection
from sqlalchemy import types as sqltypes
from sqlalchemy.sql import operators as sql_operators
from sqlalchemy import schema as sa_schema
from sqlalchemy import util, sql, exc

from sqlalchemy.types import CHAR, VARCHAR, TIME, NCHAR, NVARCHAR,\
    TEXT, DATE, DATETIME, FLOAT, NUMERIC,\
    BIGINT, INT, INTEGER, SMALLINT, BINARY,\
    VARBINARY, DECIMAL, TIMESTAMP, Unicode,\
    UnicodeText, REAL

RESERVED_WORDS = set([])

class TeradataTypeCompiler(compiler.GenericTypeCompiler):
    pass

class TeradataInspector(reflection.Inspector):
    pass

class TeradataExecutionContext(default.DefaultExecutionContext):
    pass

class TeradataSQLCompiler(compiler.SQLCompiler):
    pass

class TeradataDDLCompiler(compiler.DDLCompiler):
    pass

class TeradataIdentifierPreparer(compiler.IdentifierPreparer):
    reserved_words = RESERVED_WORDS

class TeradataDialect(default.DefaultDialect):
    pass
/base.py

pyodbc.py
import pyodbc

from .base import TeradataDialect, TeradataExecutionContext
from sqlalchemy.connectors.pyodbc import PyODBCConnector

pyodbc.pooling = False

class TeradataExecutionContext_pyodbc(TeradataExecutionContext):
    pass

class TeradataDialect_pyodbc(PyODBCConnector, TeradataDialect):
    execution_ctx_cls = TeradataExecutionContext_pyodbc

    pyodbc_driver_name = 'Teradata'

    def initialize(self, connection):
        # Teradata requires pooling off for pyodbc
        super(TeradataDialect_pyodbc, self).initialize(connection)
        self.dbapi.pooling = False

dialect = TeradataDialect_pyodbc
/pyodbc.py
 
If there is any way to make this simpler please let me know, but in 
particular for this very simple dialect I need to be able to set the 
pyodbc.pooling attribute to False.



On Thursday, October 23, 2014 9:38:30 PM UTC-7, Michael Bayer wrote:


 On Oct 23, 2014, at 2:50 PM, Lycovian mfwi...@gmail.com javascript: 
 wrote:

 Is there a way to use a pyodbc connection without a dialect? 


 there is not.   the dialect is responsible for formulating SQL of the 
 format that the database understands as well as dealing with idiosyncrasies 
 of the DBAPI driver (of which pyodbc has many, many, many, which are also 
 specific to certain databases).



 Barring that working which seems unlikely since I can't find any working 
 examples, I have started stubbing out a very simple Teradata dialect but I 
 can't figure out how to manually set pyodbc.pooling = False.  This is 
 required as the TD ODBC driver will core dump on connect if this isn't set. 
 I've tried the following in the pyodbc.py of my dialect, but on testing it 
 core dumps, indicating the value isn't being set.

 Here is the pyodbc.py for my TD dialect.  I'm trying to control pooling in 
 two different ways in this example but neither works:





 have you tried talking to your database using just pyodbc directly?  does 
 this flag even work ?






Re: [sqlalchemy] Simplified Pyodbc Connect Questions

2014-10-24 Thread Michael Bayer
You probably need to hit that flag early before anything connects.  A safe 
place would be in the dbapi() accessor itself, or in the __init__ method of the 
dialect class.
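A sketch of the dbapi() approach (class names follow the poster's stubs; this is untested against a real Teradata/pyodbc setup, and note that later SQLAlchemy versions renamed this hook to import_dbapi()):

```python
from sqlalchemy.connectors.pyodbc import PyODBCConnector
from sqlalchemy.engine import default

class TeradataDialect(default.DefaultDialect):
    name = "teradata"

class TeradataDialect_pyodbc(PyODBCConnector, TeradataDialect):
    pyodbc_driver_name = "Teradata"

    @classmethod
    def dbapi(cls):
        # runs when the engine first resolves its DBAPI module,
        # i.e. before any connection is made
        module = super(TeradataDialect_pyodbc, cls).dbapi()
        module.pooling = False
        return module
```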

Sent from my iPhone

 On Oct 24, 2014, at 1:32 PM, Lycovian mfwil...@gmail.com wrote:
 
 ...

[sqlalchemy] Print Query (literal_binds)

2014-10-24 Thread Lycovian
I'm trying to get the literal query (with the binds inline) for a SQLite 
connection per the docs:
http://docs.sqlalchemy.org/en/latest/faq.html#faq-sql-expression-string

Here is the statement:
print(stmt.compile(SQL_ENGINE, compile_kwargs={"literal_binds": True}))

I have a statement object with parameters bound to it (an INSERT in this 
case), but when I print it I don't see the binds inline, just the :bind 
variables.  I had thought that using this compile_kwargs dictionary I 
would see my string and integer values in the query that was printed.  Am I 
misunderstanding this?

In case you are curious why I would need this I'm trying to use SQLAlchemy 
to write queries to feed to pyodbc for a database that doesn't have a 
dialect.  The database in question is ANSI-99 compliant though so if I 
could get SQLAlchemy to write my queries for me (with the binds inline) I 
had hoped to simply execute them against the pyodbc driver.



[sqlalchemy] Re: Print Query (literal_binds)

2014-10-24 Thread Jonathan Vanasco
I believe you have to specify a dialect in order to get the binds presented 
in the right format.

This is the utility function I use for debugging.  You could easily adapt 
it to return a SQLite statement.  

https://gist.github.com/jvanasco/69daa58aeb0e921cdbbe

That being said -- I don't think you will be able to (reliably) do what 
you want in the current release.  

The values for LIMIT/OFFSET will not appear inline when you compile a 
statement (through at least 0.9.8).  I think there are some issues with 
certain CTEs as well.  To my knowledge, both issues are addressed in the 
1.0 release.
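A minimal sketch of compiling against an explicit dialect (the table and column names are invented):

```python
import sqlalchemy as sa
from sqlalchemy.dialects import sqlite

users = sa.table("users", sa.column("id"), sa.column("name"))
stmt = sa.select(users.c.id).where(users.c.name == "alice")

# pass a dialect so inlined literals render in that database's format
sql = str(stmt.compile(dialect=sqlite.dialect(),
                       compile_kwargs={"literal_binds": True}))
print(sql)  # ... WHERE users.name = 'alice'
```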



[sqlalchemy] eager loading the presence of a relationship ?

2014-10-24 Thread Jonathan Vanasco
does anyone know if it's possible to implement some form of eager loading or 
a class attribute where only the presence (or first record, or count) of 
relations is emitted?

I have a few queries where I'm loading 100+ rows, but I only need to know 
whether or not any entries for the relationship exist.  

the best thing I've been able to come up with is memoizing the count or a 
bool onto an object as a property:

@property
def count_RELATION(self):
    if self._count_RELATION is None:
        _sess = sa.inspect(self).session
        self._count_RELATION = _sess.query(self.__class__).with_parent(
            self, "RELATION").count()
    return self._count_RELATION
_count_RELATION = None

@property
def has_RELATION(self):
    if self._has_RELATION is None:
        _sess = sa.inspect(self).session
        self._has_RELATION = bool(_sess.query(self.__class__).with_parent(
            self, "RELATION").first())
    return self._has_RELATION
_has_RELATION = None



Re: [sqlalchemy] eager loading the presence of a relationship ?

2014-10-24 Thread Michael Bayer

 On Oct 24, 2014, at 9:06 PM, Jonathan Vanasco jvana...@gmail.com wrote:
 
 does anyone know if its possible to implement some form of eagerloading or 
 class attribute where only the presence (or first record, or count) of 
 relations are emitted?
 
 I have a few queries where i'm loading 100+ rows, but I only need to know 
 whether or not any entries for the relationship exists.  
 
 the best thing I've been able to come up with, is memoizing the count or a 
 bool onto an object as a property:

the “count of objects” column property?  
http://docs.sqlalchemy.org/en/rel_0_9/orm/mapper_config.html#using-column-property
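A sketch of that suggestion using an EXISTS-based column_property (the Parent/Child model is invented; a count via func.count() would work the same way):

```python
import sqlalchemy as sa
from sqlalchemy.orm import (Session, column_property, declarative_base,
                            relationship)

Base = declarative_base()

class Child(Base):
    __tablename__ = "child"
    id = sa.Column(sa.Integer, primary_key=True)
    parent_id = sa.Column(sa.Integer, sa.ForeignKey("parent.id"))

class Parent(Base):
    __tablename__ = "parent"
    id = sa.Column(sa.Integer, primary_key=True)
    children = relationship(Child)
    # loaded inline with each Parent row as a correlated EXISTS subquery
    has_children = column_property(sa.exists().where(Child.parent_id == id))

engine = sa.create_engine("sqlite://")
Base.metadata.create_all(engine)
with Session(engine) as session:
    session.add_all([Parent(id=1), Child(id=1, parent_id=1), Parent(id=2)])
    session.commit()
    flags = {p.id: p.has_children for p in session.query(Parent)}
    print(flags)
```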





 
 ...
