Re: [sqlalchemy] Postgres not(any_) issue within recursive CTE

2016-07-20 Thread Mike Bayer



On 07/20/2016 07:53 PM, cledo...@twistbioscience.com wrote:

Hi,

I'm very glad to see the any_ operator fully supported in SqlAlchemy
1.1b2.

We'd like to use this operator to implement an efficient recursive CTE
(RCTE) that avoids cycles.  One way to get an RCTE to avoid cycles is to
maintain an array of visited nodes or links, then to check for
membership of the current node / link within that array as a recursion
stop condition.

We're using the following SqlAlchemy construct:
str(not_(Link.id == any_(valid_parents.c.visited_ids)))

It seems to generate the following SQL:
link.id != ANY (valid_parents.visited_ids)
(using str(q.statement.compile(dialect=postgresql.dialect())))

That SQL is not sufficient to cause the RCTE to avoid cycles.  It
appears to just keep recursing.

With the following small adjustment to the generated SQL:
NOT link.id = ANY (valid_parents.visited_ids)
We successfully get the RCTE to avoid cycles.

Is there any way to force SqlAlchemy to use that style of negative
logic, so that Postgres picks up the NOT operator instead of != ?
(assuming that's the right solution of course!)



try calling self_group() on the expression before applying not_() around it.


>>> str(not_((column('foo') == any_(column('bar'))).self_group()))
'NOT (foo = ANY (bar))'
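Applied to the question's recursive-CTE setup, the fix might look like the following sketch. The self-referential `link` table is invented for illustration, the code is written against a recent SQLAlchemy (the 1.1-era `select()` took a list instead of positional columns), and the statement is only compiled to a SQL string, so no database is needed:

```python
# Sketch: cycle-avoiding recursive CTE using a visited-ids array, with
# self_group() forcing the NOT (... = ANY (...)) rendering.
from sqlalchemy import (Column, Integer, MetaData, Table, any_, not_,
                        select)
from sqlalchemy.dialects import postgresql
from sqlalchemy.dialects.postgresql import array

metadata = MetaData()
link = Table('link', metadata,
             Column('id', Integer, primary_key=True),
             Column('parent_id', Integer))

# anchor part: roots, each carrying a one-element array of visited ids
anchor = (
    select(link.c.id, array([link.c.id]).label('visited_ids'))
    .where(link.c.parent_id == None)
    .cte('valid_parents', recursive=True)
)

# recursive part: stop when the child id is already in visited_ids
recursive = (
    select(link.c.id,
           anchor.c.visited_ids.op('||')(link.c.id).label('visited_ids'))
    .where(link.c.parent_id == anchor.c.id)
    .where(not_((link.c.id == any_(anchor.c.visited_ids)).self_group()))
)

rcte = anchor.union_all(recursive)
sql = str(select(rcte.c.id).compile(dialect=postgresql.dialect()))
print(sql)
```

The rendered statement contains `NOT (link.id = ANY (valid_parents.visited_ids))`, which is the form Postgres needs as the recursion stop condition.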





Here is some basic code:

class Link(db.Model):
    __tablename__ = 'link'
    id = Column(Integer, primary_key=True)

valid_parents = db.query(array([-1]).label('visited_ids'))
rep = str(not_(Link.id == any_(valid_parents.c.visited_ids)))
print rep


Thanks very much for any guidance, and thanks for all the great Postgres
support in the new version!


Charlie Ledogar

Twist Bioscience

--
You received this message because you are subscribed to the Google
Groups "sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send
an email to sqlalchemy+unsubscr...@googlegroups.com
.
To post to this group, send email to sqlalchemy@googlegroups.com
.
Visit this group at https://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.




[sqlalchemy] Postgres not(any_) issue within recursive CTE

2016-07-20 Thread cledogar
Hi,

I'm very glad to see the any_ operator fully supported in SqlAlchemy 1.1b2. 
 

We'd like to use this operator to implement an efficient recursive CTE 
(RCTE) that avoids cycles.  One way to get an RCTE to avoid cycles is to 
maintain an array of visited nodes or links, then to check for membership 
of the current node / link within that array as a recursion stop condition.

We're using the following SqlAlchemy construct:
str(not_(Link.id == any_(valid_parents.c.visited_ids)))

It seems to generate the following SQL:
link.id != ANY (valid_parents.visited_ids)
(using str(q.statement.compile(dialect=postgresql.dialect())))

That SQL is not sufficient to cause the RCTE to avoid cycles.  It appears 
to just keep recursing.

With the following small adjustment to the generated SQL:
NOT link.id = ANY (valid_parents.visited_ids)
We successfully get the RCTE to avoid cycles.

Is there any way to force SqlAlchemy to use that style of negative logic, 
so that Postgres picks up the NOT operator instead of != ?   (assuming 
that's the right solution of course!)

Here is some basic code:

class Link(db.Model):
    __tablename__ = 'link'
    id = Column(Integer, primary_key=True)

valid_parents = db.query(array([-1]).label('visited_ids'))
rep = str(not_(Link.id == any_(valid_parents.c.visited_ids)))
print rep


Thanks very much for any guidance, and thanks for all the great Postgres 
support in the new version!


Charlie Ledogar

Twist Bioscience



Re: [sqlalchemy] automap_base from MySQL doesn't setup all tables in Base.classes but they are in metadata.tables

2016-07-20 Thread Mike Bayer



On 07/20/2016 04:44 PM, bkcsfi sfi wrote:

I have a legacy MySQL database that I am working with using sqla
version 1.0.11 and the MySQL-Python engine (just upgraded to 1.0.14; the
problem persists)

I use automap_base and prepare with reflect=True

some of the tables in this database are association tables.  Those
tables do show up in metadata, e.g.

In [74]: Base.metadata.tables['TripManifests']
Out[74]: Table('TripManifests', MetaData(bind=None),
Column('trip_id', INTEGER(display_width=11),
ForeignKey(u'Trips.id'), table=<TripManifests>, nullable=False),
Column('manifest_id', INTEGER(display_width=11),
table=<TripManifests>, nullable=False), schema=None)



But the table isn't in Base.classes

In [75]: Base.classes.TripManifests
AttributeError: TripManifests



likely because these columns are not primary key columns.   The SQLAlchemy 
ORM can't map a class to a table that has no primary key, and automap doesn't 
alternatively establish one via the "primary_key" parameter of mapper().







Since TripManifests is not in Base.classes, I'm not sure how to create an
ORM query using joins. I'd be ok with manually specifying the .join()
conditions if that would work, but I haven't seen an example of doing
that w/o using Base.classes

Alternatively I could try manually adding this class to Base but I
haven't been able to get that to work, does that need to be done before
or after prepare(reflect=True)?



you'd do it before.  The section at 
http://docs.sqlalchemy.org/en/latest/orm/extensions/automap.html?highlight=automap#using-automap-with-explicit-declarations 
shows an example of this.  In this case, you'd want to put 
__mapper_args__ = {"primary_key": [ ... cols .. ]} there, and I think 
those have to be the Column objects, so you'd pretty much name the 
TripManifests class and include the two Columns fully.  Or just stick 
primary_key=True on each of those Column objects.
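The approach described above can be sketched as follows. This is an illustration, not the thread's code: it uses an in-memory SQLite database as a stand-in for the MySQL schema, with table and column names taken from the thread, and declares a composite primary key on the class since the real table has none:

```python
# Sketch: pre-declaring the PK-less association table as an explicit class
# so automap can map it; prepare() then reflects the remaining tables.
from sqlalchemy import create_engine, Column, ForeignKey, Integer, inspect
from sqlalchemy.ext.automap import automap_base

engine = create_engine('sqlite://')
with engine.begin() as conn:
    conn.exec_driver_sql("CREATE TABLE Trips (id INTEGER PRIMARY KEY)")
    conn.exec_driver_sql(
        "CREATE TABLE TripManifests ("
        "trip_id INTEGER NOT NULL REFERENCES Trips(id), "
        "manifest_id INTEGER NOT NULL)")

Base = automap_base()

class TripManifests(Base):
    __tablename__ = 'TripManifests'
    # the real table has no primary key, so declare one for the mapper
    trip_id = Column(Integer, ForeignKey('Trips.id'), primary_key=True)
    manifest_id = Column(Integer, primary_key=True)

Base.prepare(autoload_with=engine)  # reflects the remaining tables

print(len(inspect(TripManifests).primary_key))  # 2
```

With the class declared up front, joins against the automapped `Trips` class can be written normally in ORM queries.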






Ultimately I would like to get away from using reflection. Does anyone
know of a tool that can reflect and then generate the declarative
classes and relationships as Python source.. which I could then hand-edit.


yes you can use sqlacodegen: https://pypi.python.org/pypi/sqlacodegen




Moving forward I could then use alembic to manage the DB schema.. Though
it looks like adding a column would require that I use alembic to update
the database itself, then I'd still have to edit the Python declaration
as well (assuming I didn't want to use reflection), but that's a
different discussion.






[sqlalchemy] automap_base from MySQL doesn't setup all tables in Base.classes but they are in metadata.tables

2016-07-20 Thread bkcsfi sfi
I have a legacy MySQL database that I am working with using sqla version 1.0.11 
and the MySQL-Python engine (just upgraded to 1.0.14; the problem persists)

I use automap_base and prepare with reflect=True

some of the tables in this database are association tables.  Those tables 
do show up in metadata, e.g. 

In [74]: Base.metadata.tables['TripManifests']
> Out[74]: Table('TripManifests', MetaData(bind=None), Column('trip_id', 
> INTEGER(display_width=11), ForeignKey(u'Trips.id'), table=<TripManifests>, 
> nullable=False), Column('manifest_id', INTEGER(display_width=11), 
> table=<TripManifests>, nullable=False), schema=None)



But the table isn't in Base.classes

In [75]: Base.classes.TripManifests
> AttributeError: TripManifests



The TripManifests table joins the Manifests table to the Trips table, 
neither of which appears to have a fk to the other or to the TripManifests table

In [80]: Base.metadata.tables['Trips'].foreign_keys
> Out[80]: {ForeignKey(u'Users.id'), ForeignKey(u'TripStatuses.id')}
> In [81]: Base.metadata.tables['Manifests'].foreign_keys
> Out[81]: 
> {ForeignKey(u'Users.id'),
>  ForeignKey(u'People.id'),
>  ForeignKey(u'Lists.id'),
>  ForeignKey(u'Equipment.id'),
>  ForeignKey(u'Equipment.id')}



Since TripManifests is not in Base.classes, I'm not sure how to create an ORM 
query using joins. I'd be ok with manually specifying the .join() 
conditions if that would work, but I haven't seen an example of doing that 
w/o using Base.classes

Alternatively I could try manually adding this class to Base but I haven't 
been able to get that to work, does that need to be done before or after 
prepare(reflect=True)?

Ultimately I would like to get away from using reflection. Does anyone know 
of a tool that can reflect and then generate the declarative classes and 
relationships as Python source.. which I could then hand-edit.

Moving forward I could then use alembic to manage the DB schema.. Though it 
looks like adding a column would require that I use alembic to update the 
database itself, then I'd still have to edit the Python declaration as well 
(assuming I didn't want to use reflection), but that's a different 
discussion.




Re: [sqlalchemy] Problem with 'fetchone'

2016-07-20 Thread Tian JiaLin
Adding an update here: I found that every time I get this problem, the
reported affected-row count is 18446744073709552000.

On Tue, Jul 19, 2016 at 2:13 AM, Tian JiaLin wrote:

> Thanks for the reply, Mike.
>
> Actually there are no obvious errors, and the failures occur at a low
> rate. That's why I feel this is pretty hard to debug.
>
> And I did something similar to the snippets you provided
> to invalidate the broken connections.
>
> I'm not using any session variables in the code.
>
> I didn't try the NullPool implementation yet, because I think it should
> work like the "No Pool Version", which is working properly on my side. But
> I can try, maybe it will bring some clues.
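For reference, the invalidate-on-error pattern under discussion might be sketched like this. This is an illustration only: it uses an in-memory SQLite engine in place of the MySQL one, and the helper name and SQL are invented:

```python
# Sketch: using raw_connection() with explicit invalidation on error, so a
# broken DBAPI connection is never returned to the pool for reuse.
from sqlalchemy import create_engine

engine = create_engine('sqlite://')

def fetch_first(sql):
    conn = engine.raw_connection()  # pooled, DBAPI-like connection
    try:
        cursor = conn.cursor()
        cursor.execute(sql)
        row = cursor.fetchone()
        cursor.close()
        return row
    except Exception:
        # throw the possibly-broken connection away so the pool
        # never hands it out again
        conn.invalidate()
        raise
    finally:
        conn.close()  # returns the connection to the pool (if still valid)

print(fetch_first("SELECT 1"))  # (1,)
```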
>
> On Tue, Jul 19, 2016 at 1:47 AM, Mike Bayer wrote:
>
>>
>>
>> On 07/18/2016 12:15 PM, Tian JiaLin wrote:
>>
>>> Hi Everyone,
>>>
>>> I have been using MySQL-Python for a long time. Recently I tried to
>>> integrate a connection pool based on SQLAlchemy. Because of
>>> the legacy code, I'm using the raw_connection from the engine.
>>>
>>> Here is the sample code of two implementations:
>>>
>>>
>>> *No Pool Version:*
>>>
>>>
>>> connection = MySQLdb.connect(...)
>>>
>>> connection.autocommit(True)
>>> try:
>>>     cursor = db.cursor()
>>>     if not cursor.execute(sql, values) > 0:
>>>         return None
>>>     row = cursor.fetchone()
>>> finally:
>>>     connection.close()
>>> return row[0]
>>>
>>>
>>>
>>> *Pool Version:*
>>>
>>>
>>> pool = create_engine("mysql+mysqldb://...")
>>> connection = pool.raw_connection()
>>>
>>> connection.autocommit(True)
>>> try:
>>>     cursor = db.cursor()
>>>     if not cursor.execute(sql, values) > 0:
>>>         return None
>>>     row = cursor.fetchone()
>>> finally:
>>>     connection.close()
>>> return row[0]
>>>
>>>
>>> The codes look similar except for the way the connection is obtained. After
>>> switching to the pool version, sometimes (not every time; in my
>>> situation it occurs in about 0.01% of all db queries) the return value
>>> of the execute method is greater than 0 and yet the fetchone method
>>> returns None. I guess it may be related to connection reuse, but I have
>>> no idea which part is going wrong. This can happen with any
>>> kind of SQL; I don't think it's related to any specific statement, but I
>>> can put some examples here.
>>>
>>
>> I assume by "db.cursor" you meant "connection.cursor".
>>
>> Are there any critical exceptions being thrown, like deadlock errors,
>> disconnect errors, etc. for which the connection is not being invalidated?
>>   SQLAlchemy's engine will invalidate the connection and the pool if we
>> encounter any of these error codes:
>>
>>     if isinstance(e, (self.dbapi.OperationalError,
>>                       self.dbapi.ProgrammingError)):
>>         return self._extract_error_code(e) in \
>>             (2006, 2013, 2014, 2045, 2055)
>>     elif isinstance(e, self.dbapi.InterfaceError):
>>         # if underlying connection is closed,
>>         # this is the error you get
>>         return "(0, '')" in str(e)
>>
>> when you use engine.raw_connection(), none of the above checking occurs.
>> If you get any of the above and continue using the connection, it may fail
>> to function properly afterwards.  You would need to invalidate() that
>> connection (you can call this on the wrapper returned by raw_connection).
>>
>> Is there any use of SESSION level variables ?  (e.g. SET SESSION).
>>
>> Using pool_class=NullPool resolves ?
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>>
>>> SQL Examples:
>>>
>>>
>>> 1.  SELECT uid FROM bookmarks WHERE object_id=?;
>>>
>>> 2.  SELECT last_activity_time FROM categories WHERE uid=? LIMIT 1;
>>>
>>>
>>> Here is my server setups:
>>>
>>>
>>> Apache + mod_wsgi (hybrid multi-process multi-threaded)
>>>
>>>
>>> Pool Settings:
>>>
>>>
>>> pool_size: 3
>>>
>>> max_overflow: 20
>>>
>>> pool_reset_on_return: none (also tried rollback, but still got the
>>> errors)
>>>
>>> pool_recycle: 3600
>>>
>>>
>>> MySQL:
>>>
>>>
>>> version 5.7.11
>>>
>>>
>>> I'm using AWS RDS, basically with the default parameter groups plus
>>> some small changes like max_connections and sync_binlog. Not
>>> sure which part is helpful to diagnose the problem.
>>>
>>>
>>> I have been working on this problem for one week without any
>>> progress. Does anyone have ideas about what the potential cause
>>> of this problem might be?
>>>
>>>
>>> Thanks!
>>>

Re: [sqlalchemy] Combining yield_per and eager loading

2016-07-20 Thread Martijn van Oosterhout
Ok, so this is what I have for today. It works, and handles all kinds of 
corner cases and yet it's not quite what I want. It does everything as a 
joinedload. It's much easier to use now though.

You can do things like:

q = Session.query(Animal)

for animal in yielded_load(q, (joinedload(Animal.owner).joinedload(Human.family),
                               joinedload(Animal.species).joinedload(Species.phylum))):
    do_something(animal)

It says joinedload() but it doesn't actually pay attention to that, it just 
uses it to determine the path. It would be really nice to be able to 
specify that some things should be fetched using subqueryload(), but that 
would require unpacking/manipulating the Load objects and I don't think 
there's a supported interface for that. Additionally, it would be nice if 
could notice that paths share a common prefix and only fetch those once. 
Still, for the amount of code it's pretty effective.

from itertools import groupby, islice
from sqlalchemy.orm import attributes, Load, aliased
from sqlalchemy import tuple_


def yielded_load(query, load_options, N=1000):
    # Note: query must return only a single object (for now anyway)
    main_query = query.yield_per(N)

    main_res = iter(main_query)

    while True:
        # Fetch block of results from query
        objs = list(islice(main_res, N))

        if not objs:
            break

        for load_option in load_options:
            # Get path of attributes to follow
            path = load_option.path
            pk = path[0].prop.parent.primary_key

            # Generate query that joins against original table
            child_q = main_query.session.query().order_by(*pk)

            for i, attr in enumerate(path):
                if i == 0:
                    # For the first relationship we add the target and the pkey columns
                    # Note: add_columns() doesn't work here? with_entities() does
                    next_table = attr.prop.target
                    child_q = child_q.join(next_table, attr)
                    child_q = child_q.with_entities(attr.prop.mapper).add_columns(*pk)
                    if attr.prop.order_by:
                        child_q = child_q.order_by(*attr.prop.order_by)
                    opts = Load(attr.prop.mapper)
                else:
                    # For all relationships after the first we can use contains_eager
                    # Note: The aliasing is to handle cases where the relationships loop
                    next_table = aliased(attr.prop.target)
                    child_q = child_q.join(next_table, attr, isouter=True)
                    opts = opts.contains_eager(attr, alias=next_table)

            child_q = child_q.options(opts)

            keys = [[getattr(obj, col.key) for col in pk] for obj in objs]

            child_q = child_q.filter(tuple_(*pk).in_(keys))

            # Here we use the fact that the first column is the target object
            collections = dict((k, [r[0] for r in v]) for k, v in groupby(
                child_q,
                lambda x: tuple([getattr(x, c.key) for c in pk])
            ))

            for obj in objs:
                # We can traverse many-to-one and one-to-many
                if path[0].prop.uselist:
                    attributes.set_committed_value(
                        obj,
                        path[0].key,
                        collections.get(
                            tuple(getattr(obj, c.key) for c in pk),
                            ())
                    )
                else:
                    attributes.set_committed_value(
                        obj,
                        path[0].key,
                        collections.get(
                            tuple(getattr(obj, c.key) for c in pk),
                            [None])[0]
                    )

        for obj in objs:
            yield obj


-- 
Martijn



Re: [sqlalchemy] Error IM001 reflecting table

2016-07-20 Thread Angelo Bulone
Ok, I will give it a try with another Python library. I would prefer to 
consider changing the SQL driver only as a last resort.

Thank you!

On Tuesday, July 19, 2016 at 17:16:21 UTC+2, Mike Bayer wrote:
>
>
>
> On 07/19/2016 04:50 AM, Angelo Bulone wrote: 
> > first of all, sorry if I'm not writing in the right place or I'm not 
> > providing enough info about the issue. 
> > 
> > Using SQLAlchemy with pyodbc, I'm trying to reflect a table. When I 
> > try to do that, I get this message: 
> > 
> > DBAPIError: (pyodbc.Error) ('IM001', '[IM001] [unixODBC][Driver 
> > Manager]Driver does not support this function (0) (SQLNumParams)') 
> > [SQL: u'SELECT [COLUMNS_1].[TABLE_SCHEMA], [COLUMNS_1].[TABLE_NAME], 
> > [COLUMNS_1].[COLUMN_NAME], [COLUMNS_1].[IS_NULLABLE], 
> > [COLUMNS_1].[DATA_TYPE], [COLUMNS_1].[ORDINAL_POSITION], 
> > [COLUMNS_1].[CHARACTER_MAXIMUM_LENGTH], 
> > [COLUMNS_1].[NUMERIC_PRECISION], [COLUMNS_1].[NUMERIC_SCALE], 
> > [COLUMNS_1].[COLUMN_DEFAULT], [COLUMNS_1].[COLLATION_NAME] \nFROM 
> > [INFORMATION_SCHEMA].[COLUMNS] AS [COLUMNS_1] \nWHERE 
> > [COLUMNS_1].[TABLE_NAME] = CAST(? AS NVARCHAR(max)) AND 
> > [COLUMNS_1].[TABLE_SCHEMA] = CAST(? AS NVARCHAR(max)) ORDER BY 
> > [COLUMNS_1].[ORDINAL_POSITION]'] [parameters: ('Order', 'dbo')] 
> > 
> > Here is the code.. 
> > 
> > >>> from sqlalchemy.orm.session import Session
> > >>> from sqlalchemy.schema import MetaData
> > >>> import sqlalchemy as SQLA
> > >>> eng = SQLA.create_engine(connection_string)
> > >>> session = Session(eng.connect())
> > >>> class DB:
> > ...     pass
> > ...
> > >>> db = DB()
> > >>> db.session = session
> > >>> db.engine = eng
> > >>> db.metadata = MetaData(bind=db.engine, schema='dbo')
> > >>> db.session.execute("select * from information_schema.columns")
> > <sqlalchemy.engine.result.ResultProxy object at 0x7f76eb770890>
> > >>> t = SQLA.Table('Order', db.metadata, autoload=True, extend_existing=True, autoload_with=db.engine)
> > Traceback (most recent call last):
> >   File "<stdin>", line 1, in <module>
> > ...
> > See the error above
> > 
> > The code is running on RHEL 7.x with unixODBC and Microsoft SQL Server 
> > Native Client 11 for Linux. Python 2.7.11 
>
> this is likely a side effect of the Linux SQL Server client which is 
> very new and for which pyodbc was not originally developed.   I'd 
> recommend trying the FreeTDS ODBC driver, and assuming that works, the 
> problem has to do with pyodbc and/or the Microsoft driver. 
>
>
>
>
> > 
> > Here are the pip requirements 
> > 
> >   * - click (6.6) 
> >   * - db-connection-maker (1.2.0) 
> >   * - Flask (0.11.1) 
> >   * - itsdangerous (0.24) 
> >   * - Jinja2 (2.8) 
> >   * - MarkupSafe (0.23) 
> >   * - pip (8.0.2) 
> >   * - pyaml (15.8.2) 
> >   * - pyodbc (3.0.10) 
> >   * - PyYAML (3.11) 
> >   * - setuptools (19.6.2) 
> >   * - simplejson (3.8.2) 
> >   * - SQLAlchemy (1.0.14) 
> >   * - Werkzeug (0.11.10) 
> > 
> > Note that the same code works without issues on windows. 
> > 
>



Re: [sqlalchemy] Combining yield_per and eager loading

2016-07-20 Thread Martijn van Oosterhout
On 19 July 2016 at 23:22, Mike Bayer wrote:

>
>
> On 07/19/2016 05:20 PM, Martijn van Oosterhout wrote:
>
>>
>>
>> Thanks. On the way home though I had a thought: wouldn't it be simpler
>> to run the original query with yield_per(), and then after each block
>> run the query with a filter on the primary keys returned, add all
>> the joinedload/subqueryload/etc options to this query, run it, and rely
>> on the identity map to fill things in for the objects returned the first time?
>> Or is that something we cannot rely on?
>>
>
> it works for the loading you're doing, where the primary keys of what's
> been fetched are fed into the subsequent query.  But it doesn't work for
> current subquery loading, which does not make use of those identifiers, nor
> for joined loading, which does an OUTER JOIN onto the original query at once
> (doing "joinedload" as a separate query is essentially what subquery
> loading already is).
>
>
Ah, good point. Pity. I like the whole generative interface for the
joinedload/subqueryload/etc and would have liked to reuse that machinery
somehow. Given I'm trying to eager load several levels of relationships,
it'd be nice to automate that somehow...
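The block-by-block idea Mike confirms above can be sketched as follows. This is not the thread's code, just an illustration with invented models: the follow-up query re-selects the same rows by primary key with eager-load options, and the identity map attaches the loaded relationships to the objects from the first query:

```python
# Sketch: chunked iteration plus a second, eager-loading query per block.
from itertools import islice

from sqlalchemy import create_engine, Column, ForeignKey, Integer
from sqlalchemy.orm import (declarative_base, joinedload, relationship,
                            sessionmaker)

Base = declarative_base()

class Human(Base):
    __tablename__ = 'human'
    id = Column(Integer, primary_key=True)

class Animal(Base):
    __tablename__ = 'animal'
    id = Column(Integer, primary_key=True)
    owner_id = Column(Integer, ForeignKey('human.id'))
    owner = relationship(Human)

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()
session.add(Animal(id=1, owner=Human(id=7)))
session.commit()

def chunked_eager(query, pk_col, options, n=100):
    """Yield objects from `query` in blocks; a second query per block
    applies the eager-load options, and the identity map wires the loaded
    relationships onto the objects already fetched."""
    rows = iter(query.yield_per(n))
    while True:
        block = list(islice(rows, n))
        if not block:
            break
        ids = [getattr(obj, pk_col.key) for obj in block]
        entity = query.column_descriptions[0]['entity']
        # same rows again, this time with the eager options applied
        query.session.query(entity).options(*options) \
            .filter(pk_col.in_(ids)).all()
        for obj in block:
            yield obj

animals = list(chunked_eager(session.query(Animal), Animal.id,
                             [joinedload(Animal.owner)]))
print('owner' in animals[0].__dict__)  # relationship populated eagerly
```

As noted in the exchange, this works because eager loaders populate not-yet-loaded relationship attributes on instances already present in the identity map.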

Have a nice day,
-- 
Martijn van Oosterhout  http://svana.org/kleptog/
