Re: [sqlalchemy] require explicit FROMs

2017-06-26 Thread Dave Vitek
In case anyone tries to use the compile visitor below, I'll document a 
couple of snags I ran into.  For now, I think I'm satisfied to go back to 
my original patch and deal with any merge conflicts during upgrades.


The hook has false positives when it runs on inner queries that 
correlate to tables from outer queries.  I think these could be 
addressed by borrowing the logic in _get_display_froms that drops 
extra FROMs, although you would need to obtain values for 
explicit_correlate_froms and implicit_correlate_froms from somewhere.  
I'm sure it could be made to work eventually.


Additionally, it raises even in contexts where no SQL is being 
executed.  For example, if someone renders a yet-to-be-nested select to 
a string for debugging purposes, they might get an exception.  It's also 
a bit confusing to the client, because if they render the selectable, it 
will show them the FROM entity that it simultaneously claims is missing 
(that is, if you had a way of preventing the exception in the rendering 
process).  Sending the bad SQL to the RDBMS avoids these two caveats.


Also, I found it necessary to use col._from_objects instead of col.table 
for non-trivial column elements.




After all that, what I can suggest for you is that since you are 
looking to "raise an error", that is actually easy to do here using 
events or compiles.   All you need to do is compare the calculated 
FROM list to the explicit FROM list:


from sqlalchemy import table, column, select
from sqlalchemy.ext.compiler import compiles
from sqlalchemy.sql import Select
from sqlalchemy import exc


@compiles(Select)
def _strict_froms(stmt, compiler, **kw):
    actual_froms = stmt.froms

    # TODO: tune this as desired
    if stmt._from_obj:
        # TODO: we can give you non-underscored access to
        # _from_obj
        legal_froms = stmt._from_obj
    else:
        legal_froms = set([
            col.table for col in stmt.inner_columns
        ])
    if len(legal_froms) != len(actual_froms):
        raise exc.CompileError("SELECT has implicit froms")

    return compiler.visit_select(stmt, **kw)


t1 = table('t1', column('a'))
t2 = table('t2', column('b'))


# works
print select([t1]).where(t2.c.b == 5).select_from(t1).select_from(t2)

# raises
print select([t1]).where(t2.c.b == 5)

The issue of "FROM with comma" *is* a problem I'd like to continue to 
explore solutions for.   I've long considered various rules in 
_display_froms, but none have ever gotten through the various issues 
I've detailed above.   I hope this helps to clarify the complexity 
and depth of this situation.





--
SQLAlchemy - 
The Python SQL Toolkit and Object Relational Mapper


http://www.sqlalchemy.org/

To post example code, please provide an MCVE: Minimal, Complete, and Verifiable 
Example.  See  http://stackoverflow.com/help/mcve for a full description.
--- 
You received this message because you are subscribed to the Google Groups "sqlalchemy" group.

To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at https://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.


Re: [sqlalchemy] require explicit FROMs

2017-06-25 Thread Dave Vitek

Mike,

Thanks for the thorough feedback.  It isn't surprising that there's a lot 
of code out there relying on the current behavior, but internal 
sqlalchemy reliance is certainly problematic.  It sounds difficult to 
walk back the existing behavior, and I understand the apprehension about 
even adding a warning at this point.


Perhaps it could be an opt-in warning for projects that subscribe to the 
"explicit from" requirement.  People who contribute to multiple projects 
might still be annoyed that sqlalchemy complains differently from one 
project to the next, but I think there's plenty of precedent for 
projects having rules about what subset of library and language features 
they allow.  Once a project has been bitten by this sort of bug one too 
many times, they might look into enabling the warning.  Not ideal, but 
better than no recourse at all?  The internal ORM uses of implicit FROMs 
would need to be cleaned up for this to really work.


However, in my experience warnings are ignored unless they trigger test 
failures.  Furthermore, I am worried in part about these bugs being 
security bugs, in which case I might prefer an exception even in a 
production setting.  Then again, I might not, since most of our implicit 
FROMs that weren't recently introduced were producing intended behavior.
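As a concrete sketch of how a project could opt in (every name here is hypothetical, not SQLAlchemy API): define a project-local warning category, emit it wherever an implicit FROM is detected, and escalate it to an error only under the test runner, so production merely logs while tests fail hard:

```python
import warnings


# Hypothetical project-local category for the "explicit FROM" rule.
class ImplicitFromWarning(UserWarning):
    pass


def check_select(has_implicit_from):
    # Stand-in for whatever hook detects an implicit FROM.
    if has_implicit_from:
        warnings.warn("SELECT has implicit FROMs", ImplicitFromWarning)


# In production this warning is merely printed; a test suite configured
# like this turns it into a hard failure:
warnings.simplefilter("error", ImplicitFromWarning)
try:
    check_select(True)
    outcome = "passed"
except ImplicitFromWarning:
    outcome = "failed the test"
print(outcome)  # failed the test
```

This is only a sketch of the policy mechanism; wiring the detection into SQLAlchemy would still require something like the compile hook elsewhere in this thread.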


With respect to detecting implicit FROMs by noticing slow queries or 
unexpected behavior, that works OK with some queries, but it still takes 
significantly more human time than detecting things more explicitly and 
earlier in the pipeline.


There are cases where an under-constrained query runs faster, and fairly 
specific inputs would be required for a human to notice the bad 
behavior.  Specifically, I think under-constrained EXISTS clauses 
frequently go undetected for some time.  Some query that is supposed to 
check whether a particular row in a table has some relationship ends up 
checking whether any row in the table has some relationship.  To make 
matters worse, it isn't entirely unusual for these two behaviors to be 
indistinguishable for typical data sets (perhaps because all the rows 
behave the same, or the tables are frequently singletons).  In my worst 
nightmare, the intent might be to check if Alice has permission to do X 
to Y, but the query ends up asking whether Alice has permission to do X 
to anything, or whether anyone has permission to do X to Y.
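The under-constrained EXISTS failure mode can be shown with a tiny hypothetical permissions schema (all table and column names invented here, using plain sqlite for brevity): the correlated EXISTS asks "does this user have permission on Y?", while the version missing the correlation asks "does *anyone* have permission on Y?" and so matches every user:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE perms (user_id INTEGER, obj TEXT);
    INSERT INTO users VALUES (1, 'alice'), (2, 'bob');
    INSERT INTO perms VALUES (2, 'Y');   -- only bob may touch Y
""")

# Correct: EXISTS correlates perms.user_id to the outer users row.
correct = conn.execute("""
    SELECT name FROM users u
    WHERE EXISTS (SELECT 1 FROM perms p
                  WHERE p.user_id = u.id AND p.obj = 'Y')
""").fetchall()

# Buggy: the missing correlation makes EXISTS true for every user.
buggy = conn.execute("""
    SELECT name FROM users u
    WHERE EXISTS (SELECT 1 FROM perms p WHERE p.obj = 'Y')
""").fetchall()

print(correct)  # [('bob',)]
print(buggy)    # [('alice',), ('bob',)]
```

With a data set where everyone happens to hold the same permissions, both queries return identical rows, which is exactly why these bugs go undetected.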


The compile hook looks great and I will take a look at the test failures 
caused by reliance on these features in the ORM, if only to see whether 
we use or foresee using those features.


- Dave

On 6/24/2017 5:33 PM, Mike Bayer wrote:



On 06/24/2017 10:53 AM, Dave Vitek wrote:

Hi all,

I'll start with an example.  Assume A and B are Table objects:

 >>> print select([A.id], from_obj=[A], whereclause=(B.field == 123))
SELECT A.id FROM A, B WHERE B.field = 123

As a convenience, sqlalchemy has added B to the FROM clause on the 
assumption that I meant to do this.


However, in my experience, this behavior has hidden many bugs.


So yes, the implicit FROM behavior has for a very long time been the 
ultimate symptom of many other bugs, probably more so in the ORM itself 
within routines such as eager loading and joined table inheritance.  A 
lot of ORM bugs manifest as this behavior.  However, as often as this 
is the clear sign of a bug, it is actually not a bug in a whole bunch 
of other cases where the "FROM comma list" is what's intended.


It isn't a feature we intentionally rely on.  Usually, someone forgot 
a join and the query sqlalchemy invents is drastically underconstrained.  
These bugs sometimes go unnoticed for a long time.  I want to raise 
an exception when this happens instead of letting the computer take a 
guess at what the human meant.


So to continue from my previous statement that "comma in the FROM 
clause" is more often than not unintended: there is a large class of 
exceptions to this, namely all those times when this *is* what's 
intended.


Within the ORM, the most common place the FROM clause w/ comma is seen 
is in the lazy load with a "secondary" table.   In modern versions, 
the two tables are added as explicit FROM, so your patch still allows 
this behavior to work, though in older versions it would have failed:


from sqlalchemy import *
from sqlalchemy.orm import *
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()


atob = Table(
    'atob', Base.metadata,
    Column('aid', ForeignKey('a.id'), primary_key=True),
    Column('bid', ForeignKey('b.id'), primary_key=True)
)


class A(Base):
    __tablename__ = 'a'
    id = Column(Integer, primary_key=True)
    bs = relationship("B", secondary=atob)


class B(Base):
    __tablename__ = 'b'
    id = Column(Integer, primary_key=True)

e = create_engine("sqlite://", echo=True)
Base.metadata.create_all(e)
s = Session(e)
s.add(A(bs=[B(), B()]))

[sqlalchemy] require explicit FROMs

2017-06-24 Thread Dave Vitek

Hi all,

I'll start with an example.  Assume A and B are Table objects:

>>> print select([A.id], from_obj=[A], whereclause=(B.field == 123))
SELECT A.id FROM A, B WHERE B.field = 123

As a convenience, sqlalchemy has added B to the FROM clause on the 
assumption that I meant to do this.


However, in my experience, this behavior has hidden many bugs. It isn't 
a feature we intentionally rely on.  Usually, someone forgot a join and 
the query sqlalchemy invents is drastically underconstrained.  These 
bugs sometimes go unnoticed for a long time.  I want to raise an 
exception when this happens instead of letting the computer take a guess 
at what the human meant.


I have modified _get_display_froms from selectable.py to return fewer 
FROM entities.  So far, this seems to be working.  Can you think of any 
pitfalls with this change?


I've attached a copy of the patch in case you want to take a look.  If 
there is interest in merging the change, it would need to be made opt-in 
(how is this best done globally--not in a per-query fashion?).


- Dave


Index: selectable.py
===================================================================
--- selectable.py   (revision 137582)
+++ selectable.py   (working copy)
@@ -2746,8 +2746,7 @@
 
         GenerativeSelect.__init__(self, **kwargs)
 
-    @property
-    def _froms(self):
+    def _froms_impl(self, *from_iterables):
         # would love to cache this,
         # but there's just enough edge cases, particularly now that
         # declarative encourages construction of SQL expressions
@@ -2756,12 +2755,7 @@
         seen = set()
         translate = self._from_cloned
 
-        for item in itertools.chain(
-            _from_objects(*self._raw_columns),
-            _from_objects(self._whereclause)
-            if self._whereclause is not None else (),
-            self._from_obj
-        ):
+        for item in itertools.chain(*from_iterables):
             if item is self:
                 raise exc.InvalidRequestError(
                     "select() construct refers to itself as a FROM")
@@ -2773,6 +2767,30 @@
 
         return froms
 
+    @property
+    def _froms(self):
+        return self._froms_impl(
+            _from_objects(*self._raw_columns),
+            _from_objects(self._whereclause)
+            if self._whereclause is not None else (),
+            self._from_obj)
+
+    @property
+    def _display_froms(self):
+        if len(self._from_obj) == 0:
+            column_froms = set(_from_objects(*self._raw_columns))
+            if len(column_froms) == 1:
+                # If the caller didn't specify any FROM entities, and
+                # they only seem to be selecting columns from one
+                # table, then it is very likely they expect the one
+                # table to be selected from.  We could eliminate this
+                # case (as it goes against the spirit of mandating
+                # explicit FROMs) if we eliminated the various uses of
+                # Session.query(some_column).
+                return self._froms_impl(column_froms)
+        # Only use FROM entities the caller explicitly requested.
+        return self._froms_impl(self._from_obj)
+
     def _get_display_froms(self, explicit_correlate_froms=None,
                            implicit_correlate_froms=None):
         """Return the full list of 'from' clauses to be displayed.
@@ -2783,7 +2801,7 @@
         correlating.
 
         """
-        froms = self._froms
+        froms = self._display_froms
 
         toremove = set(itertools.chain(*[
             _expand_cloned(f._hide_froms)


[sqlalchemy] invalidating Connection objects using DBAPI connections

2016-10-19 Thread Dave Vitek

Hi all,

I'm using sqlalchemy 0.9.7 (yes, I know we need to upgrade).

I've recently started to employ psycopg2's 
psycopg2.extensions.set_wait_callback function to facilitate terminating 
ongoing queries in response to activity on other sockets (such as an 
HTTP client disconnect or shutdown message).  I essentially have this 
working, but there is a wrinkle.


Specifically, when it is time to abort the ongoing query, I arrange for 
the wait_callback to raise an exception.  psycopg2 dutifully closes the 
dbapi connection when an exception occurs, which is what I want it to 
do.  The problem is, sqlalchemy still thinks the connection is open at 
the Connection and ConnectionFairy layers, and sometimes runs into a 
variety of "connection closed" errors when it later tries to do things 
with the dbapi connection.


What I want to do is call Connection.invalidate() from inside the 
wait_callback, in order to invalidate the Connection, ConnectionRecord, 
and ConnectionFairy.  This is difficult because the only parameter to 
the wait_callback function is the DBAPI connection, not the associated 
high level connection object(s).  I have no obvious way of getting from 
the DBAPI connection to the SA Connection object(s).


I've had a few ideas on how to solve this, one of which seems to work in 
my limited testing, but the documentation gives me second thoughts.  I 
wonder if there is a better way.


Approach 1
The first thing I tried (which did not work at all) was to use:
event.listens_for(SomeEngine, 'handle_error')

in order that I could set the is_disconnect field on the 
exception_context object, thereby notifying sqlalchemy of the 
disconnect.  Turns out that sqlalchemy doesn't wrap calls to fetchmany 
such that exceptions from fetchmany get sent to handle_error, so I 
abandoned this approach.  I wonder if this is indicative of a more 
general issue where network errors raised by fetchmany aren't noticed?


Approach 2
The next thing I tried was using:
@event.listens_for(SomeEngine, 'engine_connect')
def receive_engine_connect(conn, branch):
    raw = conn.connection.connection
    if not hasattr(raw, 'sa_conns'):
        raw.sa_conns = weakref.WeakSet()
    raw.sa_conns.add(conn)

And then when I know the dbapi connection is about to be closed for 
sure, I do something like this (but more carefully):

for x in getattr(dbapi_conn, 'sa_conns', ()):
    x.invalidate()

This seemed to work in practice, but this sentence from the 
documentation and the body of Connection._revalidate_connection() make 
me think existing Connection objects can get reassociated with different 
dbapi connections without my knowledge:


"But note there can in fact be multiple PoolEvents.checkout() events 
within the lifespan of a single Connection object, if that Connection is 
invalidated and re-established."



Approach 3
This is similar to approach 2, except that I use the "checkout" event 
instead.  This allows me to invoke ConnectionFairy.invalidate() or 
ConnectionRecord.invalidate(), but that seems to not be as good as 
invoking Connection.invalidate(), because the Connection still doesn't 
realize it is closed.






Is there something better I can do to get a mapping from DBAPI 
connection to SA Connection?  I am inclined to do linear search over all 
Connection objects for ones using the dying dbapi connection.  I assume 
the engine or pool must have a collection of all Connections, and there 
aren't ever more than a few in my application.
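The linear-search idea can be sketched without any SQLAlchemy at all (FakeConnection and FakeDBAPIConn below are stand-ins I invented; real Connection objects and the engine's bookkeeping differ): keep a weak registry of live wrappers and scan it for ones bound to the dying DBAPI connection.

```python
import weakref

# Weak registry so wrappers can still be garbage-collected normally.
registry = weakref.WeakSet()


class FakeConnection:
    """Stand-in for a SQLAlchemy Connection; invalidate() just records."""

    def __init__(self, dbapi_conn):
        self.dbapi_conn = dbapi_conn
        self.invalidated = False
        registry.add(self)

    def invalidate(self):
        self.invalidated = True


class FakeDBAPIConn:
    """Stand-in for a psycopg2 connection."""


dying = FakeDBAPIConn()
alive = FakeDBAPIConn()
c1, c2 = FakeConnection(dying), FakeConnection(alive)

# When "dying" is about to be closed, invalidate every wrapper using it.
for conn in list(registry):
    if conn.dbapi_conn is dying:
        conn.invalidate()

print(c1.invalidated, c2.invalidated)  # True False
```

The caveat from the documentation quote still applies: if a real Connection can be re-associated with a different DBAPI connection after invalidation, the registry would need to be refreshed on each checkout.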


- Dave



[sqlalchemy] contains_eager bug

2015-09-02 Thread Dave Vitek

Hi all,

I've come across what I'm pretty sure is a bug with contains_eager when 
there are multiple joinpaths leading to the same row, and only some of 
those joinpaths are using contains_eager all the way down the joinpath.


I've prepared a test case:
http://pastebin.com/CbyUMdqC

See the text at the top of the test case for further details.

- Dave



Re: [sqlalchemy] Sane way to monitor external DB changes from application?

2015-09-02 Thread Dave Vitek
At least one database (postgres) has a pub/sub messaging facility 
(NOTIFY/LISTEN) that you can use to do this.  See the postgres docs.  We 
use this extensively.


On the listen end, you basically want to get down to the psycopg layer, 
because sqlalchemy's layers aren't going to be helpful.


1.  Get it using engine.raw_connection()
2.  Detach it from the connection pool (c.detach())
3.  Change the isolation level to autocommit:
    c.set_isolation_level(psycopg2.extensions.ISOLATION_LEVEL_AUTOCOMMIT)
    (note that if the connection were ever returned to the pool, this
    change could survive and cause subtle havoc later)
4.  Call c.poll() to wait for events (see psycopg2 docs)
5.  Use c.notifies to get messages
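The listen-side steps above can be sketched as a generator (the engine argument, the "deployments" channel name, and the absence of error handling and reconnection are all assumptions on my part; this has not been run against a live Postgres):

```python
import select


def listen_for_notifications(engine, channel="deployments"):
    import psycopg2.extensions  # deferred: postgres-only dependency

    fairy = engine.raw_connection()     # 1. get the DBAPI connection
    fairy.detach()                      # 2. detach it from the pool
    conn = fairy.connection             # the raw psycopg2 connection
    conn.set_isolation_level(           # 3. autocommit isolation
        psycopg2.extensions.ISOLATION_LEVEL_AUTOCOMMIT)
    cur = conn.cursor()
    cur.execute('LISTEN "%s";' % channel)
    while True:
        select.select([conn], [], [])   # block until socket activity
        conn.poll()                     # 4. let psycopg2 read events
        while conn.notifies:            # 5. drain queued messages
            notify = conn.notifies.pop(0)
            yield notify.channel, notify.payload
```

On the notify side, as noted below, a plain `NOTIFY` issued as raw SQL text over an ordinary SQLAlchemy connection is enough.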

On the notify end, you don't need to do these special things to the 
connection and can issue messages using raw sql text on sqlalchemy 
connection objects.


I would assume other backends will be completely different.  You can use 
select/epoll/whatever to do async IO if needed.  You may find postgres' 
advisory locks useful if any synchronization needs arise.


References:
http://www.postgresql.org/docs/9.4/static/sql-listen.html
http://initd.org/psycopg/docs/advanced.html#asynchronous-notifications

- Dave

On 9/2/2015 8:48 PM, Ken Lareau wrote:
I'm going to try to see if I can give enough detail here to allow 
folks to make sense, but I will be simplifying things a bit to prevent 
a 1,000+ line email, too...


So I have an in-house application that handles software deployments.  
It uses a database backend to keep track of current deployments and 
maintain history (actually, the database is used more generally for 
our Site Operations site management with specific tables created just 
for the application).  I am working on a major refactoring of the 
deployment code itself where the actual installation of the software 
is handled by a daemon that runs constantly in the background looking 
for new deployments to perform.  The key tables look like this:



class Deployment(Base):
    __tablename__ = 'deployments'

    id = Column(u'DeploymentID', INTEGER(), primary_key=True)
    package_id = Column(
        INTEGER(),
        ForeignKey('packages.package_id', ondelete='cascade'),
        nullable=False
    )

    package = relationship(
        "Package",
        uselist=False,
        back_populates='deployments'
    )

    user = Column(String(length=32), nullable=False)
    status = Column(
        Enum('pending', 'queued', 'inprogress', 'complete', 'failed',
             'canceled', 'stopped'),
        server_default='pending',
        nullable=False,
    )
    declared = Column(
        TIMESTAMP(),
        nullable=False,
        server_default=func.current_timestamp()
    )
    created_at = synonym('declared')
    app_deployments = relationship(
        'AppDeployment',
        order_by="AppDeployment.created_at, AppDeployment.id"
    )
    host_deployments = relationship(
        'HostDeployment',
        order_by="HostDeployment.created_at, HostDeployment.id"
    )


class AppDeployment(Base):
    __tablename__ = 'app_deployments'

    id = Column(u'AppDeploymentID', INTEGER(), primary_key=True)
    deployment_id = Column(
        u'DeploymentID',
        INTEGER(),
        ForeignKey('deployments.DeploymentID', ondelete='cascade'),
        nullable=False
    )
    app_id = Column(
        u'AppID',
        SMALLINT(display_width=6),
        ForeignKey('app_definitions.AppID', ondelete='cascade'),
        nullable=False
    )

    application = relationship("AppDefinition", uselist=False)
    target = synonym('application')
    deployment = relationship("Deployment", uselist=False)

    user = Column(String(length=32), nullable=False)
    status = Column(
        Enum(
            'complete',
            'incomplete',
            'inprogress',
            'invalidated',
            'validated',
        ),
        nullable=False
    )
    environment_id = Column(
        u'environment_id',
        INTEGER(),
        ForeignKey('environments.environmentID', ondelete='cascade'),
        nullable=False
    )
    realized = Column(
        TIMESTAMP(),
        nullable=False,
        server_default=func.current_timestamp()
    )
    created_at = synonym('realized')

    environment_obj = relationship('Environment')


class HostDeployment(Base):
    __tablename__ = 'host_deployments'

    id = Column(u'HostDeploymentID', INTEGER(), primary_key=True)
    deployment_id = Column(
        u'DeploymentID',
        INTEGER(),
        ForeignKey('deployments.DeploymentID', ondelete='cascade'),
        nullable=False
    )

    deployment = relationship("Deployment", uselist=False)

    host_id = Column(
        u'HostID',
        INTEGER(),
        ForeignKey('hosts.HostID', ondelete='cascade'),
        nullable=False
    )
    host = relationship("Host", uselist=False)

    user = Column(String(length=32), nullable=False)
    status = Column(Enum('inprogress', 'failed', 'ok'), nullable=False)

Re: [sqlalchemy] contains_eager bug

2015-09-02 Thread Dave Vitek

Answering my own question.

Here's a patch that seems to fix it, but I am uncertain about whether it 
breaks other things.  Use at your own risk, at least until someone more 
familiar with this code evaluates it.


Index: lib/sqlalchemy/orm/strategies.py
===================================================================
--- lib/sqlalchemy/orm/strategies.py    (revision 114304)
+++ lib/sqlalchemy/orm/strategies.py    (working copy)
@@ -1474,20 +1474,24 @@
     def _create_scalar_loader(self, context, key, _instance):
         def load_scalar_from_joined_new_row(state, dict_, row):
             # set a scalar object instance directly on the parent
             # object, bypassing InstrumentedAttribute event handlers.
             dict_[key] = _instance(row, None)
 
         def load_scalar_from_joined_existing_row(state, dict_, row):
             # call _instance on the row, even though the object has
             # been created, so that we further descend into properties
             existing = _instance(row, None)
+            if key in dict_:
+                assert dict_[key] is existing
+            else:
+                dict_[key] = existing
             if existing is not None \
                     and key in dict_ \
                     and existing is not dict_[key]:
                 util.warn(
                     "Multiple rows returned with "
                     "uselist=False for eagerly-loaded attribute '%s' "
                     % self)
 
         def load_scalar_from_joined_exec(state, dict_, row):
             _instance(row, None)

- Dave

On 9/2/2015 7:29 PM, Dave Vitek wrote:

Hi all,

I've come across what I'm pretty sure is a bug with contains_eager 
when there are multiple joinpaths leading to the same row, and only 
some of those joinpaths are using contains_eager all the way down the 
joinpath.


I've prepared a test case:
http://pastebin.com/CbyUMdqC

See the text at the top of the test case for further details.

- Dave





Re: [sqlalchemy] contains_eager bug

2015-09-02 Thread Dave Vitek

On 9/2/2015 10:08 PM, Mike Bayer wrote:



On 9/2/15 9:57 PM, Dave Vitek wrote:

Answering my own question.

Here's a patch that seems to fix it, but I am uncertain about whether 
it breaks other things.  Use at your own risk, at least until someone 
more familiar with this code evaluates it.



This is a variant of a known issue: that of object X being loaded 
in different contexts in the same row, with different loader 
strategies applied.  The first context that hits it wins.

I'm adding your test case to that issue at 
https://bitbucket.org/zzzeek/sqlalchemy/issues/3431/object-eager-loaded-on-scalar-relationship. 



the patch which is there seems to fix this test as well - it is 
extremely similar to your patch, so nice job!




I have a question about your patch.  If "existing" is None and is not 
present in dict_, it does not get placed into dict_.  I note that 
load_scalar_from_joined_new_row puts it into dict_ regardless of whether 
it is None.  Is this intentional/good?








Index: lib/sqlalchemy/orm/strategies.py
===================================================================
--- lib/sqlalchemy/orm/strategies.py    (revision 114304)
+++ lib/sqlalchemy/orm/strategies.py    (working copy)
@@ -1474,20 +1474,24 @@
     def _create_scalar_loader(self, context, key, _instance):
         def load_scalar_from_joined_new_row(state, dict_, row):
             # set a scalar object instance directly on the parent
             # object, bypassing InstrumentedAttribute event handlers.
             dict_[key] = _instance(row, None)
 
         def load_scalar_from_joined_existing_row(state, dict_, row):
             # call _instance on the row, even though the object has
             # been created, so that we further descend into properties
             existing = _instance(row, None)
+            if key in dict_:
+                assert dict_[key] is existing
+            else:
+                dict_[key] = existing
             if existing is not None \
                     and key in dict_ \
                     and existing is not dict_[key]:
                 util.warn(
                     "Multiple rows returned with "
                     "uselist=False for eagerly-loaded attribute '%s' "
                     % self)
 
         def load_scalar_from_joined_exec(state, dict_, row):
             _instance(row, None)

- Dave

On 9/2/2015 7:29 PM, Dave Vitek wrote:

Hi all,

I've come across what I'm pretty sure is a bug with contains_eager 
when there are multiple joinpaths leading to the same row, and only 
some of those joinpaths are using contains_eager all the way down 
the joinpath.


I've prepared a test case:
http://pastebin.com/CbyUMdqC

See the text at the top of the test case for further details.

- Dave






