Re: [sqlalchemy] Bulk Operations with Joined Table Inheritance

2017-01-25 Thread Robert Sami
On Wed, Jan 25, 2017 at 7:02 AM, mike bayer wrote:

>
>
> On 01/24/2017 07:40 PM, Robert Sami wrote:
>
>> Thanks for the response Mike,
>>
>> I agree that using Core is pretty clean. One approach I considered was
>> the following:
>>
>> ```
>> res = conn.execute(FooBase.__table__.insert(returning=[FooBase.id],
>> values=[{} for i in range(10)]))
>> conn.execute(FooDerived.__table__.insert(values=[{'id': _id, 'data':
>> 'whatever'} for _id, in res.fetchall()]))
>> ```
>>
>> This is similar to the approach you outlined above, but also robust to
>> other transactions inserting in the table.
>>
>
> OK, so usually RETURNING doesn't work for "executemany()", but I see there
> you are packing them into one big VALUES clause and ultimately using
> cursor.execute(), so that should work, though you want to chunk the sizes
> into batches of 1000 or so or your SQL statement will grow too large.


Ah, thanks for the tip!
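
Roughly what I have in mind for the batching, then (a sketch only, not
tested; the chunk size and the `insert_in_chunks` helper name are mine, and
it assumes a backend with RETURNING support such as Postgres):

```
CHUNK_SIZE = 1000  # keep each multi-VALUES statement to ~1000 rows

def insert_in_chunks(conn, total):
    base_ids = []
    # INSERT ... VALUES (...), (...), ... RETURNING id, in chunks
    for start in range(0, total, CHUNK_SIZE):
        count = min(CHUNK_SIZE, total - start)
        res = conn.execute(
            FooBase.__table__.insert(
                returning=[FooBase.id],
                values=[{} for _ in range(count)],
            )
        )
        base_ids.extend(_id for _id, in res.fetchall())

    # second pass: the derived rows, reusing the returned primary keys
    for start in range(0, len(base_ids), CHUNK_SIZE):
        chunk = base_ids[start:start + CHUNK_SIZE]
        conn.execute(
            FooDerived.__table__.insert(
                values=[{'id': _id, 'data': 'whatever'} for _id in chunk]
            )
        )
    return base_ids
```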


>
>
>
>> The reason I would prefer to use `session.bulk_save_objects()` is that
>> this method is aware of default attribute values of objects. For example:
>>
>> ```
>> class FooDerived(..):
>>   ...
>>   data = db.Column(db.Integer, default=17)
>>
>
> that "default" is interpreted by the Core, not the ORM.   So your core
> statement should handle it too and you'd see those "17"s going in.  If not,
> let's get an MCVE and figure out why.
>
>
OK, thanks for explaining. FWIW this was an incorrect assumption on my
part, rather than based on any experience or observation, so I'll get you a
MCVE if anything unexpected comes up. Thanks for clarifying!
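
For what it's worth, this is the minimal check I'd run before sending an
MCVE (a sketch only; it assumes the `FooDerived` mapping above with
`default=17`, plus an `engine` and a list `new_base_ids` of freshly
inserted FooBase ids already in scope):

```
from sqlalchemy import select

with engine.begin() as conn:
    # 'data' is deliberately omitted, so the Core-level default (17)
    # should be applied to every row at execution time
    conn.execute(
        FooDerived.__table__.insert(),
        [{'id': _id} for _id in new_base_ids]
    )
    # expect every value to come back as 17
    print(conn.execute(select([FooDerived.__table__.c.data])).fetchall())
```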


>
> ```
>>
>> Creating a bunch of `FooDerived` objects will automatically set the
>> `data` attributes to their default value. So I was hoping there was some
>> way of using `session.bulk_save_objects()` to a similar effect as the
>> Core approach I shared above, which uses a "RETURNING" clause to know
>> the primary keys of the newly inserted `FooBase` rows. If not, do you
>> have any other thoughts or suggestions on how to get the best of both
>> worlds?
>>
>> Many thanks!
>>
>> On Tue, Jan 24, 2017 at 3:00 PM, mike bayer wrote:
>>
>>
>>
>> On 01/24/2017 04:49 PM, Robert Sami wrote:
>>
>> Hi SQLAlchemy wizards.
>>
>> I was interested in using the new bulk operations API
>> (http://docs.sqlalchemy.org/en/latest/orm/persistence_techniques.html#bulk-operations)
>> but was looking for some advice based on my use case.
>>
>> I have a class “FooDerived” which corresponds to a table that is
>> linked
>> to “FooBase” using joined table inheritance. I want to use the
>> bulk_save_objects method to save, let's say, 100,000 instances of
>> “FooDerived”.
>>
>> One option would be the following:
>>
>> ```
>> session.bulk_save_objects([FooBase() for i in range(10)])
>> session.flush()
>> foo_base_models = FooBase.query.filter(/* Assume it's possible to
>> filter
>> for the newly created objects*/).all()
>> session.bulk_save_objects([FooDerived(id=base.id) for base in
>> foo_base_models])
>> ```
>>
>> Is there a better way?
>>
>>
>> this would be expressed much more clearly and efficiently using Core
>> constructs, and you need a way of knowing the primary key for each
>> FooBase(), because as you have it above, where the primary key is
>> auto-generated, it would perform 100K SELECT statements:
>>
>>
>> from sqlalchemy import select, func
>>
>> foobase = FooBase.__table__
>> fooderived = FooDerived.__table__
>> with engine.begin() as conn:
>>     # next available primary key (or 1 on an empty table)
>>     my_first_pk = (conn.scalar(select([func.max(foobase.c.id)])) or 0) + 1
>>
>>     conn.execute(
>>         foobase.insert(),
>>         [{"id": ident + my_first_pk} for ident in range(10)]
>>     )
>>     conn.execute(
>>         fooderived.insert(),
>>         [{"id": ident + my_first_pk, "data": "whatever"} for ident in range(10)]
>>     )
>>
>>
>> of course you need to make sure no other transactions are INSERTing
>> rows with this approach or they will throw off your primary key
>> counter.
>>
>>
>>
>>
>>
>>
>>
>>
>> Thank you!
>>

Re: [sqlalchemy] Validators not called for unhashable items when replacing the collection

2017-01-25 Thread Pedro Werneck
Yes, I noticed the collection.converter decorator would imply doing the same
validation in two different places. That will have to work for now,
but I'll keep an eye on the issue for 1.2.

Thanks Mike.

On Wed, Jan 25, 2017 at 1:36 PM, mike bayer wrote:
>
>
> On 01/24/2017 09:55 PM, Pedro Werneck wrote:
>>
>>
>> I have a relationship with a validator to automatically convert dicts
>> appended to the collection, so I can do something like this:
>>
>> my_obj.my_collection.append({"rel_type_id": x})
>>
>> Instead of this:
>>
>> my_obj.my_collection.append(RelType(rel_type_id=x))
>>
>> That works exactly as expected, but when I try to replace the whole
>> collection at once:
>>
>> my_obj.my_collection = [{"rel_type_id": x}]
>>
>> That results in a TypeError: unhashable type: 'dict', and the validator
>> method is never called. Apparently that happens when the
>> orm.collection.bulk_replace function uses sets to find the difference
>> between the old and the new collection. I don't see a straightforward
>> fix for that; it feels more like a limitation of the current
>> implementation than a bug.
>
>
> that's kind of beyond a bug and more a design flaw.   The bulk replace wants
> to hit the event listener only for "new" items, but we can't decide on the
> "new" items without running the event handler.   The whole bulk replace idea
> would need to be changed to run the event listeners up front which suggests
> new events and whatnot.
>
>
> So here you'd need to use the "converter" implementation as well
> (http://docs.sqlalchemy.org/en/latest/orm/collections.html#sqlalchemy.orm.collections.collection.converter),
> here's a demo, unfortunately we need to mix both styles for complete
> coverage:
>
> class MyCollection(list):
>
>     @collection.converter
>     def convert(self, value):
>         return [B(data=v['data']) for v in value]
>
>
> class A(Base):
>     __tablename__ = 'a'
>     id = Column(Integer, primary_key=True)
>
>     bs = relationship("B", collection_class=MyCollection)
>
>     @validates('bs')
>     def _go(self, key, value):
>         if not isinstance(value, B):
>             value = B(data=value['data'])
>         return value
>
>
> class B(Base):
>     __tablename__ = 'b'
>     id = Column(Integer, primary_key=True)
>     a_id = Column(ForeignKey('a.id'))
>     data = Column(String)
>
>
> I think in the future, what might be nice here would be a new attribute
> event so that "converter" doesn't need to be used, and then @validates can
> include @validates.collection_validate or similar to handle this case.   The
> collection hooks are generally assuming that they are dealing with how an
> incoming object should be represented within the collection, not how to
> coerce an incoming value (e.g. I tried to use @collection.appender here for
> the individual appends, no go), so "converter" being where it is, and not at
> value reception time, is inconsistent.  The number of collection hooks
> present, compared to how impossible this use case is, is kind of a disaster.
>
> I've added
> https://bitbucket.org/zzzeek/sqlalchemy/issues/3896/bulk_replace-assumes-incoming-values-are.
>
>
>
>
>
>
>>
>> It looks like I could do what I want with a custom collection and the
>> collection.converter decorator. Any other ideas?
>>
>>
>> Thanks.
>>
>>



-- 
---
Pedro Werneck


Re: [sqlalchemy] Proposal to discontinue pymssql in favor of pyodbc

2017-01-25 Thread Jonathan Vanasco

On Wednesday, January 25, 2017 at 11:01:41 AM UTC-5, Randy Syring wrote:
>
>
>- pymssql has struggled to find maintainers who can devote time to it 
>and it is starting to languish.
>
>  Have you tried speaking with the "new" Microsoft?  Perhaps they'd be 
willing to contribute funds or engineers for a while.



Re: [sqlalchemy] Proposal to discontinue pymssql in favor of pyodbc

2017-01-25 Thread Randy Syring

Mike,

While I have not committed work to the project recently, I am one
of the maintainers.  I'm an owner of the pymssql GitHub organization,
pay for the pymssql.org domain, and generally try to keep up with things
as time permits.


Regarding being an "active" project, that is debatable.  Ramiro has done 
great work over the past year, but we have generally been unable to 
maintain the project as it should be maintained.  We have important 
issues out there, like a request for a new release for code that is 
already in GitHub, but just don't have the manpower to make the 
release.  These issues could easily be addressed if we had someone who 
could devote their effort in that direction, but we don't and have 
historically struggled to have anything like consistent effort put 
towards maintaining/improving pymssql.  That's not a complaint; the work
that has been done is appreciated, it's just what we face.


I'm not proposing that pymssql be discontinued _simply_ because another 
project exists.  I'm proposing that it be discontinued because:


 * Microsoft has come out in favor of ODBC and pyodbc and, with their
   support, pyodbc could be a technically superior product.
 * If Microsoft is supporting pyodbc, many new users will start there
   and probably not even look for another solution (like pymssql).
 * pymssql has struggled to find maintainers who can devote time to it
   and it is starting to languish.

So, my thought is: if we don't bring anything to the table that pyodbc
doesn't bring, then why shouldn't we point people in that direction instead?


However, I appreciate your input here and on the GH issue that pymssql 
is more stable than pyodbc.  That is exactly the kind of information I'm 
looking for.



*Randy Syring*
Chief Executive Developer
Direct: 502.276.0459
Office: 812.285.8766
Level 12 

On 01/25/2017 10:47 AM, mike bayer wrote:
I don't see how it's appropriate to even suggest that an open source 
project close its doors simply because another project exists.  If
you were the maintainer of pymssql, that would be one thing, but 
looking at the commits it seems to continue to be an active project.


pymssql handles our tests more cleanly than pyodbc which has constant 
datatype issues,  and I have had several non-response-situations from 
the maintainer on the pyodbc side in the somewhat distant past 
(whereas I've had great response with pymssql issues), so unless the 
situation has vastly changed I'd prefer pymssql continue its excellent 
work.



On 01/25/2017 10:24 AM, Randy Syring wrote:

There is a proposal open to discontinue pymssql development and point
people towards pyodbc.  Since pymssql is a documented backend for SA, I
figured there might be some people here who are interested.

If you have any skin in that game and want to comment, please visit the
issue: https://github.com/pymssql/pymssql/issues/477

Thanks.







Re: [sqlalchemy] Proposal to discontinue pymssql in favor of pyodbc

2017-01-25 Thread mike bayer
I don't see how it's appropriate to even suggest that an open source 
project close its doors simply because another project exists.  If you
were the maintainer of pymssql, that would be one thing, but looking at 
the commits it seems to continue to be an active project.


pymssql handles our tests more cleanly than pyodbc which has constant 
datatype issues,  and I have had several non-response-situations from 
the maintainer on the pyodbc side in the somewhat distant past (whereas 
I've had great response with pymssql issues), so unless the situation 
has vastly changed I'd prefer pymssql continue its excellent work.



On 01/25/2017 10:24 AM, Randy Syring wrote:

There is a proposal open to discontinue pymssql development and point
people towards pyodbc.  Since pymssql is a documented backend for SA, I
figured there might be some people here who are interested.

If you have any skin in that game and want to comment, please visit the
issue: https://github.com/pymssql/pymssql/issues/477

Thanks.





Re: [sqlalchemy] Validators not called for unhashable items when replacing the collection

2017-01-25 Thread mike bayer



On 01/24/2017 09:55 PM, Pedro Werneck wrote:


I have a relationship with a validator to automatically convert dicts
appended to the collection, so I can do something like this:

my_obj.my_collection.append({"rel_type_id": x})

Instead of this:

my_obj.my_collection.append(RelType(rel_type_id=x))

That works exactly as expected, but when I try to replace the whole
collection at once:

my_obj.my_collection = [{"rel_type_id": x}]

That results in a TypeError: unhashable type: 'dict', and the validator
method is never called. Apparently that happens when the
orm.collection.bulk_replace function uses sets to find the difference
between the old and the new collection. I don't see a straightforward
fix for that; it feels more like a limitation of the current
implementation than a bug.


that's kind of beyond a bug and more a design flaw.   The bulk replace 
wants to hit the event listener only for "new" items, but we can't 
decide on the "new" items without running the event handler.   The whole 
bulk replace idea would need to be changed to run the event listeners up 
front which suggests new events and whatnot.
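
To make the failure concrete, here's a stand-alone sketch of the kind of
set difference the bulk replace does internally (illustrative names only,
not the actual internals):

# bulk_replace effectively diffs old vs. new collection members as sets;
# plain dicts are unhashable, so the TypeError fires before any
# validator ever sees the incoming values
existing_members = set()            # stand-in for the current collection
incoming = [{"rel_type_id": 1}]
set(incoming) - existing_members    # TypeError: unhashable type: 'dict'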



So here you'd need to use the "converter" implementation as well 
(http://docs.sqlalchemy.org/en/latest/orm/collections.html#sqlalchemy.orm.collections.collection.converter), 
here's a demo, unfortunately we need to mix both styles for complete 
coverage:


from sqlalchemy import Column, ForeignKey, Integer, String
from sqlalchemy.orm import relationship, validates
from sqlalchemy.orm.collections import collection
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()


class MyCollection(list):

    @collection.converter
    def convert(self, value):
        return [B(data=v['data']) for v in value]


class A(Base):
    __tablename__ = 'a'
    id = Column(Integer, primary_key=True)

    bs = relationship("B", collection_class=MyCollection)

    @validates('bs')
    def _go(self, key, value):
        if not isinstance(value, B):
            value = B(data=value['data'])
        return value


class B(Base):
    __tablename__ = 'b'
    id = Column(Integer, primary_key=True)
    a_id = Column(ForeignKey('a.id'))
    data = Column(String)
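
A quick usage sketch for the above (untested here, just to show which
hook handles which operation):

a = A()

# wholesale replacement goes through MyCollection.convert()
# via @collection.converter
a.bs = [{"data": "one"}, {"data": "two"}]

# individual appends still go through the @validates('bs') hook
a.bs.append({"data": "three"})

assert all(isinstance(b, B) for b in a.bs)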


I think in the future, what might be nice here would be a new attribute 
event so that "converter" doesn't need to be used, and then @validates 
can include @validates.collection_validate or similar to handle this 
case.   The collection hooks are generally assuming that they are 
dealing with how an incoming object should be represented within the 
collection, not how to coerce an incoming value (e.g. I tried to use 
@collection.appender here for the individual appends, no go), so 
"converter" being where it is, and not at value reception time, is 
inconsistent.  The number of collection hooks present, compared to how 
impossible this use case is, is kind of a disaster.


I've added 
https://bitbucket.org/zzzeek/sqlalchemy/issues/3896/bulk_replace-assumes-incoming-values-are.









It looks like I could do what I want with a custom collection and the
collection.converter decorator. Any other ideas?


Thanks.






[sqlalchemy] Proposal to discontinue pymssql in favor of pyodbc

2017-01-25 Thread Randy Syring
There is a proposal open to discontinue pymssql development and point 
people towards pyodbc.  Since pymssql is a documented backend for SA, I 
figured there might be some people here who are interested.

If you have any skin in that game and want to comment, please visit the 
issue: https://github.com/pymssql/pymssql/issues/477

Thanks.



Re: [sqlalchemy] Bulk Operations with Joined Table Inheritance

2017-01-25 Thread mike bayer



On 01/24/2017 07:40 PM, Robert Sami wrote:

Thanks for the response Mike,

I agree that using Core is pretty clean. One approach I considered was
the following:

```
res = conn.execute(FooBase.__table__.insert(returning=[FooBase.id],
values=[{} for i in range(10)]))
conn.execute(FooDerived.__table__.insert(values=[{'id': _id, 'data':
'whatever'} for _id, in res.fetchall()]))
```

This is similar to the approach you outlined above, but also robust to
other transactions inserting in the table.


OK, so usually RETURNING doesn't work for "executemany()", but I see 
there you are packing them into one big VALUES clause and ultimately 
using cursor.execute(), so that should work, though you want to chunk 
the sizes into batches of 1000 or so or your SQL statement will grow too 
large.





The reason I would prefer to use `session.bulk_save_objects()` is that
this method is aware of default attribute values of objects. For example:

```
class FooDerived(..):
  ...
  data = db.Column(db.Integer, default=17)


that "default" is interpreted by the Core, not the ORM.   So your core 
statement should handle it too and you'd see those "17"s going in.  If 
not, let's get an MCVE and figure out why.




```

Creating a bunch of `FooDerived` objects will automatically set the
`data` attributes to their default value. So I was hoping there was some
way of using `session.bulk_save_objects()` to a similar effect as the
Core approach I shared above, which uses a "RETURNING" clause to know
the primary keys of the newly inserted `FooBase` rows. If not, do you
have any other thoughts or suggestions on how to get the best of both
worlds?

Many thanks!

On Tue, Jan 24, 2017 at 3:00 PM, mike bayer <mike...@zzzcomputing.com> wrote:



On 01/24/2017 04:49 PM, Robert Sami wrote:

Hi SQLAlchemy wizards.

I was interested in using the new bulk operations API

(http://docs.sqlalchemy.org/en/latest/orm/persistence_techniques.html#bulk-operations)
but was looking for some advice based on my use case.

I have a class “FooDerived” which corresponds to a table that is
linked
to “FooBase” using joined table inheritance. I want to use the
bulk_save_objects method to save, let's say, 100,000 instances of
“FooDerived”.

One option would be the following:

```
session.bulk_save_objects([FooBase() for i in range(10)])
session.flush()
foo_base_models = FooBase.query.filter(/* Assume it's possible to
filter
for the newly created objects*/).all()
session.bulk_save_objects([FooDerived(id=base.id) for base in
foo_base_models])
```

Is there a better way?


this would be expressed much more clearly and efficiently using Core
constructs, and you need a way of knowing the primary key for each
FooBase(), because as you have it above, where the primary key is
auto-generated, it would perform 100K SELECT statements:


from sqlalchemy import select, func

foobase = FooBase.__table__
fooderived = FooDerived.__table__
with engine.begin() as conn:
    # next available primary key (or 1 on an empty table)
    my_first_pk = (conn.scalar(select([func.max(foobase.c.id)])) or 0) + 1

    conn.execute(
        foobase.insert(),
        [{"id": ident + my_first_pk} for ident in range(10)]
    )
    conn.execute(
        fooderived.insert(),
        [{"id": ident + my_first_pk, "data": "whatever"} for ident in range(10)]
    )


of course you need to make sure no other transactions are INSERTing
rows with this approach or they will throw off your primary key counter.








Thank you!
