Hi list -
SQLAlchemy 0.8.0b2 is released. This is hopefully the last beta for 0.8:
things have been mostly quiet surrounding 0.8.0b1, we had a few regressions
fixed, plus a bunch of other new bugs whose fixes are only in 0.8. There was
also a little bit of additional refactoring in this beta.
On Fri, Dec 14, 2012 at 1:11 PM, Michael Bayer <mike...@zzzcomputing.com> wrote:
> Hi list -
> SQLAlchemy 0.8.0b2 is released. This is hopefully the last beta for 0.8,
> as things have been mostly quiet surrounding 0.8.0b1, we had a few
> regressions fixed, and a bunch of other new bugs whose fixes
I have just updated SQLAlchemy from 0.7.8 to 0.8.0b2 (the current pip
default) and the DropEverything recipe
(http://www.sqlalchemy.org/trac/wiki/UsageRecipes/DropEverything) has
stopped working. The problem is on the DropTable line, with this error:
sqlalchemy.exc.InternalError: (InternalError)
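For readers following along, the DropEverything recipe under discussion boils down to: reflect the existing table and foreign-key names, drop all inter-table foreign-key constraints first (so cyclic dependencies can't block the drops), then drop the tables. A condensed sketch of that approach, not the wiki recipe verbatim:

```python
from sqlalchemy import MetaData, Table, inspect
from sqlalchemy.schema import DropConstraint, DropTable, ForeignKeyConstraint

def drop_everything(engine):
    """Drop all tables, dropping inter-table foreign keys first."""
    inspector = inspect(engine)
    meta = MetaData()
    tables = []
    all_fks = []
    for table_name in inspector.get_table_names():
        fks = []
        for fk in inspector.get_foreign_keys(table_name):
            # unnamed constraints (common on SQLite) can't be dropped by name
            if fk["name"]:
                fks.append(ForeignKeyConstraint((), (), name=fk["name"]))
        # a stub Table carrying only the constraints is enough for DDL
        tables.append(Table(table_name, meta, *fks))
        all_fks.extend(fks)
    with engine.begin() as conn:
        for fk in all_fks:
            conn.execute(DropConstraint(fk))
        for table in tables:
            conn.execute(DropTable(table))
```

The empty-tuple `ForeignKeyConstraint((), (), name=...)` is a placeholder: only the constraint's name is needed to emit `ALTER TABLE ... DROP CONSTRAINT`.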
I have 2 tables with a third intermediary table. These tables are
(shortened): https://gist.github.com/9ff8afa793c9150c6b70
Using this, the association_proxy correctly reuses existing rows in the
database if they already exist. However, if I do this:
v = Version.query.first()
v.classifiers =
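The poster's gist and snippet are cut off in the archive; a minimal self-contained sketch of this kind of mapping (two tables joined through an intermediary, with an association proxy over the relationship) looks roughly like the following. The model and column names here mirror the thread but are assumptions, not the poster's actual code:

```python
from sqlalchemy import Column, ForeignKey, Integer, String, Table
from sqlalchemy.ext.associationproxy import association_proxy
from sqlalchemy.orm import declarative_base, relationship

Base = declarative_base()

# plain many-to-many association table between versions and classifiers
version_classifiers = Table(
    "version_classifiers", Base.metadata,
    Column("version_id", Integer, ForeignKey("versions.id"), primary_key=True),
    Column("classifier_id", Integer, ForeignKey("classifiers.id"), primary_key=True),
)

class Classifier(Base):
    __tablename__ = "classifiers"
    id = Column(Integer, primary_key=True)
    classifier = Column(String, unique=True)

class Version(Base):
    __tablename__ = "versions"
    id = Column(Integer, primary_key=True)
    _classifiers = relationship(Classifier, secondary=version_classifiers)
    # expose the related rows as a list of plain strings; this creator
    # builds a brand-new Classifier for each assigned string - no
    # deduplication, which is what the thread's get_or_create helper adds
    classifiers = association_proxy(
        "_classifiers", "classifier",
        creator=lambda name: Classifier(classifier=name))
```

With this mapping, `v.classifiers = ["Foo"]` replaces the underlying collection, and the proxy converts each string through the creator.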
Nothing has changed regarding that recipe, and I just ran it on a small set
of tables against PostgreSQL (which I can see is the DB you're using) and it
runs fine. Are you sure the identical schema *does* drop completely when this
recipe is run directly with 0.7.8? Can you provide the
I've cobbled together a complete and simplified test case given your mapping
and example code and I cannot reproduce with either 0.7 or 0.8 - the count of
rows in the association table is one on the first commit, and two on the second.
You need to adapt the attached test case into a full
Hrm. I'll see what I can do. Though looking at what you posted, it works for
me with that too. So the problem must be either with Flask-SQLAlchemy or
with my own app code.
On Friday, December 14, 2012 11:30:57 PM UTC-5, Michael Bayer wrote:
> I've cobbled together a complete and simplified test
It's probably some subtlety to the data that's already loaded and how the
collection is being mutated - it's unlikely Flask has anything to do with it.
There may or may not be some less-than-ideal or buggy behavior in the
association proxy, or it might be a premature flushing issue, but if you can
Ok, so here's what I've got so far. I believe it's related to the
association_proxy (using my application code, not the test case):

v = Version.query.first()
v._classifiers = [Classifier.get_or_create(u"Foo")]
db.session.commit()
v._classifiers = [Classifier.get_or_create(u"Foo"),
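`Classifier.get_or_create` itself never appears in the thread; a common shape for such a "look up by value, else create" helper is sketched below, with an explicit `session` argument for self-containment (the thread's version presumably closes over Flask-SQLAlchemy's `db.session`):

```python
from sqlalchemy import Column, Integer, String
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class Classifier(Base):
    __tablename__ = "classifiers"
    id = Column(Integer, primary_key=True)
    classifier = Column(String, unique=True)

    @classmethod
    def get_or_create(cls, session, name):
        # reuse an existing row when one matches, otherwise create one.
        # note the query can autoflush pending objects - the kind of
        # "premature flushing" Michael suspects earlier in the thread
        obj = session.query(cls).filter_by(classifier=name).first()
        if obj is None:
            obj = cls(classifier=name)
            session.add(obj)
        return obj
```

Because the lookup goes through the session's identity map, calling it twice with the same name returns the same instance rather than inserting a duplicate.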