Re: [sqlalchemy] Non backwards-compatible changes in 1.0? Lots of suddenly failing tests here.

2015-04-20 Thread Mike Bayer



On 4/20/15 12:56 PM, Guido Winkelmann wrote:
I just tested, the problem is still present in the current master 
(bd61e7a3287079cf742f4df698bfe3628c090522 from github). Guido W. 


can you please try current master at least as of 
a3af638e1a95d42075e25e874746, thanks.



--
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at http://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.


Re: [sqlalchemy] Non backwards-compatible changes in 1.0? Lots of suddenly failing tests here.

2015-04-20 Thread Oliver Palmer
Hey, original developer of pyfarm-master here; Guido pointed me at this thread. 
I've run a test with a3af638e1a95d42075e25e874746, and the sqlite tests are 
still failing to drop the tables:

https://travis-ci.org/pyfarm/pyfarm-master/builds/59341150


Since Guido commented, however, I merged a PR he proposed, and that seems to 
have fixed most of our other failures (everything except the drop-tables 
issue with sqlite).  I started to think there's something different with 
respect to sqlite, and wrote the following, which does almost exactly what 
the tests do (and it always works):

import os
os.environ.update(
    PYFARM_DATABASE_URI="sqlite:///:memory:"
)

from pyfarm.master.application import db

for i in range(5):
    # setUp
    from pyfarm.models.agent import Agent
    from pyfarm.models.job import Job
    from pyfarm.models.jobtype import JobType
    from pyfarm.models.software import (
        Software, SoftwareVersion, JobSoftwareRequirement,
        JobTypeSoftwareRequirement)
    from pyfarm.models.tag import Tag
    from pyfarm.models.task import Task
    from pyfarm.models.user import User
    from pyfarm.models.jobqueue import JobQueue
    from pyfarm.models.gpu import GPU
    db.create_all()

    # execute tests

    # tearDown
    db.session.remove()
    db.drop_all()


There's a gist of the above here for those who have issues displaying the 
above properly: https://gist.github.com/opalmer/0850879794e81198c3a0

If you run a test individually, it also always works:

env PYFARM_DATABASE_URI=sqlite:///:memory: nosetests tests/test_models/test_model_users.py:UserTest.test_user_auth_token


However, if you run something that requires a few test iterations, it almost 
always fails when dropping the tables:

env PYFARM_DATABASE_URI=sqlite:///:memory: nosetests tests/test_models/test_model_users.py:UserTest


I say "almost always fails" because on occasion the above will pass too. 
I'm only using the above test case as an example, but other tests seem to 
have the same problem.

So I got to thinking about what we're doing differently with sqlite and 
this bit of code comes to mind:


# sqlite specific configuration for development
if db.engine.name == "sqlite":
    @event.listens_for(Engine, "connect")
    def set_sqlite_pragma(dbapi_connection, connection_record):
        cursor = dbapi_connection.cursor()
        cursor.execute("PRAGMA foreign_keys=ON")
        cursor.execute("PRAGMA synchronous=OFF")
        cursor.execute("PRAGMA journal_mode=MEMORY")
        cursor.close()
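
For context on why a listener like that exists at all (a minimal stdlib-only sketch, not pyfarm code): SQLite pragmas such as foreign_keys apply per connection, not per database, so every fresh DBAPI connection starts with foreign key enforcement off again, and the pragma has to be re-issued on each connect:

```python
import sqlite3

# Each SQLite connection carries its own pragma state; foreign key
# enforcement defaults to OFF on every fresh connection.
conn1 = sqlite3.connect(":memory:")
conn1.execute("PRAGMA foreign_keys=ON")
fk1 = conn1.execute("PRAGMA foreign_keys").fetchone()[0]

# A brand-new connection does not inherit the pragma; it is OFF again.
conn2 = sqlite3.connect(":memory:")
fk2 = conn2.execute("PRAGMA foreign_keys").fetchone()[0]

print(fk1, fk2)  # 1 0
```

This is exactly why the application hooks the "connect" event rather than issuing the pragmas once at startup.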


If I comment the above out in our application.py module 
(https://github.com/pyfarm/pyfarm-master/blob/f22912cd7d89b93c146801fd1575ff06f4883724/pyfarm/master/application.py#L208), 
the second nosetests example above works without issues.  Here's a test on 
travis with sqlalchemy 1.0.0 and the above code commented out:

https://travis-ci.org/pyfarm/pyfarm-master/builds/59345110


And from the latest master (didn't expect a difference here but wanted to 
be sure):

https://travis-ci.org/pyfarm/pyfarm-master/builds/59346659


And here are the same tests, with the same code commented out, on 
sqlalchemy 0.9.9:

https://travis-ci.org/pyfarm/pyfarm-master/builds/59346146


 
So I'm not sure yet whether this is a bug in sqlalchemy 1.0.0+, because I 
didn't dive that deeply into the code changes for event handling. 
Regardless, I think this is probably something we should update in our 
tests anyway, since we could couple the execution of those pragma statements 
more closely with the tests as they run to avoid the issue in the future. 
I would be curious to know if this is actually a bug in event handling, 
though.

---Oliver

On Monday, April 20, 2015 at 7:22:47 PM UTC-4, Michael Bayer wrote:



 On 4/20/15 12:56 PM, Guido Winkelmann wrote: 
  I just tested, the problem is still present in the current master 
  (bd61e7a3287079cf742f4df698bfe3628c090522 from github). Guido W. 

 can you please try current master at least as of 
 a3af638e1a95d42075e25e874746, thanks. 




-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at http://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.


[sqlalchemy] Non backwards-compatible changes in 1.0? Lots of suddenly failing tests here.

2015-04-20 Thread Guido Winkelmann
Hi,

Have there been any non-backwards-compatible changes in SQLAlchemy 1.0 
compared to 0.9.9?

We are seeing a lot of sudden breakage in our unit tests when switching to 
SQLAlchemy 1.0 from 0.9.9. Tests that worked fine before suddenly fail 
across the board.

Here's an example of a test build that suddenly failed on 1.0:

https://travis-ci.org/pyfarm/pyfarm-master/builds/58860924

If you compare the builds on sqlite with those on MySQL/PostgreSQL, you will 
see there are two different, seemingly unrelated things going wrong:

On sqlite, drop_all() seems to fail to get the order of table drops right, 
and consequently runs into a referential integrity error.
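
To illustrate what that failure looks like at the SQLite level (a minimal stdlib sketch with hypothetical parent/child tables, not pyfarm's schema):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys=ON")
conn.execute("CREATE TABLE parent (id INTEGER PRIMARY KEY)")
conn.execute(
    "CREATE TABLE child (id INTEGER PRIMARY KEY, "
    "parent_id INTEGER REFERENCES parent(id))"
)
conn.execute("INSERT INTO parent VALUES (1)")
conn.execute("INSERT INTO child VALUES (1, 1)")

# With foreign_keys ON, dropping the referenced table first fails:
# DROP TABLE implicitly deletes parent's rows while child still
# references them, violating the constraint.
try:
    conn.execute("DROP TABLE parent")
    drop_failed = False
except sqlite3.IntegrityError:
    drop_failed = True

# Dropping in dependency order (child before parent) succeeds.
conn.execute("DROP TABLE child")
conn.execute("DROP TABLE parent")
print(drop_failed)  # True: drop order matters once the pragma is on
```

With the foreign_keys pragma off (SQLite's default), the out-of-order drop would have succeeded silently, which is consistent with the problem only appearing when the pragma listener is active.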

On MySQL/PostgreSQL, this line fails:

association = TaskTaskLogAssociation.query.filter_by(
    task=task, log=task_log, attempt=attempt).first()

In this context, log is a relationship on the model TaskTaskLogAssociation 
pointing to the model TaskLog. task_log is an object of type TaskLog, but 
one that has never been written to the database and has no id set. That 
leads to this error message in the logs:

nose.proxy.ProgrammingError: (psycopg2.ProgrammingError) function 
symbol(unknown) does not exist
LINE 3: ...72015052936_task_log_associations.attempt = 1 AND symbol('NE...
 ^
HINT:  No function matches the given name and argument types. You might 
need to add explicit type casts.
 [SQL: 'SELECT test29172015052936_task_log_associations.task_log_id AS 
test29172015052936_task_log_associations_task_log_id, 
test29172015052936_task_log_associations.task_id AS 
test29172015052936_task_log_associations_task_id, 
test29172015052936_task_log_associations.attempt AS 
test29172015052936_task_log_associations_attempt, 
test29172015052936_task_log_associations.state AS 
test29172015052936_task_log_associations_state \nFROM 
test29172015052936_task_log_associations \nWHERE 
test29172015052936_task_log_associations.attempt = %(attempt_1)s AND 
%(param_1)s = test29172015052936_task_log_associations.task_log_id AND 
%(param_2)s = test29172015052936_task_log_associations.task_id \n LIMIT 
%(param_3)s'] [parameters: {'param_1': symbol('NEVER_SET'), 'attempt_1': 1, 
'param_2': 1, 'param_3': 1}]

Apparently, sqlalchemy will use symbol('NEVER_SET') where the id of the 
model used for filtering should be.

It may be a bit questionable to filter by a model that doesn't even exist 
in the database, but, again, this used to work fine in 0.9.9.
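
For what it's worth, the pattern can be made unambiguous by flushing the transient object before filtering on it, so the relationship comparison has a real primary key to work with. A minimal sketch with hypothetical models (not pyfarm's; assumes SQLAlchemy 1.4+ for the sqlalchemy.orm.declarative_base import):

```python
from sqlalchemy import Column, ForeignKey, Integer, create_engine
from sqlalchemy.orm import Session, declarative_base, relationship

Base = declarative_base()

class TaskLog(Base):
    __tablename__ = "task_logs"
    id = Column(Integer, primary_key=True)

class Association(Base):
    __tablename__ = "associations"
    id = Column(Integer, primary_key=True)
    task_log_id = Column(Integer, ForeignKey("task_logs.id"))
    log = relationship(TaskLog)

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
session = Session(engine)

task_log = TaskLog()
session.add(task_log)
session.flush()  # assigns task_log.id, so the filter compares a real value

assoc = Association(log=task_log)
session.add(assoc)
session.flush()

found = session.query(Association).filter_by(log=task_log).first()
print(found is assoc)  # True
```

Without the first flush, the filter would be comparing against a primary key that does not exist yet, which is the ambiguity at the heart of this thread.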

Regards,
  Guido W.



Re: [sqlalchemy] Non backwards-compatible changes in 1.0? Lots of suddenly failing tests here.

2015-04-20 Thread Michael Bayer




 On Apr 20, 2015, at 12:56 PM, Guido Winkelmann 
 gu...@ambient-entertainment.de wrote:
 
 On Monday 20 April 2015 11:23:06 Mike Bayer wrote:
 On 4/20/15 8:09 AM, Guido Winkelmann wrote:
 [...]
 On sqlite, drop_all() seems to fail to get the order of table drops
 right, and consequently runs into a referential integrity error.
 
 If you can post a reproducible issue, that's what I can work with.
 
 I'm afraid the best I can offer right now is the current state of the pyfarm-
 master code base.  It's 100% reproducible there, but it's not exactly a 
 reduced test case...
 
 There are changes to how tables are sorted in the absence of foreign key
 dependency; where this ordering was previously undefined, it is now
 deterministic; see
 http://docs.sqlalchemy.org/en/latest/changelog/changelog_10.html#change-aab332eedafc8e090f42b89ac7a67e6c.
 On MySQL/PostgreSQL, this line fails:
 
 Apparently, sqlalchemy will use symbol('NEVER_SET') where the id of
 the model used for filtering should be.
 
 this is a known regression and is fixed in 1.0.1:
 http://docs.sqlalchemy.org/en/latest/changelog/changelog_10.html#change-1.0.1
 
 
 if you can confirm with current master that this is fixed I can release
 today or tomorrow as this particular regression is fairly severe.
 
 I just tested, the problem is still present in the current master 
 (bd61e7a3287079cf742f4df698bfe3628c090522 from github).


Oh, I read your text; while you haven't provided a code sample, it sounds like 
you are possibly saying filter(Foo.relationship == some_transient_object) and 
expecting that all the None values come out.   Yes?   That is just the kind of 
"just happened to work" example I'm talking about.   Can you confirm this is 
what you are doing, please?   Hopefully we can find a fix for that.    There is 
an entry detailing the behavioral change here, but these effects were 
unanticipated (hence there were five betas, to little avail).



 
Guido W.
 



Re: [sqlalchemy] Non backwards-compatible changes in 1.0? Lots of suddenly failing tests here.

2015-04-20 Thread Guido Winkelmann
On Monday 20 April 2015 11:23:06 Mike Bayer wrote:
On 4/20/15 8:09 AM, Guido Winkelmann wrote:
[...]
 On sqlite, drop_all() seems to fail to get the order of table drops
 right, and consequently runs into a referential integrity error.

If you can post a reproducible issue, that's what I can work with.

I'm afraid the best I can offer right now is the current state of the pyfarm-
master code base.  It's 100% reproducible there, but it's not exactly a 
reduced test case...

There are changes to how tables are sorted in the absence of foreign key
dependency; where this ordering was previously undefined, it is now
deterministic; see
http://docs.sqlalchemy.org/en/latest/changelog/changelog_10.html#change-aab332eedafc8e090f42b89ac7a67e6c.
 On MySQL/PostgreSQL, this line fails:
 
 Apparently, sqlalchemy will use symbol('NEVER_SET') where the id of
 the model used for filtering should be.

this is a known regression and is fixed in 1.0.1:
http://docs.sqlalchemy.org/en/latest/changelog/changelog_10.html#change-1.0.1


if you can confirm with current master that this is fixed I can release
today or tomorrow as this particular regression is fairly severe.

I just tested, the problem is still present in the current master 
(bd61e7a3287079cf742f4df698bfe3628c090522 from github).

Guido W.



Re: [sqlalchemy] Non backwards-compatible changes in 1.0? Lots of suddenly failing tests here.

2015-04-20 Thread Mike Bayer



On 4/20/15 8:09 AM, Guido Winkelmann wrote:

Hi,

Have there been any non-backwards-compatible changes in SQLAlchemy 1.0 
compared to 0.9.9?
Most behavioral changes are listed at 
http://docs.sqlalchemy.org/en/rel_1_0/changelog/migration_10.html; I've 
urged everyone to please read through this document.  None of the 
behavioral changes are backwards-incompatible at face value.  However, 
the nature of SQLAlchemy is necessarily one where there are lots of 
behaviors that applications can find themselves relying upon, and when 
we improve those behaviors, applications which relied upon bugs, 
inconsistencies, or things that just happened to work a certain way can 
break when we make things more consistent or apply definitions to 
behaviors that were previously undefined.


There have also been five beta releases put out.  In particular, the 
NEVER_SET issue you are hitting is a known regression that is now 
fixed, but unfortunately not enough people were interested in trying out 
any of those five beta releases in order to catch this fairly common 
condition, so it is fixed only as of 1.0.1.



If you compare the builds on sqlite with those MySQL/PostgreSQL, you 
will see there are two different, seemingly unrelated things going wrong:


On sqlite, drop_all() seems to fail to get the order of table drops 
right, and consequently runs into a referential integrity error.

If you can post a reproducible issue, that's what I can work with. 
There are changes to how tables are sorted in the absence of foreign key 
dependency; where this ordering was previously undefined, it is now 
deterministic; see 
http://docs.sqlalchemy.org/en/latest/changelog/changelog_10.html#change-aab332eedafc8e090f42b89ac7a67e6c. 
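
The ordering in question is what MetaData.sorted_tables produces. A small sketch (assumes SQLAlchemy is installed; the table names here are made up) showing that tables with no foreign key dependency between them come back in a deterministic, name-based order under 1.0+:

```python
from sqlalchemy import Column, Integer, MetaData, Table

metadata = MetaData()
# Two tables with no foreign key relationship between them:
Table("zebra", metadata, Column("id", Integer, primary_key=True))
Table("apple", metadata, Column("id", Integer, primary_key=True))

# sorted_tables is dependency-ordered; absent any dependency, the
# 1.0+ fallback is deterministic (alphabetical by name), where it
# was previously undefined.
order = [t.name for t in metadata.sorted_tables]
print(order)  # ['apple', 'zebra']
```

Code that implicitly relied on the old, undefined ordering (for example, a schema whose drop order only happened to work) can therefore start failing after the upgrade.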






On MySQL/PostgreSQL, this line fails:

Apparently, sqlalchemy will use symbol('NEVER_SET') where the id of 
the model used for filtering should be.


this is a known regression and is fixed in 1.0.1: 
http://docs.sqlalchemy.org/en/latest/changelog/changelog_10.html#change-1.0.1 



if you can confirm with current master that this is fixed I can release 
today or tomorrow as this particular regression is fairly severe.







Re: [sqlalchemy] Non backwards-compatible changes in 1.0? Lots of suddenly failing tests here.

2015-04-20 Thread Mike Bayer



On 4/20/15 8:09 AM, Guido Winkelmann wrote:

Hi,

Have there been any non-backwards-compatible changes in SQLAlchemy 1.0 
compared to 0.9.9?


We are seeing a lot of sudden breakage in our unit tests when 
switching to SQLAlchemy 1.0 from 0.9.9. Tests that worked fine before 
suddenly fail across the board.


Here's a an example of a test build that suddenly failed on 1.0:

https://travis-ci.org/pyfarm/pyfarm-master/builds/58860924

If you compare the builds on sqlite with those MySQL/PostgreSQL, you 
will see there are two different, seemingly unrelated things going wrong:


On sqlite, drop_all() seems to fail to get the order of table drops 
right, and consequently runs into a referential integrity error.


On MySQL/PostgreSQL, this line fails:

association = TaskTaskLogAssociation.query.filter_by(task=task, 
log=task_log, attempt=attempt).first()


In this context, log is a relationship in the 
model TaskTaskLogAssociation to model TaskLog. task_log is an object 
of type TaskLog, but one that has never been written to the database 
and has no set id. That leads to this error message in the logs:


nose.proxy.ProgrammingError: (psycopg2.ProgrammingError) function 
symbol(unknown) does not exist

LINE 3: ...72015052936_task_log_associations.attempt = 1 AND symbol('NE...
   ^
HINT:  No function matches the given name and argument types. You 
might need to add explicit type casts.
 [SQL: 'SELECT test29172015052936_task_log_associations.task_log_id AS 
test29172015052936_task_log_associations_task_log_id, 
test29172015052936_task_log_associations.task_id AS 
test29172015052936_task_log_associations_task_id, 
test29172015052936_task_log_associations.attempt AS 
test29172015052936_task_log_associations_attempt, 
test29172015052936_task_log_associations.state AS 
test29172015052936_task_log_associations_state \nFROM 
test29172015052936_task_log_associations \nWHERE 
test29172015052936_task_log_associations.attempt = %(attempt_1)s AND 
%(param_1)s = test29172015052936_task_log_associations.task_log_id AND 
%(param_2)s = test29172015052936_task_log_associations.task_id \n 
LIMIT %(param_3)s'] [parameters: {'param_1': symbol('NEVER_SET'), 
'attempt_1': 1, 'param_2': 1, 'param_3': 1}]


Apparently, sqlalchemy will use symbol('NEVER_SET') where the id of 
the model used for filtering should be.


It may be a bit questionable to filter by a model that doesn't even 
exist in the database, but, again, this used to work fine in 0.9.9.


This is odd. What exactly was the "working fine" in 0.9.9 doing?   Was it 
coming out with NULL = 
test29172015052936_task_log_associations.task_log_id ?  Looking in 
0.9, there is no logic to convert the = to IS in this case, as the 
parameter from the object is not evaluated until after the query is 
generated.    This query will *always* return no rows, because 
NULL cannot be compared with =.


I guess that's what's desired here, that the query returns nothing, but 
this is a lot like the idea of x IN (), e.g. it's useless to emit this 
query, and it relies upon kind of a weird quirk of SQL. I almost wonder 
if this should emit a warning.  Because if we do eventually make it so 
that IS NULL comes out, the results can change for more complex 
relationships that explicitly want to compare some columns to NULL.
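
That quirk is easy to check directly; a stdlib sketch of why NULL = col can never match a row, while IS NULL does:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (task_log_id INTEGER)")
conn.executemany("INSERT INTO t VALUES (?)", [(1,), (None,)])

# NULL = column evaluates to NULL (not true) for every row,
# so the WHERE clause filters everything out:
never_matches = conn.execute(
    "SELECT * FROM t WHERE NULL = task_log_id").fetchall()

# IS NULL is the comparison that actually matches the NULL row:
is_null = conn.execute(
    "SELECT * FROM t WHERE task_log_id IS NULL").fetchall()

print(len(never_matches), len(is_null))  # 0 1
```

This is why emitting the = comparison with a NEVER_SET placeholder can only ever produce an empty result (or, on stricter backends like PostgreSQL here, an outright error).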


Regards,
  Guido W.




Re: [sqlalchemy] The use of SQLAlchemy for a long term project

2015-04-20 Thread Van Klaveren, Brian N.
Thanks for the detailed response. I didn't think to look to Red Hat to see if 
they backport security fixes, so that's good to know.

As for the undefined behavior with respect to less-than idiomatic programming, 
I think that's something we'll definitely need to keep in mind and hopefully be 
able to enforce with code reviews.

I've personally been wanting to use SQLAlchemy, but it's important that we 
understand the implications of decisions like this when the lifetime of the 
code is guaranteed to surpass a decade. I'm pretty sure we'll end up using 
SQLAlchemy for the long term.

Thanks again,
Brian


On Apr 18, 2015, at 2:47 PM, Mike Bayer 
mike...@zzzcomputing.com wrote:



On 4/17/15 6:58 PM, Van Klaveren, Brian N. wrote:
Hi,

I'm investigating the use of, and dependency on, SQLAlchemy for a long-term 
astronomy project. Given that Version 1.0 just came out, I've got a few 
questions about it.

1. It seems SQLAlchemy generally EOLs versions after about two releases/years. 
Is this an official policy? Is this to continue with version 1.0 as well? Or is 
it possible 1.0 might be something of a long-term release?
2. While well documented and typically minimal, SQLAlchemy does have occasional 
API and behavioral changes to be aware of between versions. Is the 1.0 API more 
likely to be stable on the time frame of ~4 years?

Put another way, would you expect that it should be easier to migrate from 
version 1.0 to 1.4 (or whatever the current version is then) of SQLAlchemy in 
five years than it is to migrate from 0.6 to 1.0 today?

I know these questions are often hard to answer with any certainty, but these 
sorts of projects typically outlive the software they are built on and are 
often underfunded as far as software maintenance goes, so we try to plan 
accordingly.

(Of course, some people just give up and throw everything in VMs behind 
firewalls.)
Well, the vast majority of bugs that are fixed, like 99% of them, impact only 
new development; that is, they only have a positive impact on someone who is 
writing new code, using new features of their database backend, or otherwise 
attempting to do something new. Backporting them would typically only serve to 
raise risk and decrease stability for code that is not under active development 
and is stabilized on older versions of software.

These kinds of issues mean that some way of structuring tables, mapped classes, 
core SQL or DDL objects, ORM queries, or calls to a Session produce some 
unexpected result, but virtually always, this unexpected result is consistent 
and predictable.   An application that is sitting on 0.5 or 0.6 and is running 
perfectly fine, because it hasn't hit any of these issues, or quite often 
because it has and is working around them (or even relying upon their behavior) 
would not benefit at all from these kinds of fixes being backported, but would 
instead have a greater chance of hitting a regression or a change in 
assumptions if lots of bugfixes were being backported from two or three major 
versions forward.

So it's not that we don't backport fixes three or four years back because it's 
too much trouble; it's that these backports wouldn't benefit anyone, and they 
would only serve to wreak havoc with old and less-maintained applications when 
some small new feature or improvement in behavioral consistency breaks some 
assumption made by that application.

As far as issues that are more appropriate for backporting, which would be 
security fixes and stability enhancements, we almost never have issues like 
that. The issues we have regarding stability, like memory leaks and race 
conditions, again typically occur in conjunction with a user application doing 
something strange and unexpected (e.g. new development), and as far as security 
goes, the only issue we ever had that even resembled a security issue was 
issue 2116, involving limit/offset integers not being escaped, which was 
backported from 0.7 to 0.6.  Users who actually needed enterprise-level 
longevity who happened to be using, for example, the Red Hat package could see 
the fix for this issue backported all the way to their 0.5 and 0.3 
packages.  But the presence of security/memory leak/stability issues in modern 
versions is extremely rare, and we generally only see new issues involving 
memory or stability as a result of new features (e.g. regressions).

There's also the class of issues that involve performance enhancements.   Some 
of these features would arguably be appropriate to backport more than several 
major versions, but again they are often the result of significant internal 
refactorings and definitely would raise risk for an older application not 
undergoing active development.   An older application that wants to take 
advantage of newer performance features would be better off going through the 
upgrade process than risking running on top of a library that is a hybrid of 
very old code and backported newer approaches, which will see