Re: [sqlalchemy] Non backwards-compatible changes in 1.0? Lots of suddenly failing tests here.

2015-04-22 Thread Guido Winkelmann
On Wednesday 22 April 2015 14:17:02 Mike Bayer wrote:
On 4/22/15 12:45 AM, Oliver Palmer wrote:
 We're using a Flask extension to work with sqlalchemy called
 flask-sqlalchemy.  The engine is usually not directly exposed, but echo
 can be enabled using a configuration var
 (https://github.com/pyfarm/pyfarm-master/commit/5d0abc03273f0fcce3c7d2cf44ef8981dd31aa41),
 which should have the same impact in terms of logging the SQL
 statements.  This time around I only ran the tests for one module
 because we hit Travis's 4MB log limit in a couple of seconds when
 running all the tests at once.  I can run the full test suite locally
 and upload that log output somewhere if you need it.
 
 Otherwise, here are the failures when using
 3e80d628bd133d0fd0687e35b8d13abd1d31d6df (search for 'IntegrityError'):
Can you please try version 39978060b0d81bd470aade97e608?

All issues should be resolved at this point, please let me know ASAP,
thanks!

I just tested, all our tests pass now.  Thanks again!

Guido W.



Re: [sqlalchemy] Non backwards-compatible changes in 1.0? Lots of suddenly failing tests here.

2015-04-21 Thread Guido Winkelmann
On Monday 20 April 2015 13:24:49 Michael Bayer wrote:
On Apr 20, 2015, at 12:56 PM, Guido Winkelmann guido@ambient-entertainment.de wrote:
 On Monday 20 April 2015 11:23:06 Mike Bayer wrote:
 
 On 4/20/15 8:09 AM, Guido Winkelmann wrote:
 On MySQL/PostgreSQL, this line fails:
 
 Apparently, sqlalchemy will use symbol('NEVER_SET') where the id of
 the model used for filtering should be.
 
 this is a known regression and is fixed in 1.0.1:
 http://docs.sqlalchemy.org/en/latest/changelog/changelog_10.html#change-1.0.1
 
 
 If you can confirm with current master that this is fixed, I can release
 today or tomorrow, as this particular regression is fairly severe.
 
 I just tested, the problem is still present in the current master
 (bd61e7a3287079cf742f4df698bfe3628c090522 from github).

Oh, reading your text: while you haven't provided a code sample, it sounds like
you are possibly saying filter(Foo.relationship == some_transient_object)
and expecting that all the None values come out.  Yes?  That is just the
kind of "just happened to work" example I'm talking about.  Can you
confirm this is what you are doing, please?

Yes, basically.

Hopefully I can find a fix for that.  There is an entry detailing the
behavioral change here, but these effects were unanticipated (hence there
were five betas, to little avail).
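
(For later readers, the pattern being confirmed looks roughly like this; a
minimal, runnable sketch with invented Parent/Child models:)

from sqlalchemy import Column, Integer, ForeignKey, create_engine
from sqlalchemy.orm import relationship, Session
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class Parent(Base):
    __tablename__ = 'parent'
    id = Column(Integer, primary_key=True)

class Child(Base):
    __tablename__ = 'child'
    id = Column(Integer, primary_key=True)
    parent_id = Column(Integer, ForeignKey('parent.id'))
    parent = relationship(Parent)

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)
session = Session(engine)

p = Parent()  # transient: never added to a session, primary key never set
# Under 0.9.9 this "just happened to work"; under 1.0.0 it rendered
# symbol('NEVER_SET') as the comparison value, as seen in the errors upthread.
session.query(Child).filter(Child.parent == p).all()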

Guido W.



Re: [sqlalchemy] Non backwards-compatible changes in 1.0? Lots of suddenly failing tests here.

2015-04-21 Thread Guido Winkelmann
On Tuesday 21 April 2015 09:43:51 Mike Bayer wrote:
On 4/21/15 6:45 AM, Guido Winkelmann wrote:
 On Monday 20 April 2015 21:57:40 Oliver Palmer wrote:
 [...]
 
 So I got to thinking about what we're doing differently with sqlite and
 
 this bit of code comes to mind:
 # sqlite specific configuration for development
 if db.engine.name == "sqlite":
     @event.listens_for(Engine, "connect")
     def set_sqlite_pragma(dbapi_connection, connection_record):
         cursor = dbapi_connection.cursor()
         cursor.execute("PRAGMA foreign_keys=ON")
         cursor.execute("PRAGMA synchronous=OFF")
         cursor.execute("PRAGMA journal_mode=MEMORY")
         cursor.close()
 
 If I comment the above out in our application.py module
 (https://github.com/pyfarm/pyfarm-master/blob/f22912cd7d89b93c146801fd1575ff06f4883724/pyfarm/master/application.py#L208),
 the second nosetests example above works without issues.
 
 This looks to me like you are fixing the problem by just not enabling
 foreign key support in sqlite.  Since the problem was a foreign key
 violation, telling sqlite not to bother enforcing those will make the
 tests stop showing the problem, but it doesn't fix whatever is actually
 going on here...
What is needed here is actual logging of the tables as they are being
dropped.  The claim here is that the ordering of the tables is wrong in
1.0.0.  So can we please see the full list of DROP statements logged for
both the 0.9.9 version and the 1.0.0 version?

Can you help me with that?  I don't know how to make SQLAlchemy log all its 
DROP statements.
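
(For the record, two ways that should capture the DDL: the engine-level echo
flag, or the stdlib logger SQLAlchemy writes to; the latter also reaches
engines created behind the scenes by flask-sqlalchemy.)

import logging
from sqlalchemy import MetaData, create_engine

# Option 1: ask the engine to echo every statement it executes.
engine = create_engine("sqlite://", echo=True)

# Option 2: raise the level on SQLAlchemy's logger; INFO shows statements.
logging.basicConfig()
logging.getLogger("sqlalchemy.engine").setLevel(logging.INFO)

# metadata stands in for the app's real MetaData (db.metadata) here;
# each DROP TABLE that drop_all() issues is now logged in order.
metadata = MetaData()
metadata.drop_all(engine)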

Guido W.



Re: [sqlalchemy] Non backwards-compatible changes in 1.0? Lots of suddenly failing tests here.

2015-04-21 Thread Guido Winkelmann
On Monday 20 April 2015 19:22:36 Mike Bayer wrote:
On 4/20/15 12:56 PM, Guido Winkelmann wrote:
 I just tested, the problem is still present in the current master
 (bd61e7a3287079cf742f4df698bfe3628c090522 from github). Guido W.

Can you please try current master, at least as of
a3af638e1a95d42075e25e874746?  Thanks.

I just updated to 3e80d628bd133d0fd0687e35b8d13abd1d31d6df (which comes after 
a3af638e1a95d42075e25e874746).

The problem has gone away with those changes.  Thanks for looking into this!

(We're still going to merge the patch where we drop the query entirely if the 
object we compare the relationship to is transient...)
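
(Roughly, that guard could look like this; a sketch using sqlalchemy.inspect,
with the helper name invented:)

from sqlalchemy import inspect

def association_for(task, task_log, attempt):
    # A transient object was never flushed, so it cannot match any row;
    # skip the round trip to the database entirely.
    if inspect(task_log).transient:
        return None
    return TaskTaskLogAssociation.query.filter_by(
        task=task, log=task_log, attempt=attempt).first()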

Guido W.



Re: [sqlalchemy] Non backwards-compatible changes in 1.0? Lots of suddenly failing tests here.

2015-04-21 Thread Guido Winkelmann
On Monday 20 April 2015 21:57:40 Oliver Palmer wrote:
[...]
So I got to thinking about what we're doing differently with sqlite and
this bit of code comes to mind:


# sqlite specific configuration for development
if db.engine.name == "sqlite":
    @event.listens_for(Engine, "connect")
    def set_sqlite_pragma(dbapi_connection, connection_record):
        cursor = dbapi_connection.cursor()
        cursor.execute("PRAGMA foreign_keys=ON")
        cursor.execute("PRAGMA synchronous=OFF")
        cursor.execute("PRAGMA journal_mode=MEMORY")
        cursor.close()


If I comment the above out in our application.py module
(https://github.com/pyfarm/pyfarm-master/blob/f22912cd7d89b93c146801fd1575ff06f4883724/pyfarm/master/application.py#L208),
the second nosetests example above works without issues.

This looks to me like you are fixing the problem by just not enabling
foreign key support in sqlite.  Since the problem was a foreign key
violation, telling sqlite not to bother enforcing those will make the tests
stop showing the problem, but it doesn't fix whatever is actually going on
here...

Guido W.



[sqlalchemy] Non backwards-compatible changes in 1.0? Lots of suddenly failing tests here.

2015-04-20 Thread Guido Winkelmann
Hi,

Have there been any non-backwards-compatible changes in SQLAlchemy 1.0 
compared to 0.9.9?

We are seeing a lot of sudden breakage in our unit tests when switching to 
SQLAlchemy 1.0 from 0.9.9. Tests that worked fine before suddenly fail 
across the board.

Here's an example of a test build that suddenly failed on 1.0:

https://travis-ci.org/pyfarm/pyfarm-master/builds/58860924

If you compare the builds on sqlite with those on MySQL/PostgreSQL, you will
see that there are two different, seemingly unrelated things going wrong:

On sqlite, drop_all() seems to fail to get the order of table drops right, 
and consequently runs into a referential integrity error.

On MySQL/PostgreSQL, this line fails:

association = TaskTaskLogAssociation.query.filter_by(task=task, 
log=task_log, attempt=attempt).first()

In this context, log is a relationship on the model TaskTaskLogAssociation
to the model TaskLog.  task_log is an object of type TaskLog, but one that
has never been written to the database and has no id set.  That leads to
this error message in the logs:

nose.proxy.ProgrammingError: (psycopg2.ProgrammingError) function 
symbol(unknown) does not exist
LINE 3: ...72015052936_task_log_associations.attempt = 1 AND symbol('NE...
 ^
HINT:  No function matches the given name and argument types. You might 
need to add explicit type casts.
 [SQL: 'SELECT test29172015052936_task_log_associations.task_log_id AS 
test29172015052936_task_log_associations_task_log_id, 
test29172015052936_task_log_associations.task_id AS 
test29172015052936_task_log_associations_task_id, 
test29172015052936_task_log_associations.attempt AS 
test29172015052936_task_log_associations_attempt, 
test29172015052936_task_log_associations.state AS 
test29172015052936_task_log_associations_state \nFROM 
test29172015052936_task_log_associations \nWHERE 
test29172015052936_task_log_associations.attempt = %(attempt_1)s AND 
%(param_1)s = test29172015052936_task_log_associations.task_log_id AND 
%(param_2)s = test29172015052936_task_log_associations.task_id \n LIMIT 
%(param_3)s'] [parameters: {'param_1': symbol('NEVER_SET'), 'attempt_1': 1, 
'param_2': 1, 'param_3': 1}]

Apparently, sqlalchemy will use symbol('NEVER_SET') where the id of the 
model used for filtering should be.

It may be a bit questionable to filter by a model instance that doesn't even
exist in the database but, again, this used to work fine in 0.9.9.

Regards,
  Guido W.



Re: [sqlalchemy] Non backwards-compatible changes in 1.0? Lots of suddenly failing tests here.

2015-04-20 Thread Guido Winkelmann
On Monday 20 April 2015 11:23:06 Mike Bayer wrote:
On 4/20/15 8:09 AM, Guido Winkelmann wrote:
[...]
 On sqlite, drop_all() seems to fail to get the order of table drops
 right, and consequently runs into a referential integrity error.

If you can post a reproducible issue, that's what I can work with.

I'm afraid the best I can offer right now is the current state of the
pyfarm-master code base.  It's 100% reproducible there, but it's not exactly
a reduced test case...

There are changes to how tables are sorted in the absence of foreign key
dependency; where this ordering was previously undefined, it is now
deterministic.  See
http://docs.sqlalchemy.org/en/latest/changelog/changelog_10.html#change-aab332eedafc8e090f42b89ac7a67e6c.
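
(To compare the orderings directly: metadata.sorted_tables holds the
dependency-sorted CREATE order, and drop_all() walks it in reverse.  A short
sketch against flask-sqlalchemy's db handle:)

for table in reversed(db.metadata.sorted_tables):
    print table.name  # the order in which drop_all() will issue DROP TABLE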
 On MySQL/PostgreSQL, this line fails:
 
 Apparently, sqlalchemy will use symbol('NEVER_SET') where the id of
 the model used for filtering should be.

this is a known regression and is fixed in 1.0.1:
http://docs.sqlalchemy.org/en/latest/changelog/changelog_10.html#change-1.0.1


If you can confirm with current master that this is fixed, I can release
today or tomorrow, as this particular regression is fairly severe.

I just tested, the problem is still present in the current master 
(bd61e7a3287079cf742f4df698bfe3628c090522 from github).

Guido W.



Re: [sqlalchemy] Add an integer column as seconds to a datetime column in filter

2015-01-16 Thread Guido Winkelmann
Does this only work on PostgreSQL? I need this to work on PostgreSQL, MySQL 
and SQLite.

On Thursday, 15 January 2015 at 20:15:43 UTC+1, Michael Bayer wrote:

 You should be able to cast as interval:

 from sqlalchemy.dialects.postgresql import INTERVAL
 from sqlalchemy import cast, func, select

 select([test]).where(
     test.c.finished_on + cast('1 sec', INTERVAL) * test.c.wait < func.now())


 Guido Winkelmann gu...@ambient-entertainment.de wrote:

  Hi,

  How can I add one column containing integer values (NULL allowed),
  treating them as seconds, to another column from the same table
  containing datetime values, and compare the result to the current time?

  I have a table with a datetime column finished_on and an int column
  wait.  Once wait seconds have passed since finished_on, I need to do
  something with that row.  In postgres, the query to find these rows
  would look something like this:

  SELECT * FROM test WHERE finished_on + INTERVAL '1 sec' * wait < NOW();

  How can I make a query like that in sqlalchemy?

  I have tried googling the problem, but the only solutions I found were
  those where you already have the interval value available in the calling
  Python code.  This doesn't help me here, since the interval can be
  different for each row.

  Regards,

  Guido W.
  
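
(On the portability question above: the cast-to-INTERVAL route is
PostgreSQL-specific, but SQLAlchemy's documented compiler extension can
render one construct per dialect.  A sketch; the seconds_passed helper is
invented, and each dialect's rendering should be verified:)

from sqlalchemy.ext.compiler import compiles
from sqlalchemy.sql.expression import FunctionElement

class seconds_passed(FunctionElement):
    # seconds_passed(finished_on, wait): true once finished_on + wait
    # seconds lies in the past.
    name = 'seconds_passed'

@compiles(seconds_passed, 'postgresql')
def pg_seconds_passed(element, compiler, **kw):
    finished_on, wait = list(element.clauses)
    return "%s + interval '1 second' * %s < now()" % (
        compiler.process(finished_on), compiler.process(wait))

@compiles(seconds_passed, 'mysql')
def mysql_seconds_passed(element, compiler, **kw):
    finished_on, wait = list(element.clauses)
    return "%s + INTERVAL %s SECOND < NOW()" % (
        compiler.process(finished_on), compiler.process(wait))

@compiles(seconds_passed, 'sqlite')
def sqlite_seconds_passed(element, compiler, **kw):
    finished_on, wait = list(element.clauses)
    return "datetime(%s, %s || ' seconds') < datetime('now')" % (
        compiler.process(finished_on), compiler.process(wait))

# usage:
#   select([test]).where(seconds_passed(test.c.finished_on, test.c.wait))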




[sqlalchemy] Add an integer column as seconds to a datetime column in filter

2015-01-15 Thread Guido Winkelmann
Hi,

How can I add one column containing integer values (NULL allowed), treating
them as seconds, to another column from the same table containing datetime
values, and compare the result to the current time?

I have a table with a datetime column finished_on and an int column wait.
Once wait seconds have passed since finished_on, I need to do something with
that row.  In postgres, the query to find these rows would look something like
this:

SELECT * FROM test WHERE finished_on + INTERVAL '1 sec' * wait < NOW();

How can I make a query like that in sqlalchemy?

I have tried googling the problem, but the only solutions I found were those
where you already have the interval value available in the calling Python
code.  This doesn't help me here, since the interval can be different for
each row.

Regards,

Guido W.



Re: [sqlalchemy] How to retry failed transactions?

2014-12-01 Thread Guido Winkelmann
On Friday 28 November 2014 16:38:47 Michael Bayer wrote:
 On Nov 28, 2014, at 9:28 AM, Guido Winkelmann gu...@ambient-entertainment.de wrote:
[...]
 The problem is, in SQLAlchemy, at least with a PostgreSQL+psycopg2 backend,
 the exception I get when that happens is InvalidRequestError. This is
 not very helpful, as I cannot tell the difference between a genuinely
 invalid request and a mere transaction failure like this. I also don't
 know if the exception will be different on different backends.

Then that means you're doing something incorrectly.  The actual message of
this exception, as well as a stack trace and a concise, self-contained
example of code, would allow us to have some idea what this might be.

This is the full text of the exception that I'm getting:

Task pyfarm.scheduler.tasks.poll_agents[3c2f6fa8-fa29-481a-baa1-1e82b6e14b04] 
raised unexpected: InvalidRequestError(This Session's transaction has been 
rolled back due to a previous exception during flush. To begin a new 
transaction with this Session, first issue Session.rollback(). Original 
exception was: (TransactionRollbackError) could not serialize access due to 
concurrent update\n 'UPDATE jobs SET state=%(state)s, 
time_started=%(time_started)s WHERE jobs.id = %(jobs_id)s' {'state': 105, 
'time_started': datetime.datetime(2014, 11, 26, 17, 8, 56, 651872), 'jobs_id': 
2444L},)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/celery/app/trace.py", line 240, in trace_task
    R = retval = fun(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/celery/app/trace.py", line 437, in __protected_call__
    return self.run(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/pyfarm/scheduler/tasks.py", line 345, in poll_agents
    db.session.commit()
  File "/usr/lib/python2.7/site-packages/sqlalchemy/orm/scoping.py", line 149, in do
    return getattr(self.registry(), name)(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/sqlalchemy/orm/session.py", line 768, in commit
    self.transaction.commit()
  File "/usr/lib/python2.7/site-packages/sqlalchemy/orm/session.py", line 368, in commit
    self._assert_active(prepared_ok=True)
  File "/usr/lib/python2.7/site-packages/sqlalchemy/orm/session.py", line 210, in _assert_active
    % self._rollback_exception
InvalidRequestError: This Session's transaction has been rolled back due to a
previous exception during flush. To begin a new transaction with this Session,
first issue Session.rollback(). Original exception was:
(TransactionRollbackError) could not serialize access due to concurrent update
 'UPDATE jobs SET state=%(state)s, time_started=%(time_started)s WHERE jobs.id
= %(jobs_id)s' {'state': 105, 'time_started': datetime.datetime(2014, 11, 26,
17, 8, 56, 651872), 'jobs_id': 2444L}

You're right, there's something going wrong here.

We are using Celery to run some tasks asynchronously (the main application is
a WSGI web app) and Flask-SQLAlchemy to get a db session object.  It turns
out that, in combination, db session objects from Flask-SQLAlchemy get reused
between tasks running on the same celery worker.  This in turn means that if
one task fails a transaction, all later tasks that happen to run on the same
celery worker will end up with a db session that's in an invalid transaction
state.

After I noticed what was going on there, my workaround was to just
unconditionally call rollback() on the db session at the start of every
celery task.  I'm not exactly happy with that solution (it's ugly and
counter-intuitive), but it does work for now.
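
(That workaround, roughly; the signal-based variant is an assumption, not the
actual pyfarm code:)

from celery.signals import task_prerun
from pyfarm.master.application import db  # flask-sqlalchemy handle; path assumed

@task_prerun.connect
def reset_session(*args, **kwargs):
    # The scoped session outlives individual tasks on a worker, so make
    # sure no task starts out inside a previously failed transaction.
    db.session.rollback()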

The original exception looked like this:

Task 
pyfarm.scheduler.tasks.assign_tasks_to_agent[252c5387-948e-4205-9d73-1ac700cebc8e]
 
raised unexpected: DBAPIError('(TransactionRollbackError) could not serialize 
access due to concurrent update\n',)

(Traceback omitted)

This still leaves me confused as to which exception exactly I am supposed to
catch.  It looks like there is some TransactionRollbackError involved
somewhere, but it seems to get repackaged into a DBAPIError before actually
reaching the user...
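
(For the record, a sketch of unwrapping it: SQLAlchemy keeps the original
driver exception on .orig, so the psycopg2 class can be checked explicitly.
psycopg2-specific by construction:)

from psycopg2.extensions import TransactionRollbackError
from sqlalchemy.exc import DBAPIError

try:
    db.session.commit()
except DBAPIError as exc:
    db.session.rollback()
    if isinstance(exc.orig, TransactionRollbackError):
        pass  # a serialization failure: safe to retry the unit of work
    else:
        raise  # a genuinely invalid request: don't retry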

[...]
 After some experimentation and research, it turned out that the root cause
 here was that PostgreSQL's default transaction isolation level of "read
 committed" was just not enough in my case, and I would need to increase it
 to "serializable".

That is also suspect.  Serializable isolation is very tough to use due to the
high degree of locking, and it suggests there is some level of concurrency
here that is probably better accommodated using either a correctly
implemented optimistic approach (e.g. the retry) or, if UPDATEs are part of
the issue, SELECT..FOR UPDATE, which can often help to lock rows ahead of
time when updates will be needed.
[...]
feel free to share more specifics as long as they are concise and
self-contained.

Well, as I said in my previous mail, I have a model Job and a model Task with 
a one-to-many relation between jobs and tasks, i.e. every task belongs
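
(The message is truncated in the archive.  For reference, the SELECT..FOR
UPDATE approach suggested above could look roughly like this; the model names
follow the thread, everything else is assumed:)

DONE, FAILED = 106, 107  # state constants as seen elsewhere in the thread

def finish_task(task_id):
    task = db.session.query(Task).filter_by(id=task_id).one()
    # Lock the parent job row so concurrent finishers serialize here
    # instead of failing at commit time.
    job = (db.session.query(Job)
           .filter_by(id=task.job_id)
           .with_for_update()
           .one())
    task.state = DONE
    if all(t.state in (DONE, FAILED) for t in job.tasks):
        pass  # do "x" for the job here
    db.session.commit()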

[sqlalchemy] How to retry failed transactions?

2014-11-28 Thread Guido Winkelmann
Hi,

What's the best strategy for retrying failed transactions when using 
SQLAlchemy?

Sometimes, transactions fail on commit through no fault of their own,
simply because another concurrent but conflicting transaction finished
slightly before them.  The commonly accepted best practice seems to be that
you should just retry your transaction – including all client-side logic –
in such cases until it succeeds (or until you run into some timeout).
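
(That retry pattern, as a sketch; the helper name and the backoff-free policy
are invented, and the exception check is psycopg2-specific:)

from psycopg2.extensions import TransactionRollbackError
from sqlalchemy.exc import DBAPIError

def run_with_retries(session, work, max_attempts=5):
    for attempt in range(max_attempts):
        try:
            work(session)       # all client-side logic re-runs on retry
            session.commit()
            return
        except DBAPIError as exc:
            session.rollback()
            if not isinstance(exc.orig, TransactionRollbackError):
                raise           # a genuinely invalid request: don't retry
    raise RuntimeError("transaction failed %d times" % max_attempts)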

The problem is, in SQLAlchemy, at least with a PostgreSQL+psycopg2 backend, 
the exception I get when that happens is InvalidRequestError. This is not 
very helpful, as I cannot tell the difference between a genuinely invalid 
request and a mere transaction failure like this. I also don't know if the 
exception will be different on different backends.

The background here is that, lately, I have observed the application I am 
working on (https://github.com/pyfarm) sometimes doing the wrong thing when 
concurrent requests are involved.
(Example: A job has zero or more tasks.  When a request comes in to update a
task's state to "done" or "failed", I need to update the task and then
check whether the job it belongs to still has any tasks left that are not
"failed" or "done", and do x if it doesn't.  I have lately spotted some jobs
that have no more active tasks left, but still x was never done for them.)

After some experimentation and research, it turned out that the root cause
here was that PostgreSQL's default transaction isolation level of "read
committed" was just not enough in my case, and I would need to increase it
to "serializable".  Doing that, however, led to a rude awakening for me
regarding the handling of failed transactions.  They suddenly turned from
something mostly theoretical, which I knew about but could safely ignore in
practice almost all of the time, into a very real thing that happens a lot
in practice and that I need to deal with somehow, because otherwise my
application pretty much ceases to work for all practical purposes.
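
(For reference, one documented way to raise the isolation level engine-wide;
the URL is a placeholder:)

from sqlalchemy import create_engine

engine = create_engine(
    "postgresql+psycopg2://user:pw@localhost/pyfarm",
    isolation_level="SERIALIZABLE",
)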

Regards,

  Guido W.



Re: [sqlalchemy] Compound unique constraint with a nullable column

2014-11-18 Thread Guido Winkelmann
On Monday 17 November 2014 14:44:02 Michael Bayer wrote:
 On Nov 17, 2014, at 1:55 PM, Guido Winkelmann
 gu...@ambient-entertainment.de wrote:
 On Monday, 17 November 2014 at 16:25:54 UTC+1, Michael Bayer wrote:
 On Nov 17, 2014, at 7:37 AM, Guido Winkelmann
 gu...@ambient-entertainment.de wrote:
 
 Hi,
 
 How can I go about having a cross-platform compatible compound unique
 constraint over two columns where there can be only one record with NULL
 in one column and a given value in the other?
 
 I want something like this:
 
 UniqueConstraint('parent_id', 'name')
 
 except parent_id is nullable, and I want to disallow creating multiple
 records with parent_id=NULL and the same name. The problem here is that
 some database management systems, PostgreSQL for example, will treat all
 NULL values as not-equal inside a unique constraint, which makes sense
 for a single column unique, but not really for a multi-column one. This
 will allow multiple records with the same name and parent_id=NULL again.
 
 In PostgreSQL, apparently I would need to create a unique index in
 addition to the constraint. I would really like to solve this without
 writing database-specific code, though.
 
 Does anybody have a good solution for that?
 
 I’ve had an issue in this area before and decided just to forego it; I was
 targeting PG and SQL Server, but I don’t remember exactly which NULL
 behavior I was looking for.  Per the SQL standard, I think PG’s behavior
 is correct: NULL means “unknown”, so two NULL values are never
 equivalent.
  If I wanted to add a manual check in Python for that, how would I go about
  that? 
 Could I register an event handler for the before_insert event for this
 model?
 
 What would the event handler have to do to refuse an insert? Throw an
 exception?
You could use a before_insert event, though that’s expensive, as you’d need
to query the whole table to check for dupes for every row.

That's not a big deal in this case. This particular table is never going to be 
very big or very busy. It probably won't ever have more than fifty records.

I think organizing the kinds of indexes/constraints that apply to different
backends would be the best approach here.

Backend-specific code is exactly what I was trying to avoid here.  I might
come back to that if similar problems come up in more performance-sensitive
parts, but for now I think I will just stick with the before_insert event
handler:

https://github.com/pyfarm/pyfarm-master/pull/271/files
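
(A handler along those lines, as a sketch rather than the actual PR code,
with the model name invented; raising any exception from before_insert aborts
the flush:)

from sqlalchemy import event

@event.listens_for(MyModel, "before_insert")
def check_unique_with_null_parent(mapper, connection, target):
    if target.parent_id is not None:
        return  # the regular unique constraint covers this case
    table = MyModel.__table__
    existing = connection.execute(
        table.select().where(
            table.c.parent_id.is_(None) & (table.c.name == target.name))
    ).first()
    if existing is not None:
        raise ValueError(
            "a row named %r with parent_id=NULL already exists" % target.name)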

Regards,

Guido W.



[sqlalchemy] Compound unique constraint with a nullable column

2014-11-17 Thread Guido Winkelmann
Hi,

How can I go about having a cross-platform compatible compound unique 
constraint over two columns where there can be only one record with NULL in 
one column and a given value in the other?

I want something like this:

UniqueConstraint('parent_id', 'name')

except parent_id is nullable, and I want to disallow creating multiple 
records with parent_id=NULL and the same name. The problem here is that 
some database management systems, PostgreSQL for example, will treat all 
NULL values as not-equal inside a unique constraint, which makes sense for 
a single column unique, but not really for a multi-column one. This will 
allow multiple records with the same name and parent_id=NULL again.

In PostgreSQL, apparently I would need to create a unique index in addition 
to the constraint. I would really like to solve this without writing 
database-specific code, though.

Does anybody have a good solution for that?
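
(For later readers: the PostgreSQL-specific route mentioned above would look
roughly like this; my_table is a hypothetical Table, and other dialects
ignore postgresql_where, so this part stays PG-only:)

from sqlalchemy import Index

# Partial unique index: at most one row per name among the rows whose
# parent_id IS NULL; the plain UniqueConstraint covers the rest.
Index(
    "uq_name_null_parent",
    my_table.c.name,
    unique=True,
    postgresql_where=my_table.c.parent_id.is_(None),
)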

  Guido



Re: [sqlalchemy] Compound unique constraint with a nullable column

2014-11-17 Thread Guido Winkelmann

On Monday, 17 November 2014 at 16:25:54 UTC+1, Michael Bayer wrote:


 On Nov 17, 2014, at 7:37 AM, Guido Winkelmann
 gu...@ambient-entertainment.de wrote:

 Hi,

 How can I go about having a cross-platform compatible compound unique 
 constraint over two columns where there can be only one record with NULL in 
 one column and a given value in the other?

 I want something like this:

 UniqueConstraint('parent_id', 'name')

 except parent_id is nullable, and I want to disallow creating multiple 
 records with parent_id=NULL and the same name. The problem here is that 
 some database management systems, PostgreSQL for example, will treat all 
 NULL values as not-equal inside a unique constraint, which makes sense for 
 a single column unique, but not really for a multi-column one. This will 
 allow multiple records with the same name and parent_id=NULL again.

 In PostgreSQL, apparently I would need to create a unique index in 
 addition to the constraint. I would really like to solve this without 
 writing database-specific code, though.

 Does anybody have a good solution for that?


 I’ve had an issue in this area before and decided just to forego it; I was
 targeting PG and SQL Server, but I don’t remember exactly which NULL
 behavior I was looking for.  Per the SQL standard, I think PG’s behavior is
 correct: NULL means “unknown”, so two NULL values are never equivalent.


 If I wanted to add a manual check in Python for that, how would I go about 
that?

Could I register an event handler for the before_insert event for this 
model?

What would the event handler have to do to refuse an insert? Throw an 
exception?


   Guido W.



[sqlalchemy] Subquery giving single values in the SELECT part of a query

2014-10-22 Thread Guido Winkelmann
Hi,

How can I put a subquery that yields exactly one value inside the select part
of the query without joining the subquery like another table?

I have a table jobs and a table tasks.  Every job has an arbitrary number
(zero or more) of tasks.  Tasks can be in one of four states: "queued",
"running", "done" and "failed".  When querying the jobs table, I want to get
the counts of queued, running, done and failed tasks for every job, and I
want to be able to sort the output by any of those counts.  The SQL I want
would look like this:

===
SELECT
  jobs_1.id AS jobs_1_id,
  more columns from jobs
  (SELECT count(*) FROM tasks t WHERE t.job_id=jobs_1.id AND t.state IS NULL) 
AS t_queued,
  (SELECT count(*) FROM tasks t WHERE t.job_id=jobs_1.id AND t.state=105) AS 
t_running,
  (SELECT count(*) FROM tasks t WHERE t.job_id=jobs_1.id AND t.state=106) AS 
t_done,
  (SELECT count(*) FROM tasks t WHERE t.job_id=jobs_1.id AND t.state=107) AS 
t_failed
FROM
  jobs AS jobs_1
ORDER BY
  t_running DESC;
==

What I have so far in SQLAlchemy looks like this:

==
class Job(Model):
    # stuff
    tasks_queued = relationship("Task", lazy="dynamic",
        primaryjoin="(Task.state == None) & (Task.job_id == Job.id)")
    tasks_running = relationship("Task", lazy="dynamic",
        primaryjoin="(Task.state == %s) & (Task.job_id == Job.id)" % DBWorkState.RUNNING)
    tasks_done = relationship("Task", lazy="dynamic",
        primaryjoin="(Task.state == %s) & (Task.job_id == Job.id)" % DBWorkState.DONE)
    tasks_failed = db.relationship("Task", lazy="dynamic",
        primaryjoin="(Task.state == %s) & (Task.job_id == Job.id)" % DBWorkState.FAILED)

a_q = aliased(Task)
a_r = aliased(Task)
a_d = aliased(Task)
a_f = aliased(Task)
a_job = aliased(Job)

jobs_query = db.session.query(a_job,
        func.count(a_q.id.distinct()).label("t_queued"),
        func.count(a_r.id.distinct()).label("t_running"),
        func.count(a_d.id.distinct()).label("t_done"),
        func.count(a_f.id.distinct()).label("t_failed")).\
    outerjoin(a_q, a_job.tasks_queued).\
    outerjoin(a_r, a_job.tasks_running).\
    outerjoin(a_d, a_job.tasks_done).\
    outerjoin(a_f, a_job.tasks_failed).\
    group_by(a_job)
==

(The code in question is part of https://github.com/pyfarm/pyfarm-master)

This does not do quite what I want though. It produces this SQL query:

==
SELECT 
  jobs_1.id AS jobs_1_id,
  more columns from jobs
  count(DISTINCT tasks_1.id) AS t_queued,
  count(DISTINCT tasks_2.id) AS t_running,
  count(DISTINCT tasks_3.id) AS t_done,
  count(DISTINCT tasks_4.id) AS t_failed 
FROM
  jobs AS jobs_1
  LEFT OUTER JOIN tasks AS tasks_1 ON tasks_1.state IS NULL AND tasks_1.job_id 
= jobs_1.id
  LEFT OUTER JOIN tasks AS tasks_2 ON tasks_2.state = 105 AND tasks_2.job_id = 
jobs_1.id
  LEFT OUTER JOIN tasks AS tasks_3 ON tasks_3.state = 106 AND tasks_3.job_id = 
jobs_1.id
  LEFT OUTER JOIN tasks AS tasks_4 ON tasks_4.state = 107 AND tasks_4.job_id = 
jobs_1.id 
WHERE
  jobs_1.state = 105
GROUP BY
  jobs_1.id,
  more columns from jobs
ORDER BY t_running DESC
==

It works, but it causes horrible performance issues in PostgreSQL, to the
point of being unusable.

How can I make SQLAlchemy produce a query that's more like the first SQL query?
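
(One way to get the correlated scalar subqueries, as a sketch; names and
state constants follow the thread, the helper is invented:)

from sqlalchemy import desc, func, select

def task_count(state):
    # One correlated scalar subquery per state, rendered inside the
    # SELECT list instead of being joined like a table.
    criterion = Task.state.is_(None) if state is None else Task.state == state
    return (select([func.count(Task.id)])
            .where((Task.job_id == Job.id) & criterion)
            .correlate(Job.__table__)
            .as_scalar())

jobs_query = (db.session.query(
        Job,
        task_count(None).label("t_queued"),
        task_count(105).label("t_running"),
        task_count(106).label("t_done"),
        task_count(107).label("t_failed"))
    .order_by(desc("t_running")))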

Guido



[sqlalchemy] How to tell the cardinality of RelationshipProperty?

2014-02-13 Thread Guido Winkelmann
Hi,

Reading through the docs, I could not find out how to tell the cardinality 
of an object of type sqlalchemy.orm.properties.RelationshipProperty, i.e. 
whether it is one-to-many, many-to-one or many-to-many.  Mostly, I just 
need to know whether a relationship refers to just one object or a 
collection of objects.

Can someone help me with that?

 Guido W.



Re: [sqlalchemy] How to tell the cardinality of RelationshipProperty?

2014-02-13 Thread Guido Winkelmann


On Thursday, 13 February 2014 at 15:46:31 UTC+1, Michael Bayer wrote:


 On Feb 13, 2014, at 6:23 AM, Guido Winkelmann
 gu...@ambient-entertainment.de wrote:

  Hi, 
  
  Reading through the docs, I could not find out how to tell the 
 cardinality of an object of type 
 sqlalchemy.orm.properties.RelationshipProperty, i.e. whether it is 
 one-to-many, many-to-one or many-to-many.  Mostly, I just need to know 
 whether a relationship refers to just one object or a collection of 
 objects. 
  
  Can someone help me with that? 

 So given a class: 

 MyClass 

 and a relationship at the class level: 

 MyClass.some_relationship 

 you get at the RelationshipProperty like this: 

 MyClass.some_relationship.property 


That doesn't work for me.

For some reason, the relationship has the type
sqlalchemy.orm.dynamic.AppenderBaseQuery, which has no attribute 'property'.

Maybe that's because I'm using flask-sqlalchemy instead of just sqlalchemy 
directly?



Re: [sqlalchemy] How to tell the cardinality of RelationshipProperty?

2014-02-13 Thread Guido Winkelmann


On Thursday, 13 February 2014 at 18:01:27 UTC+1, Michael Bayer wrote:


 On Feb 13, 2014, at 11:46 AM, Guido Winkelmann
 gu...@ambient-entertainment.de wrote:



 On Thursday, 13 February 2014 at 15:46:31 UTC+1, Michael Bayer wrote:


 On Feb 13, 2014, at 6:23 AM, Guido Winkelmann 
 gu...@ambient-entertainment.de wrote: 

  Hi, 
  
  Reading through the docs, I could not find out how to tell the 
 cardinality of an object of type 
 sqlalchemy.orm.properties.RelationshipProperty, i.e. whether it is 
 one-to-many, many-to-one or many-to-many.  Mostly, I just need to know 
 whether a relationship refers to just one object or a collection of 
 objects. 
  
  Can someone help me with that? 

 So given a class: 

 MyClass 

 and a relationship at the class level: 

 MyClass.some_relationship 

 you get at the RelationshipProperty like this: 

 MyClass.some_relationship.property 


 That doesn't work for me.

 For some reason, the relationship has the type 
 sqlalchemy.orm.dynamic.AppenderBaseQuery, which has no attribute property.

 Maybe that's because I'm using flask-sqlalchemy instead of just sqlalchemy 
 directly?


 “MyClass” is a class, not an object:

 from sqlalchemy import *
 from sqlalchemy.orm import *
 from sqlalchemy.ext.declarative import declarative_base

 Base = declarative_base()

 class A(Base):
     __tablename__ = 'a'

     id = Column(Integer, primary_key=True)
     bs = relationship("B", lazy="dynamic")


 class B(Base):
     __tablename__ = 'b'

     id = Column(Integer, primary_key=True)
     a_id = Column(Integer, ForeignKey('a.id'))

 configure_mappers()

 print A.bs.property


Thanks, that helped!
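
(To then read the cardinality off the property: direction and uselist answer
it directly; a short sketch:)

from sqlalchemy.orm.interfaces import ONETOMANY, MANYTOONE, MANYTOMANY

prop = A.bs.property
print prop.direction  # one of ONETOMANY, MANYTOONE, MANYTOMANY
print prop.uselist    # True when the relationship holds a collection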

   Guido W.
