Re: [sqlalchemy] mapper on a temporary class

2021-01-27 Thread Kent Bower
Excellent.  As always, thanks very much for your time and answers (let
alone awesome software)!


On Wed, Jan 27, 2021 at 1:25 PM Mike Bayer  wrote:

>
>
> On Tue, Jan 26, 2021, at 9:01 PM, Kent Bower wrote:
>
> Thanks a ton for your responses.
>
> Do all the normal columns of an aliased class need to match the ad-hoc
> select to which I map the alias?
>
>
> handwavy handwavy sort of yes, sort of no? there needs to be a 1-1
> correspondence of columns, yes; AliasedClass can't define additional
> attributes that aren't there normally.  as to how they are lined up, there's
> an option to do it by name in the aliased() function, adapt_on_names.
> otherwise it's expected that your select() is derived from the Table
> objects that are originally mapped.
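A minimal sketch of that by-name lineup (hypothetical `User` model, not from the thread; assumes SQLAlchemy 1.4+):

```python
from sqlalchemy import Column, Integer, String, select
from sqlalchemy.orm import aliased, declarative_base

Base = declarative_base()

class User(Base):
    __tablename__ = "user"
    id = Column(Integer, primary_key=True)
    name = Column(String)

# An ad-hoc select whose columns correspond 1-1 with User's, by name.
subq = select(User.id, User.name).subquery()

# adapt_on_names=True lines the columns up by name, rather than by
# derivation from the originally mapped Table.
ua = aliased(User, subq, adapt_on_names=True)
```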
>
>
>
> > oh if the class doesn't have a mapper, then definitely, just make
> ad-hoc subclasses of it and map to those.  vastly easier that way.
>
> Mapping to a subclass would be different from mapping directly to an
> ad-hoc class?
>
>
> if the class itself is ad-hoc, then you're fine, you can make new classes
> with mappers all you want and they are GCed if you lose them (assuming no
> relationships involving long-lived classes with backrefs), yes.
>
>
>
>
>
>
>  (Mostly I’m concerned whether the very act of adding a mapper to a class
> will keep the reference and prevent its garbage collection or in some other
> way modify the “main” mappers, especially if this is done via a thread.
> Like, would that modify the compiled mappers for the entire process...
> these are the things running through my head.)
>
>
> the configure_mappers() function these days is threadsafe, it uses a
> mutex, so you are "good" there but there's a mutex involved if you are
> looking for high levels of concurrency.   if your ad-hoc classes are not
> subclasses of any long lived classes and if you are careful to limit
> relationships to only point to long lived classes and not have any
> backrefs, it should be OK.  I'd test it though :)  set up a weakref.ref()
> to your ad-hoc mapper objects and make sure those ref objects get their
> callback hook invoked.
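That check can be sketched with the stdlib alone (a plain class stands in for the ad-hoc mapped class; the assumption is that nothing else holds a strong reference to it):

```python
import gc
import weakref

class AdHoc:  # stand-in for a temporarily mapped, ad-hoc class
    pass

collected = []
# the callback fires when the referent is garbage collected
ref = weakref.ref(AdHoc, lambda r: collected.append(r))

del AdHoc     # drop the only strong reference
gc.collect()  # classes sit in reference cycles; collect explicitly

assert ref() is None  # the class was garbage collected
assert collected      # and the weakref callback hook was invoked
```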
>
>
>
>
>
>
> On Tue, Jan 26, 2021 at 8:18 PM Mike Bayer 
> wrote:
>
>
>
>
> On Tue, Jan 26, 2021, at 7:31 PM, Kent Bower wrote:
>
> I should have given these details from the get-go:  the use case is a
> specialized select() (dynamically built) which would be extremely
> convenient to map relationships against for convenience in subquery
> loading, etc. So, the class would not already have a mapper.  Can I pass
> non_primary=True anyway, or won’t this work?
>
>
> oh if the class doesn't have a mapper, then definitely, just make ad-hoc
> subclasses of it and map to those.  vastly easier that way.
>
>
>
>
>
>
> On Tue, Jan 26, 2021 at 6:18 PM Mike Bayer 
> wrote:
>
>
>
>
> On Tue, Jan 26, 2021, at 3:16 PM, Kent wrote:
>
> Question: if I add a mapper to a class that is only needed temporarily,
> does using the mapper compile it alongside my "normal" mappers such that
> I'll leak memory when I mean for the class to be garbage collected?
>
> Put another way, can I add a mapper to a class that doesn't influence my
> "main mappers" and gets thrown away when the class is thrown away or is
> that not possible?
>
>
> Assuming the class already has a mapper, and this is an additional mapper
> that maps the class in a different way, this feature is called a
> "non-primary mapper" and requires that you pass the non_primary=True flag.
>   If you are using non-primary mappers, then yes they get garbage collected.
>
> This is also a deprecated feature that won't be in SQLAlchemy 2.0.  The
> only use case we can identify for "non-primary mappers" is when you want to
> create a relationship() to one, and for that use case we now support the
> "Relationship to AliasedClass" pattern.
>
> For anything else involving retrieving a class from an alternate
> selectable of some kind, you should be able to use AliasedClass, e.g. via
> sqlalchemy.orm.aliased(), so I would recommend using that instead unless
> you are on a very old version of SQLAlchemy.
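A sketch of that recommendation end to end (hypothetical model and in-memory SQLite; 1.4-style API):

```python
from sqlalchemy import Column, Integer, String, create_engine, select
from sqlalchemy.orm import Session, aliased, declarative_base

Base = declarative_base()

class User(Base):
    __tablename__ = "user"
    id = Column(Integer, primary_key=True)
    name = Column(String)

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)

with Session(engine) as session:
    session.add_all([User(name="alice"), User(name="bob")])
    session.commit()

    # An alternate selectable: any ad-hoc subquery with matching columns.
    subq = select(User).where(User.name == "alice").subquery()
    ua = aliased(User, subq)

    # Rows come back as User entities, loaded from the subquery.
    names = [u.name for u in session.execute(select(ua)).scalars()]

print(names)  # ['alice']
```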
>
>
>
>
>
>
>
>
> Thanks in advance,
> Kent
>
>
> --
> SQLAlchemy -
> The Python SQL Toolkit and Object Relational Mapper
>
> http://www.sqlalchemy.org/
>
> To post example code, please provide an MCVE: Minimal, Complete, and
> Verifiable Example. See http://stackoverflow.com/help/mcve for a full
> description.
> ---
> You received this message because you are subscribed to the Google Groups
> "sqlalchemy" group.
> To unsubscribe 

Re: [sqlalchemy] mapper on a temporary class

2021-01-26 Thread Kent Bower
Thanks a ton for your responses.

Do all the normal columns of an aliased class need to match the ad-hoc
select to which I map the alias?

> oh if the class doesn't have a mapper, then definitely, just make ad-hoc
subclasses of it and map to those.  vastly easier that way.

Mapping to a subclass would be different from mapping directly to an ad-hoc
class?  (Mostly I’m concerned whether the very act of adding a mapper to a
class will keep the reference and prevent its garbage collection or in some
other way modify the “main” mappers, especially if this is done via a
thread. Like, would that modify the compiled mappers for the entire
process... these are the things running through my head.)


On Tue, Jan 26, 2021 at 8:18 PM Mike Bayer  wrote:

>
>
> On Tue, Jan 26, 2021, at 7:31 PM, Kent Bower wrote:
>
> I should have given these details from the get-go:  the use case is a
> specialized select() (dynamically built) which would be extremely
> convenient to map relationships against for convenience in subquery
> loading, etc. So, the class would not already have a mapper.  Can I pass
> non_primary=True anyway, or won’t this work?
>
>
> oh if the class doesn't have a mapper, then definitely, just make ad-hoc
> subclasses of it and map to those.  vastly easier that way.
>
>
>
>
>
> On Tue, Jan 26, 2021 at 6:18 PM Mike Bayer 
> wrote:
>
>
>
>
> On Tue, Jan 26, 2021, at 3:16 PM, Kent wrote:
>
> Question: if I add a mapper to a class that is only needed temporarily,
> does using the mapper compile it alongside my "normal" mappers such that
> I'll leak memory when I mean for the class to be garbage collected?
>
> Put another way, can I add a mapper to a class that doesn't influence my
> "main mappers" and gets thrown away when the class is thrown away or is
> that not possible?
>
>
> Assuming the class already has a mapper, and this is an additional mapper
> that maps the class in a different way, this feature is called a
> "non-primary mapper" and requires that you pass the non_primary=True flag.
>   If you are using non-primary mappers, then yes they get garbage collected.
>
> This is also a deprecated feature that won't be in SQLAlchemy 2.0.  The
> only use case we can identify for "non-primary mappers" is when you want to
> create a relationship() to one, and for that use case we now support the
> "Relationship to AliasedClass" pattern.
>
> For anything else involving retrieving a class from an alternate
> selectable of some kind, you should be able to use AliasedClass, e.g. via
> sqlalchemy.orm.aliased(), so I would recommend using that instead unless
> you are on a very old version of SQLAlchemy.
>
>
>
>
>
>
>
>
> Thanks in advance,
> Kent
>
>

Re: [sqlalchemy] mapper on a temporary class

2021-01-26 Thread Kent Bower
I should have given these details from the get-go:  the use case is a
specialized select() (dynamically built) which would be extremely
convenient to map relationships against for convenience in subquery
loading, etc. So, the class would not already have a mapper.  Can I pass
non_primary=True anyway, or won’t this work?


On Tue, Jan 26, 2021 at 6:18 PM Mike Bayer  wrote:

>
>
> On Tue, Jan 26, 2021, at 3:16 PM, Kent wrote:
>
> Question: if I add a mapper to a class that is only needed temporarily,
> does using the mapper compile it alongside my "normal" mappers such that
> I'll leak memory when I mean for the class to be garbage collected?
>
> Put another way, can I add a mapper to a class that doesn't influence my
> "main mappers" and gets thrown away when the class is thrown away or is
> that not possible?
>
>
> Assuming the class already has a mapper, and this is an additional mapper
> that maps the class in a different way, this feature is called a
> "non-primary mapper" and requires that you pass the non_primary=True flag.
>   If you are using non-primary mappers, then yes they get garbage collected.
>
> This is also a deprecated feature that won't be in SQLAlchemy 2.0.  The
> only use case we can identify for "non-primary mappers" is when you want to
> create a relationship() to one, and for that use case we now support the
> "Relationship to AliasedClass" pattern.
>
> For anything else involving retrieving a class from an alternate
> selectable of some kind, you should be able to use AliasedClass, e.g. via
> sqlalchemy.orm.aliased(), so I would recommend using that instead unless
> you are on a very old version of SQLAlchemy.
>
>
>
>
>
>
>
>
> Thanks in advance,
> Kent
>
>


[sqlalchemy] mapper on a temporary class

2021-01-26 Thread Kent
Question: if I add a mapper to a class that is only needed temporarily, 
does using the mapper compile it along side my "normal" mappers such that 
I'll leak memory when I mean for the class to be garbage collected?

Put another way, can I add a mapper to a class that doesn't influence my 
"main mappers" and gets thrown away when the class is thrown away or is 
that not possible?

Thanks in advance,
Kent



Re: [sqlalchemy] connection close() questions

2020-05-14 Thread Kent Bower
Returned to pool in rolled back state now, thanks.

However, the script I sent in this post now hits the "SAWarning: Reset
agent is not active.  This should not occur unless there was already a
connectivity error in progress." on the conn.close() call.

Did you expect that because my usage pattern is "illegal" so to speak?
(The Warning isn't quite accurate regarding "... unless there was already a
connectivity error in progress ")


On Wed, May 13, 2020 at 1:21 PM Mike Bayer  wrote:

> this is getting released today in any case so, just look for any more
> warnings or conditions like this.  the most important part is getting the
> test coverage in so as I refactor for 1.4 / 2.0 the behavioral contract is
> maintained.  thanks!
>
>
>
> On Wed, May 13, 2020, at 1:16 PM, Kent Bower wrote:
>
> Very good, will do when I find time.
>
> Thank you!
>
>
> On Wed, May 13, 2020 at 1:07 PM Mike Bayer 
> wrote:
>
>
> feel free to test the patch at:
>
> https://gerrit.sqlalchemy.org/c/sqlalchemy/sqlalchemy/+/1965
>
> this patch includes a change so that if the transaction state at the engine
> level gets screwed up, the pool will warn and still make sure it does a real
> rollback.  you should not see this warning, however.
>
> in 2.0, the whole "reset" logic is simplified so that none of this
> complexity will be there.
>
>
>
> On Wed, May 13, 2020, at 11:59 AM, Kent Bower wrote:
>
> LOL, you're welcome, I'm such a great tester, aren't I?
>
> Anyway, everything after 0.9.2 behaved this way.  0.9.1 did a rollback.
>
> On Wed, May 13, 2020 at 11:54 AM Mike Bayer 
> wrote:
>
>
> nevermind, you've managed to find a case that trips it up for the
> connection pool
>
> release today
>
>
>
> On Wed, May 13, 2020, at 11:51 AM, Mike Bayer wrote:
>
>
>
> On Wed, May 13, 2020, at 11:39 AM, Kent Bower wrote:
>
> In this script, conn.close() does *not* call rollback on the
> transaction.  It isn't just a logging issue as I've verified from the
> database that the session was not rolled back.
>
>
> I can confirm that in master only where things have changed
> dramatically.   Should not be the case for any released version, please
> confirm
>
>
>
> On Wed, May 13, 2020 at 11:31 AM Mike Bayer 
> wrote:
>
>
> Haven't looked deeply but so far what you need to know is that
> conn.close() *ALWAYS* rolls back the transaction, just not at the Engine
> level, it's at the connection pool level so you won't see it when logging /
> event hooking on the Engine.  turn on echo_pool and you will see this, in
> modern versions:
>
> 2020-05-13 11:25:45,106 DEBUG sqlalchemy.pool.impl.QueuePool Connection
> <connection object at 0x...; dsn: 'dbname=test host=localhost', closed: 0> being returned to pool
> 2020-05-13 11:25:45,107 DEBUG sqlalchemy.pool.impl.QueuePool Connection
> <connection object at 0x...; dsn: 'dbname=test host=localhost', closed: 0> rollback-on-return, via agent
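Turning that pool logging on is a single flag at engine creation (a sketch; in-memory SQLite stands in for the Postgres connection shown above):

```python
import logging

from sqlalchemy import create_engine

logging.basicConfig()  # make sure the pool's DEBUG records are emitted

# echo_pool="debug" logs checkouts, returns, and rollback-on-return.
engine = create_engine("sqlite://", echo_pool="debug")

conn = engine.connect()
conn.close()  # the pool logs the return / rollback-on-return here
```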
>
> your DBSession is not going to close the connection because you have it
> bound directly to that connection, rather than to the engine, so it assumes
> it is participating in a larger transaction.   1.4 does amend this behavior
> to be more clear cut as we are doing away with the "nested" behaviors of
> Connection.  So yes I would not be relying upon DBSession.close() as a
> means of transaction control if the session is bound to a connection
> directly.  If the session is bound to a connection I would advise ensuring
> that connection is in a transaction on the outside that you are managing.
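A sketch of managing that outer transaction yourself when the Session is bound directly to a Connection (hypothetical setup, in-memory SQLite):

```python
from sqlalchemy import create_engine
from sqlalchemy.orm import Session

engine = create_engine("sqlite://")

conn = engine.connect()
outer = conn.begin()        # the transaction you own, on the outside
try:
    session = Session(bind=conn)
    # ... ORM work here participates in the outer transaction ...
    session.close()         # releases ORM state; does not end `outer`
    outer.rollback()        # you decide the real COMMIT / ROLLBACK
finally:
    conn.close()
```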
>
>
>
>
>
>
>
> On Wed, May 13, 2020, at 10:51 AM, Kent wrote:
>
> Mike, et al.,
>
> I've got some questions about closing connections.  I suspect my framework
> may be at fault, but there is potentially a sqlalchemy issue here as well.
>
> See attached script with nested transaction and explicit
> connection.close().
>
> Things are even more complex because versions have handled this
> differently in the past:
>
>- on rel_0_9_1 and earlier, the conn.close() *always* actually emitted
>a DBAPI ROLLBACK, but on rel_0_9_2+, the previous call to .begin_nested()
>now prevents the DBAPI ROLLBACK call, even though the close() is on the
>connection itself.  I'm not sure if that was an intended change, but it
>seems .close() on a connection should always cause ROLLBACK, no?
>- on rel_1_3_9 and earlier, this code
>raises sqlalchemy.exc.ResourceClosedError on the last DBSession.close() as
>it invokes the registered 'rollback' event with an already-closed
>connection, but on current master (1.4.0b1) there is no exception since a
>rollback isn't attempted, leaving the db connection in idle transaction.
>
>
> On all vers

Re: [sqlalchemy] connection close() questions

2020-05-13 Thread Kent Bower
Very good, will do when I find time.

Thank you!


On Wed, May 13, 2020 at 1:07 PM Mike Bayer  wrote:

> feel free to test the patch at:
>
> https://gerrit.sqlalchemy.org/c/sqlalchemy/sqlalchemy/+/1965
>
> this patch includes a change so that if the transaction state at the engine
> level gets screwed up, the pool will warn and still make sure it does a real
> rollback.  you should not see this warning, however.
>
> in 2.0, the whole "reset" logic is simplified so that none of this
> complexity will be there.
>
>
>
> On Wed, May 13, 2020, at 11:59 AM, Kent Bower wrote:
>
> LOL, you're welcome, I'm such a great tester, aren't I?
>
> Anyway, everything after 0.9.2 behaved this way.  0.9.1 did a rollback.
>
> On Wed, May 13, 2020 at 11:54 AM Mike Bayer 
> wrote:
>
>
> nevermind, you've managed to find a case that trips it up for the
> connection pool
>
> release today
>
>
>
> On Wed, May 13, 2020, at 11:51 AM, Mike Bayer wrote:
>
>
>
> On Wed, May 13, 2020, at 11:39 AM, Kent Bower wrote:
>
> In this script, conn.close() does *not* call rollback on the
> transaction.  It isn't just a logging issue as I've verified from the
> database that the session was not rolled back.
>
>
> I can confirm that in master only where things have changed
> dramatically.   Should not be the case for any released version, please
> confirm
>
>
>
> On Wed, May 13, 2020 at 11:31 AM Mike Bayer 
> wrote:
>
>
> Haven't looked deeply but so far what you need to know is that
> conn.close() *ALWAYS* rolls back the transaction, just not at the Engine
> level, it's at the connection pool level so you won't see it when logging /
> event hooking on the Engine.  turn on echo_pool and you will see this, in
> modern versions:
>
> 2020-05-13 11:25:45,106 DEBUG sqlalchemy.pool.impl.QueuePool Connection
> <connection object at 0x...; dsn: 'dbname=test host=localhost', closed: 0> being returned to pool
> 2020-05-13 11:25:45,107 DEBUG sqlalchemy.pool.impl.QueuePool Connection
> <connection object at 0x...; dsn: 'dbname=test host=localhost', closed: 0> rollback-on-return, via agent
>
> your DBSession is not going to close the connection because you have it
> bound directly to that connection, rather than to the engine, so it assumes
> it is participating in a larger transaction.   1.4 does amend this behavior
> to be more clear cut as we are doing away with the "nested" behaviors of
> Connection.  So yes I would not be relying upon DBSession.close() as a
> means of transaction control if the session is bound to a connection
> directly.  If the session is bound to a connection I would advise ensuring
> that connection is in a transaction on the outside that you are managing.
>
>
>
>
>
>
>
> On Wed, May 13, 2020, at 10:51 AM, Kent wrote:
>
> Mike, et al.,
>
> I've got some questions about closing connections.  I suspect my framework
> may be at fault, but there is potentially a sqlalchemy issue here as well.
>
> See attached script with nested transaction and explicit
> connection.close().
>
> Things are even more complex because versions have handled this
> differently in the past:
>
>- on rel_0_9_1 and earlier, the conn.close() *always* actually emitted
>a DBAPI ROLLBACK, but on rel_0_9_2+, the previous call to .begin_nested()
>now prevents the DBAPI ROLLBACK call, even though the close() is on the
>connection itself.  I'm not sure if that was an intended change, but it
>seems .close() on a connection should always cause ROLLBACK, no?
>- on rel_1_3_9 and earlier, this code
>raises sqlalchemy.exc.ResourceClosedError on the last DBSession.close() as
>it invokes the registered 'rollback' event with an already-closed
>connection, but on current master (1.4.0b1) there is no exception since a
>rollback isn't attempted, leaving the db connection in idle transaction.
>
>
> On all versions since rel_0_9_1, even after both of the script's finally
> clauses (close() statements) but before the program terminates, *the
> transaction is still left in transaction in the database, though the
> connection's been checked back into the pool.*
>
> As far as whether my code here is badly formed, my question is: is it
> wrong to mix session closing and connection closing or should that be fine?
>
> (My actual application is obviously more complex, with zope.sqlalchemy &
> transaction and frameworks; I boiled it down to this script for demo
> purposes and removed those libraries, making this code look weirder.)
>
> Thanks in advance!
> Kent
>
>

Re: [sqlalchemy] connection close() questions

2020-05-13 Thread Kent Bower
LOL, you're welcome, I'm such a great tester, aren't I?

Anyway, everything after 0.9.2 behaved this way.  0.9.1 did a rollback.

On Wed, May 13, 2020 at 11:54 AM Mike Bayer 
wrote:

> nevermind, you've managed to find a case that trips it up for the
> connection pool
>
> release today
>
>
>
> On Wed, May 13, 2020, at 11:51 AM, Mike Bayer wrote:
>
>
>
> On Wed, May 13, 2020, at 11:39 AM, Kent Bower wrote:
>
> In this script, conn.close() does *not* call rollback on the
> transaction.  It isn't just a logging issue as I've verified from the
> database that the session was not rolled back.
>
>
> I can confirm that in master only where things have changed
> dramatically.   Should not be the case for any released version, please
> confirm
>
>
>
> On Wed, May 13, 2020 at 11:31 AM Mike Bayer 
> wrote:
>
>
> Haven't looked deeply but so far what you need to know is that
> conn.close() *ALWAYS* rolls back the transaction, just not at the Engine
> level, it's at the connection pool level so you won't see it when logging /
> event hooking on the Engine.  turn on echo_pool and you will see this, in
> modern versions:
>
> 2020-05-13 11:25:45,106 DEBUG sqlalchemy.pool.impl.QueuePool Connection
> <connection object at 0x...; dsn: 'dbname=test host=localhost', closed: 0> being returned to pool
> 2020-05-13 11:25:45,107 DEBUG sqlalchemy.pool.impl.QueuePool Connection
> <connection object at 0x...; dsn: 'dbname=test host=localhost', closed: 0> rollback-on-return, via agent
>
> your DBSession is not going to close the connection because you have it
> bound directly to that connection, rather than to the engine, so it assumes
> it is participating in a larger transaction.   1.4 does amend this behavior
> to be more clear cut as we are doing away with the "nested" behaviors of
> Connection.  So yes I would not be relying upon DBSession.close() as a
> means of transaction control if the session is bound to a connection
> directly.  If the session is bound to a connection I would advise ensuring
> that connection is in a transaction on the outside that you are managing.
>
>
>
>
>
>
>
> On Wed, May 13, 2020, at 10:51 AM, Kent wrote:
>
> Mike, et al.,
>
> I've got some questions about closing connections.  I suspect my framework
> may be at fault, but there is potentially a sqlalchemy issue here as well.
>
> See attached script with nested transaction and explicit
> connection.close().
>
> Things are even more complex because versions have handled this
> differently in the past:
>
>- on rel_0_9_1 and earlier, the conn.close() *always* actually emitted
>a DBAPI ROLLBACK, but on rel_0_9_2+, the previous call to .begin_nested()
>now prevents the DBAPI ROLLBACK call, even though the close() is on the
>connection itself.  I'm not sure if that was an intended change, but it
>seems .close() on a connection should always cause ROLLBACK, no?
>- on rel_1_3_9 and earlier, this code
>raises sqlalchemy.exc.ResourceClosedError on the last DBSession.close() as
>it invokes the registered 'rollback' event with an already-closed
>connection, but on current master (1.4.0b1) there is no exception since a
>rollback isn't attempted, leaving the db connection in idle transaction.
>
>
> On all versions since rel_0_9_1, even after both of the script's finally
> clauses (close() statements) but before the program terminates, *the
> transaction is still left in transaction in the database, though the
> connection's been checked back into the pool.*
>
> As far as whether my code here is badly formed, my question is: is it
> wrong to mix session closing and connection closing or should that be fine?
>
> (My actual application is obviously more complex, with zope.sqlalchemy &
> transaction and frameworks; I boiled it down to this script for demo
> purposes and removed those libraries, making this code look weirder.)
>
> Thanks in advance!
> Kent
>
>
Re: [sqlalchemy] connection close() questions

2020-05-13 Thread Kent Bower
In this script, conn.close() does *not* call rollback on the transaction.
It isn't just a logging issue as I've verified from the database that the
session was not rolled back.

On Wed, May 13, 2020 at 11:31 AM Mike Bayer 
wrote:

> Haven't looked deeply but so far what you need to know is that
> conn.close() *ALWAYS* rolls back the transaction, just not at the Engine
> level, it's at the connection pool level so you won't see it when logging /
> event hooking on the Engine.  turn on echo_pool and you will see this, in
> modern versions:
>
> 2020-05-13 11:25:45,106 DEBUG sqlalchemy.pool.impl.QueuePool Connection
> <connection object at 0x...; dsn: 'dbname=test host=localhost', closed: 0> being returned to pool
> 2020-05-13 11:25:45,107 DEBUG sqlalchemy.pool.impl.QueuePool Connection
> <connection object at 0x...; dsn: 'dbname=test host=localhost', closed: 0> rollback-on-return, via agent
>
> your DBSession is not going to close the connection because you have it
> bound directly to that connection, rather than to the engine, so it assumes
> it is participating in a larger transaction.   1.4 does amend this behavior
> to be more clear cut as we are doing away with the "nested" behaviors of
> Connection.  So yes I would not be relying upon DBSession.close() as a
> means of transaction control if the session is bound to a connection
> directly.  If the session is bound to a connection I would advise ensuring
> that connection is in a transaction on the outside that you are managing.
>
>
>
>
>
>
>
> On Wed, May 13, 2020, at 10:51 AM, Kent wrote:
>
> Mike, et al.,
>
> I've got some questions about closing connections.  I suspect my framework
> may be at fault, but there is potentially a sqlalchemy issue here as well.
>
> See attached script with nested transaction and explicit
> connection.close().
>
> Things are even more complex because versions have handled this
> differently in the past:
>
>- on rel_0_9_1 and earlier, the conn.close() *always* actually emitted
>a DBAPI ROLLBACK, but on rel_0_9_2+, the previous call to .begin_nested()
>now prevents the DBAPI ROLLBACK call, even though the close() is on the
>connection itself.  I'm not sure if that was an intended change, but it
>seems .close() on a connection should always cause ROLLBACK, no?
>- on rel_1_3_9 and earlier, this code
>raises sqlalchemy.exc.ResourceClosedError on the last DBSession.close() as
>it invokes the registered 'rollback' event with an already-closed
>connection, but on current master (1.4.0b1) there is no exception since a
>rollback isn't attempted, leaving the db connection in idle transaction.
>
>
> On all versions since rel_0_9_1, even after both of the script's finally
> clauses (close() statements) but before the program terminates, *the
> transaction is still left in transaction in the database, though the
> connection's been checked back into the pool.*
>
> As far as whether my code here is badly formed, my question is: is it
> wrong to mix session closing and connection closing or should that be fine?
>
> (My actual application is obviously more complex, with zope.sqlalchemy &
> transaction and frameworks; I boiled it down to this script for demo
> purposes and removed those libraries, making this code look weirder.)
>
> Thanks in advance!
> Kent
>
>
>
>
> *Attachments:*
>
>- connection-close.py
>
>
> --
> SQLAlchemy -
> The Python SQL Toolkit and Object Relational Mapper
>
> http://www.sqlalchemy.org/
>
> To post example code, please provide an MCVE: Minimal, Complete, and
> Verifiable Example. See http://stackoverflow.com/help/mcve for a full
> description.
> ---
> You received this message because you are subscribed to a topic in the
> Google Groups "sqlalchemy" group.
> To unsubscribe from this topic, visit
> https://groups.google.com/d/topic/sqlalchemy/Lit5HWFiC0U/unsubscribe.
> To unsubscribe from this

[sqlalchemy] connection close() questions

2020-05-13 Thread Kent
Mike, et al.,

I've got some questions about closing connections.  I suspect my framework 
may be at fault, but there is potentially a sqlalchemy issue here as well.

See attached script with nested transaction and explicit connection.close().

Things are even more complex because versions have handled this differently 
in the past:

   - On rel_0_9_1 and earlier, conn.close() *always* actually emitted a 
   DBAPI ROLLBACK, but on rel_0_9_2+, the previous call to .begin_nested() now 
   prevents the DBAPI ROLLBACK call, even though the close() is on the 
   connection itself.  I'm not sure whether that was an intended change, but it 
   seems .close() on a connection should always cause a ROLLBACK, no?
   - On rel_1_3_9 and earlier, this code raises 
   sqlalchemy.exc.ResourceClosedError on the last DBSession.close(), as 
   it invokes the registered 'rollback' event with an already-closed 
   connection, but on current master (1.4.0b1) there is no exception since a 
   rollback isn't attempted, leaving the db connection in an idle transaction.


On all versions since rel_0_9_1, even after both of the script's finally 
clauses (the close() statements) have run but before the program terminates, *the 
connection is still left "idle in transaction" in the database, even though 
it has been checked back into the pool.*

As far as whether my code here is badly formed, my question is: is it wrong 
to mix session closing and connection closing, or should that be fine?

(My actual application is obviously more complex, with zope.sqlalchemy & 
transaction and frameworks; I boiled it down to this script for demo 
purposes and removed those libraries, making this code look weirder.)

Thanks in advance!
Kent
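
As a minimal illustration of the pattern in question (a Session bound to an
explicit Connection), here is a self-contained sketch using SQLite and the
1.4-style API; the names and in-memory URL are illustrative assumptions, not
the attached script.  Ending the transaction explicitly before close()
avoids depending on version-specific implicit-ROLLBACK behavior:

```python
from sqlalchemy import create_engine, text
from sqlalchemy.orm import scoped_session, sessionmaker

# in-memory SQLite stands in for the Postgres database in the script
engine = create_engine("sqlite://")

conn = engine.connect()
DBSession = scoped_session(sessionmaker(bind=conn))

trans = conn.begin()                 # explicit outer transaction
DBSession.execute(text("SELECT 1"))  # session work runs on that connection
DBSession.close()                    # releases the session, not the connection

# ending the transaction explicitly, rather than relying on close()
# to emit a ROLLBACK, keeps the database out of "idle in transaction":
trans.rollback()
conn.close()

assert conn.closed
```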

-- 
SQLAlchemy - 
The Python SQL Toolkit and Object Relational Mapper

http://www.sqlalchemy.org/

To post example code, please provide an MCVE: Minimal, Complete, and Verifiable 
Example.  See  http://stackoverflow.com/help/mcve for a full description.
--- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/sqlalchemy/a9d73e26-84c6-4dcf-bb6d-82f541fa1f6d%40googlegroups.com.
from sqlalchemy import *
from sqlalchemy.orm import *
from sqlalchemy import event
from sqlalchemy import __version__

print("\n## sqlalchemy %s\n" % __version__)

def do_some_info_reset(connection):
    print("## ## do_some_info_reset on %x ## ##" % id(connection))
    # access connection:
    connection.info

pg_url = 'postgresql://salespylot:salespylot@localhost:5444/salespylottest'

engine = create_engine(pg_url, echo=True)
event.listen(engine, 'rollback', do_some_info_reset)

conn = engine.connect()

maker = sessionmaker(autoflush=True, autocommit=False)
DBSession = scoped_session(maker)
DBSession.configure(bind=conn)

metadata = MetaData(engine)
# map a system postgres table for demo purposes:
table = Table("pg_language", metadata,
    Column("lanname", Unicode(255), primary_key=True))

class Something(object):
    pass

mapper(Something, table)

# mimic application layers with some try blocks:
try:
    try:
        DBSession.begin_nested()
        DBSession.query(Something).all()
        DBSession.close()
        DBSession.query(Something).all()
    finally:
        # should a direct conn.close() do a rollback, as on 0.9.1 and earlier?
        conn.close()
finally:
    DBSession.close()


[sqlalchemy] Re: recipe advice for special 1:1 relationships

2019-04-04 Thread Kent
Excellent, thanks very much!
 



[sqlalchemy] Re: recipe advice for special 1:1 relationships

2019-04-04 Thread Kent
Will 

a.b1 = None


issue a delete statement that also contains the WHERE clause to make that 
safe?  (Or, is the delete always by primary key anyway?)



[sqlalchemy] recipe advice for special 1:1 relationships

2019-04-04 Thread Kent
I've used sqlalchemy for many years and something that has come up now and 
then is the need for adding a relationship to a mapper that normally would 
be a collection (uselist=True) but instead we want to target a specific 
record in that collection.

As a simplified illustration, suppose you have CreditApp and Applicant 
classes mapped.

mapper(CreditApp, creditapp_table,
    properties={
        'applicants': relationship(Applicant,
                                   backref='app')
    })

That would work fine if you are happy to work with *applicants *as a 
collection.

BUT, in this case we really want 2 very specific 1:1 Applicant 
relationships, the primary Applicant and a secondary (joint-signer) 
Applicant:

We can hack at the primaryjoin:

mapper(CreditApp, creditapp_table,
    properties={
        'primaryapplicant': relationship(Applicant,
            primaryjoin=and_(
                creditapp_table.c.id == applicant_table.c.appid,
                applicant_table.c.primary == u'Y',  # <== THIS IS WHAT WE DON'T WANT
            ),
            foreign_keys=[applicant_table.c.appid],
            uselist=False,
            backref='app'),
        'secondaryapplicant': relationship(Applicant,
            primaryjoin=and_(
                creditapp_table.c.id == applicant_table.c.appid,
                applicant_table.c.primary == u'N',  # <== THIS IS WHAT WE DON'T WANT
            ),
            foreign_keys=[applicant_table.c.appid],
            uselist=False,
            backref='app'),
    })

This kind of works, but it is ugly since sqlalchemy doesn't really 
understand what we've done.

For example, if I set

myapp.primaryapplicant = Applicant()

sqlalchemy doesn't really understand the new record should have primary 
flag set to 'Y'

Also:

myapp.primaryapplicant = None

may issue SQL that deletes both applicants if I recall.


What is a better recipe for this?  Would association proxies help?  Would 
polymorphic inheritance work this out properly (single table inheritance)?

Please let me know.  Thanks!

Kent
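
One of the approaches the post asks about, single-table inheritance, can be
sketched as follows.  This is a hedged illustration using the declarative
1.4-style API with SQLite; all class, table, and column names are invented
for the example, not taken from the original mapping:

```python
from sqlalchemy import Column, ForeignKey, Integer, String, create_engine
from sqlalchemy.orm import Session, declarative_base, relationship

Base = declarative_base()

class Applicant(Base):
    __tablename__ = "applicant"
    id = Column(Integer, primary_key=True)
    appid = Column(Integer, ForeignKey("creditapp.id"))
    primary = Column(String(1))
    # discriminate subclasses on the existing primary-flag column:
    __mapper_args__ = {"polymorphic_on": primary}

class PrimaryApplicant(Applicant):
    __mapper_args__ = {"polymorphic_identity": "Y"}

class SecondaryApplicant(Applicant):
    __mapper_args__ = {"polymorphic_identity": "N"}

class CreditApp(Base):
    __tablename__ = "creditapp"
    id = Column(Integer, primary_key=True)
    # relationships to the subclasses are naturally scalar 1:1;
    # overlaps= silences the shared-foreign-key warning:
    primaryapplicant = relationship(
        PrimaryApplicant, uselist=False, overlaps="secondaryapplicant")
    secondaryapplicant = relationship(
        SecondaryApplicant, uselist=False, overlaps="primaryapplicant")

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)

with Session(engine) as s:
    app = CreditApp(id=1, primaryapplicant=PrimaryApplicant(id=10))
    s.add(app)
    s.commit()
    # the polymorphic identity fills in the flag on INSERT:
    assert s.get(Applicant, 10).primary == "Y"
```

Because the discriminator is handled by the mapper, assigning a
PrimaryApplicant sets the flag automatically, and the ORM understands the
per-subclass criteria rather than treating them as opaque primaryjoin hacks.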



Re: [sqlalchemy] use_ansi inconsistency with relationship's primaryjoin

2017-09-13 Thread Kent
Never mind that last silly 0.7 question.  (Your patch is compatible it 
seems.)  (And pretend that you didn't hear any mention of 0.7)

Thanks again for your awesome software!
Kent


On Wednesday, September 13, 2017 at 3:42:55 PM UTC-4, Kent wrote:
>
>
>> dude!it is 2017.   get on this client!   :)   I literally have to 
>> maintain this feature for you personally :). 
>>
>>
>>
> Hahaha!  If you could only feel my pain!  Unfortunately, there are 
> multiple clients still on 8i because they are stuck on it due to their 
> legacy application (which we are working hard to replace and get them on 
> PostgreSQL.)  (So, at least you know the 8i stuff is helping multiple 
> companies :)
>  
>
>> > 
>> > The cleanest approach is specifying that the 'primaryjoin' to the 
>> > relationship in the mapper should include an extra join clause.  I hate 
>> > doing this, but after many other approaches, I've found this is by far 
>> the 
>> > cleanest approach due to bad database design (which I can't control -- 
>> > legacy). 
>> > 
> > Anyway, the attached script shows a simplified, analogous mock-up, 
>> which 
>> > works correctly when joins are ANSI and incorrectly with 
>> use_ansi=False. 
>> > 
>> > The script demonstrates an inconsistency in use_ansi True vs. False on 
>> > sqlalchemy version 1.1.14 (although my sqlalchemy is older). 
>> > 
>> > In the use_ansi=False SQL, the correct "fix" would be changing the 
>> rendered: 
>> > 
>> > AND bugs_1.deathdate IS NULL 
>> > 
>> > into 
>> > 
>> > AND bugs_1.deathdate(+) IS NULL 
>> > 
>> > This then matches the ANSI join and works on 8i (I've tested it). 
>> > 
>> > Is this something we can fix?  Since the column is on the remote table 
>> and 
>> > specified in the join condition, it really needs "(+)" after the column 
>> name 
>> > in SQL.  This accomplishes the same thing as the ANSI version placing 
>> this 
>> > join condition in the "ON ..." clause instead of the "WHERE". 
>> > 
>> > Alternatively, is there a hack I could use to fix the rendered SQL on 
>> > joinedloads for this particular relationship? 
>>
>> the miracle of Docker means that I now have easy to run Oracle, SQL 
>> Server, etc. databases anywhere I need them so I can quickly confirm 
>> that this works with ansi or not: 
>>
>> mapper(Rock, rocks_table, 
>> properties={ 
>> 'livingbugs': relationship(Bug, 
>> primaryjoin=and_( 
>> bugs_table.c.rockid == rocks_table.c.id, 
>> bugs_table.c.deathdate.op("(+)=")(null()), 
>> )), 
>> }) 
>>
>>
> I'll find a shield to hide behind and then dare to ask "Is there a way to 
> hack that fix on SqlAlchemy 0.7?"
>
> Thanks for looking at this!
> Kent
>



Re: [sqlalchemy] use_ansi inconsistency with relationship's primaryjoin

2017-09-13 Thread Kent

>
>
> dude!it is 2017.   get on this client!   :)   I literally have to 
> maintain this feature for you personally :). 
>
>
>
Hahaha!  If you could only feel my pain!  Unfortunately, there are multiple 
clients still on 8i because they are stuck on it due to their legacy 
application (which we are working hard to replace and get them on 
PostgreSQL.)  (So, at least you know the 8i stuff is helping multiple 
companies :)
 

> > 
> > The cleanest approach is specifying that the 'primaryjoin' to the 
> > relationship in the mapper should include an extra join clause.  I hate 
> > doing this, but after many other approaches, I've found this is by far 
> the 
> > cleanest approach due to bad database design (which I can't control -- 
> > legacy). 
> > 
> > Anyway, the attached script shows a simplified, analogous mock-up, 
> which 
> > works correctly when joins are ANSI and incorrectly with use_ansi=False. 
> > 
> > The script demonstrates an inconsistency in use_ansi True vs. False on 
> > sqlalchemy version 1.1.14 (although my sqlalchemy is older). 
> > 
> > In the use_ansi=False SQL, the correct "fix" would be changing the 
> rendered: 
> > 
> > AND bugs_1.deathdate IS NULL 
> > 
> > into 
> > 
> > AND bugs_1.deathdate(+) IS NULL 
> > 
> > This then matches the ANSI join and works on 8i (I've tested it). 
> > 
> > Is this something we can fix?  Since the column is on the remote table 
> and 
> > specified in the join condition, it really needs "(+)" after the column 
> name 
> > in SQL.  This accomplishes the same thing as the ANSI version placing 
> this 
> > join condition in the "ON ..." clause instead of the "WHERE". 
> > 
> > Alternatively, is there a hack I could use to fix the rendered SQL on 
> > joinedloads for this particular relationship? 
>
> the miracle of Docker means that I now have easy to run Oracle, SQL 
> Server, etc. databases anywhere I need them so I can quickly confirm 
> that this works with ansi or not: 
>
> mapper(Rock, rocks_table, 
> properties={ 
> 'livingbugs': relationship(Bug, 
> primaryjoin=and_( 
>     bugs_table.c.rockid == rocks_table.c.id, 
> bugs_table.c.deathdate.op("(+)=")(null()), 
> )), 
> }) 
>
>
I'll find a shield to hide behind and then dare to ask "Is there a way to 
hack that fix on SqlAlchemy 0.7?"

Thanks for looking at this!
Kent



[sqlalchemy] use_ansi inconsistency with relationship's primaryjoin

2017-09-13 Thread Kent
I've got a strange relationship on a legacy Oracle 8i database which I need 
to support (whether I like it or not).

The cleanest approach is specifying that the 'primaryjoin' to the 
relationship in the mapper should include an extra join clause.  I hate 
doing this, but after many other approaches, I've found this is by far the 
cleanest approach due to bad database design (which I can't control -- 
legacy).

Anyway, the attached script shows a simplified, analogous mock-up, which 
works *correctly* when joins are ANSI and *incorrectly* with use_ansi=False.

The script demonstrates an inconsistency in use_ansi True vs. False on 
sqlalchemy version 1.1.14 (although my sqlalchemy is older). 

In the use_ansi=False SQL, the correct "fix" would be changing the rendered:

AND bugs_1.deathdate IS NULL

into

AND bugs_1.deathdate(+) IS NULL

This then matches the ANSI join and works on 8i (I've tested it).

Is this something we can fix?  Since the column is on the* remote table *and 
*specified in the join condition*, it really needs "(+)" after the column 
name in SQL.  This accomplishes the same thing as the ANSI version placing 
this join condition in the "ON ..." clause instead of the "WHERE".

Alternatively, is there a hack I could use to fix the rendered SQL on 
joinedloads for this particular relationship?

Thanks very much in advance!
Kent
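
The workaround suggested earlier in the thread uses the generic .op()
construct to render the legacy Oracle outer-join marker directly in the join
condition.  A minimal sketch of how that expression renders (table and
column names mirror the attached script; no Oracle database is needed just
to inspect the SQL):

```python
from sqlalchemy import Column, Date, Integer, MetaData, Table, null

metadata = MetaData()
bugs_table = Table(
    "bugs", metadata,
    Column("id", Integer, primary_key=True),
    Column("deathdate", Date),
)

# .op() builds an arbitrary infix operator, so the old-style Oracle
# outer-join marker can be emitted in place of "IS NULL":
expr = bugs_table.c.deathdate.op("(+)=")(null())

# the compiled expression contains the (+)= marker:
assert "(+)= NULL" in str(expr)
```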

from sqlalchemy import *
from sqlalchemy.orm import *
from datetime import date


# if use_ansi=True, this script succeeds
# if False, this script fails

use_ansi = False
#use_ansi = True

engine = create_engine('oracle://kent:kent@localhost:1521/xe', use_ansi=use_ansi, echo=True)

metadata = MetaData(engine)
Session = sessionmaker(bind=engine)

# a rock has many bugs

rocks_table = Table("rocks", metadata,
    Column("id", Integer, primary_key=True),
)

bugs_table = Table("bugs", metadata,
    Column("id", Integer, primary_key=True),
    Column("rockid", Integer, ForeignKey('rocks.id')),
    Column("deathdate", Date),
)

class Rock(object):
    pass

class Bug(object):
    pass

mapper(Rock, rocks_table,
    properties={
        'livingbugs': relationship(Bug,
            primaryjoin=and_(
                bugs_table.c.rockid == rocks_table.c.id,
                bugs_table.c.deathdate == None,
            )),
    })

mapper(Bug, bugs_table)

metadata.create_all()
try:
    s = Session()

    r = Rock()
    r.id = 55

    b = Bug()
    b.id = 1
    b.rockid = 55
    b.deathdate = date.today()

    s.add(r)
    s.add(b)
    s.commit()

    s = Session()

    rocks = s.query(Rock).options(joinedload('livingbugs')).all()
    if not rocks:
        # When not using ANSI, if
        #     AND bugs_1.deathdate IS NULL
        # is changed to:
        #     AND bugs_1.deathdate(+) IS NULL
        # then the join is consistent with the ANSI join and doesn't fail
        raise Exception("Rock not selected")

finally:
    metadata.drop_all()




Re: [sqlalchemy] Re: deferred column_properties should probably not be expired unless they were already loaded

2017-05-10 Thread Kent Bower
Thanks very much!

On Wed, May 10, 2017 at 2:24 PM, mike bayer <mike...@zzzcomputing.com>
wrote:

> this is all patched in 1.2, your original test works too.
>
> The fix here is a little too intricate for 1.1 right now as this is a very
> long-standing bug (goes back to 0.7 at least and probably further) and
> 1.1 is getting near maintenance mode.
>
>
>
> On 05/10/2017 01:48 PM, mike bayer wrote:
>
>> nevermind, the issue is at
>>
>> https://bitbucket.org/zzzeek/sqlalchemy/issues/3984/deferred
>> -column_property-gets-set-to
>>
>> the fix is not as obvious as that, that particular check is assuming a
>> column_property() where its value was never present in __dict__ in the
>> first place, so it needs to be marked "expired".
>>
>>
>> On 05/10/2017 01:38 PM, Kent wrote:
>>
>>> The regular columns seem to expire and reload properly without issue.
>>>  (Is that what you're asking?)
>>>
>>> You want me to submit a PR changing:
>>>
>>> if p.expire_on_flush or p.key not in state.dict
>>>
>>> to
>>>
>>> if p.expire_on_flush and p.key in state.dict
>>>
>>>
>>> ?
>>>
>>> (If so, which branch?)
>>>
>>>
>>> On Wednesday, May 10, 2017 at 12:55:45 PM UTC-4, Mike Bayer wrote:
>>>
>>> so you can confirm this is only for custom SQL + column_property(),
>>> not
>>>     a regular column right?  definitely a bug for 1.2 if you can post it
>>> up
>>>
>>>
>>> On 05/10/2017 12:37 PM, Kent wrote:
>>>  > I'm thinking that should be
>>>  > *"if p.expire_on_flush and p.key in state.dict"*
>>>  >
>>>  >
>>>  > On Wednesday, May 10, 2017 at 11:35:30 AM UTC-4, Kent wrote:
>>>  >
>>>  > deferred column_properties may be less-efficient subquery
>>> selects
>>>  > (and thus marked deferred).  When a flush occurs that updates
>>> an
>>>  > object, any read-only column_properties are marked as
>>> expired, even
>>>  > if they weren't even loaded.  This means if the object needs
>>> to be
>>>  > refreshed, all these deferred column properties are loaded.
>>>  >
>>>  > We probably want the behavior to only expire read-only
>>> attributes
>>>  > that were actually loaded, right?
>>>  >
>>>  > See attached script.  This behavior is as of 1.1.1
>>>  >
>>>  > Thoughts?
>>>  >
>>>  >

Re: [sqlalchemy] Re: deferred column_properties should probably not be expired unless they were already loaded

2017-05-10 Thread Kent Bower
If it was never present in __dict__, why does it need to be marked as expired
after an insert or update?  If it's not in __dict__ and is referenced, won't
it load whether or not it is marked as expired?


On Wed, May 10, 2017 at 1:48 PM, mike bayer <mike...@zzzcomputing.com>
wrote:

> nevermind, the issue is at
>
> https://bitbucket.org/zzzeek/sqlalchemy/issues/3984/deferred
> -column_property-gets-set-to
>
> the fix is not as obvious as that, that particular check is assuming a
> column_property() where its value was never present in __dict__ in the
> first place, so it needs to be marked "expired".
>
>
> On 05/10/2017 01:38 PM, Kent wrote:
>
>> The regular columns seem to expire and reload properly without issue.
>>  (Is that what you're asking?)
>>
>> You want me to submit a PR changing:
>>
>> if p.expire_on_flush or p.key not in state.dict
>>
>> to
>>
>> if p.expire_on_flush and p.key in state.dict
>>
>>
>>
>> ?
>>
>> (If so, which branch?)
>>
>>
>> On Wednesday, May 10, 2017 at 12:55:45 PM UTC-4, Mike Bayer wrote:
>>
>> so you can confirm this is only for custom SQL + column_property(),
>> not
>> a regular column right?  definitely a bug for 1.2 if you can post it
>> up
>>
>>
>>     On 05/10/2017 12:37 PM, Kent wrote:
>>  > I'm thinking that should be
>>  > *"if p.expire_on_flush and p.key in state.dict"*
>>  >
>>  >
>>  > On Wednesday, May 10, 2017 at 11:35:30 AM UTC-4, Kent wrote:
>>  >
>>  > deferred column_properties may be less-efficient subquery
>> selects
>>  > (and thus marked deferred).  When a flush occurs that updates
>> an
>>  > object, any read-only column_properties are marked as
>> expired, even
>>  > if they weren't even loaded.  This means if the object needs
>> to be
>>  > refreshed, all these deferred column properties are loaded.
>>  >
>>  > We probably want the behavior to only expire read-only
>> attributes
>>  > that were actually loaded, right?
>>  >
>>  > See attached script.  This behavior is as of 1.1.1
>>  >
>>  > Thoughts?
>>  >
>>  >

Re: [sqlalchemy] Re: deferred column_properties should probably not be expired unless they were already loaded

2017-05-10 Thread Kent
The regular columns seem to expire and reload properly without issue.  (Is 
that what you're asking?)

You want me to submit a PR changing:

if p.expire_on_flush or p.key not in state.dict

to 

if p.expire_on_flush and p.key in state.dict

?

(If so, which branch?)


On Wednesday, May 10, 2017 at 12:55:45 PM UTC-4, Mike Bayer wrote:
>
> so you can confirm this is only for custom SQL + column_property(), not 
> a regular column right?  definitely a bug for 1.2 if you can post it up 
>
>
> On 05/10/2017 12:37 PM, Kent wrote: 
> > I'm thinking that should be 
> > *"if p.expire_on_flush and p.key in state.dict"* 
> > 
> > 
> > On Wednesday, May 10, 2017 at 11:35:30 AM UTC-4, Kent wrote: 
> > 
> > deferred column_properties may be less-efficient subquery selects 
> > (and thus marked deferred).  When a flush occurs that updates an 
> > object, any read-only column_properties are marked as expired, even 
> > if they weren't even loaded.  This means if the object needs to be 
> > refreshed, all these deferred column properties are loaded. 
> > 
> > We probably want the behavior to only expire read-only attributes 
> > that were actually loaded, right? 
> > 
> > See attached script.  This behavior is as of 1.1.1 
> > 
> > Thoughts? 
> > 
> > 


Re: [sqlalchemy] Re: deferred column_properties should probably not be expired unless they were already loaded

2017-05-10 Thread Kent Bower
The regular columns seem to expire and reload properly without issue.  (Is
that what you're asking?)

You want me to submit a PR changing:

if p.expire_on_flush or p.key not in state.dict

to

if p.expire_on_flush and p.key in state.dict

?

On Wed, May 10, 2017 at 12:55 PM, mike bayer <mike...@zzzcomputing.com>
wrote:

> so you can confirm this is only for custom SQL + column_property(), not a
> regular column right?  definitely a bug for 1.2 if you can post it up
>
>
> On 05/10/2017 12:37 PM, Kent wrote:
>
>> I'm thinking that should be
>> *"if p.expire_on_flush and p.key in state.dict"*
>>
>>
>> On Wednesday, May 10, 2017 at 11:35:30 AM UTC-4, Kent wrote:
>>
>> deferred column_properties may be less-efficient subquery selects
>> (and thus marked deferred).  When a flush occurs that updates an
>> object, any read-only column_properties are marked as expired, even
>> if they weren't even loaded.  This means if the object needs to be
>> refreshed, all these deferred column properties are loaded.
>>
>> We probably want the behavior to only expire read-only attributes
>> that were actually loaded, right?
>>
>> See attached script.  This behavior is as of 1.1.1
>>
>> Thoughts?
>>
>>
>> --
>> SQLAlchemy -
>> The Python SQL Toolkit and Object Relational Mapper
>>
>> http://www.sqlalchemy.org/
>>
>> To post example code, please provide an MCVE: Minimal, Complete, and
>> Verifiable Example. See http://stackoverflow.com/help/mcve for a full
>> description.
>> ---
>> You received this message because you are subscribed to the Google Groups
>> "sqlalchemy" group.
>> To unsubscribe from this group and stop receiving emails from it, send an
>> email to sqlalchemy+unsubscr...@googlegroups.com > sqlalchemy+unsubscr...@googlegroups.com>.
>> To post to this group, send email to sqlalchemy@googlegroups.com > sqlalchemy@googlegroups.com>.
>> Visit this group at https://groups.google.com/group/sqlalchemy.
>> For more options, visit https://groups.google.com/d/optout.
>>
>



[sqlalchemy] Re: deferred column_properties should probably not be expired unless they were already loaded

2017-05-10 Thread Kent
I'm thinking that should be 
*"if p.expire_on_flush and p.key in state.dict"*


On Wednesday, May 10, 2017 at 11:35:30 AM UTC-4, Kent wrote:
>
> deferred column_properties may be less-efficient subquery selects (and 
> thus marked deferred).  When a flush occurs that updates an object, any 
> read-only column_properties are marked as expired, even if they weren't 
> even loaded.  This means if the object needs to be refreshed, all these 
> deferred column properties are loaded.  
>
> We probably want the behavior to only expire read-only attributes that 
> were actually loaded, right?
>
> See attached script.  This behavior is as of 1.1.1
>
> Thoughts?
>
>
>



[sqlalchemy] Re: deferred column_properties should probably not be expired unless they were already loaded

2017-05-10 Thread Kent
Another question surrounding this:

in persistence.py:

def _finalize_insert_update_commands(...):
    ...
    if mapper._readonly_props:
        readonly = state.unmodified_intersection(
            [p.key for p in mapper._readonly_props
             if p.expire_on_flush or p.key not in state.dict]
        )
        if readonly:
            state._expire_attributes(state.dict, readonly)


I'm confused by "or p.key *not *in state.dict"... wouldn't we want to 
expire the ones that *are *in state.dict?

Wouldn't we want   

"or p.key in state.dict"

?

Just wanted someone to look that code over again.
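[Editor's note: a minimal, library-free sketch of the behavior being proposed here, i.e. expire a read-only property only when it was actually loaded (present in the state dict). The names below are illustrative stand-ins, not SQLAlchemy's actual internals:

# Illustrative model: a "state dict" holds only attributes that were loaded.
# The proposed rule expires a read-only prop only if expire_on_flush is set
# AND the attribute is actually present (loaded) in the state dict.
def keys_to_expire(readonly_props, state_dict):
    return [
        key for (key, expire_on_flush) in readonly_props
        if expire_on_flush and key in state_dict
    ]

props = [("total", True), ("subtotal", True), ("label", False)]
loaded = {"total": 100}          # "subtotal" was never loaded
assert keys_to_expire(props, loaded) == ["total"]

Under this rule, a deferred column_property that was never loaded is simply left alone at flush time.]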


On Wednesday, May 10, 2017 at 11:35:30 AM UTC-4, Kent wrote:
>
> deferred column_properties may be less-efficient subquery selects (and 
> thus marked deferred).  When a flush occurs that updates an object, any 
> read-only column_properties are marked as expired, even if they weren't 
> even loaded.  This means if the object needs to be refreshed, all these 
> deferred column properties are loaded.  
>
> We probably want the behavior to only expire read-only attributes that 
> were actually loaded, right?
>
> See attached script.  This behavior is as of 1.1.1
>
> Thoughts?
>
>
>



Re: [sqlalchemy] Filters on specific tables of concrete polymorphic union

2017-04-13 Thread Kent
Awesome!

I like the second approach better for the exact same reasons.  

Thanks so much!
Kent



On Thursday, April 13, 2017 at 1:50:40 PM UTC-4, Mike Bayer wrote:
>
>
> it has nothing to do with joined table inheritance, in your example, 
> your base mapper is already mapped to "preferences_union", so if you 
> provide an alternative selectable that has no relationship to that, it 
> does not see any of the required columns being provided.   it's just 
> like if your PreferencesBase were mapped to a view in the database, it 
> would have no idea about the tables represented in that view.  so while 
> it renders your new polymorphic union, it also renders the old one 
> because it still needs to load from preferences_union.c.preferenceid, 
> preferences_union.c.value, etc. which are not being substituted. 
>
> there's not a facility right now that can automatically handle the 
> scenario of, given: 
>
>
> SELECT a, b, c FROM ( 
> select a, b, c FROM table1 
> UNION 
> select a, b, c FROM table2 
> ) AS p_alias_1 
>
> that we want to make a whole new expression out of table1/table2: 
>
>
> SELECT a, b, c FROM ( 
> select a, b, c FROM table1 WHERE b=1 
> UNION 
> select a, b, c FROM table2 WHERE c=2 
> ) AS p_alias_2 
>
>
> and then "collide" the second into the first, such that we can figure 
> out that when our mapping wants p_alias_1.a, it can get that now from 
> p_alias_2.a, because this requires understanding the semantics of the 
> query.   clause adaptation usually looks for the target columns you're 
> trying to adapt from in the target selectable, rather than trying to 
> match on a general "seems to link to the same common columns" as that 
> produces a lot of ambiguous cases. 
>
> *unless*, you adapt on the string name of the outer columns rather than 
> trying to link them up semantically.  In this case, you as the user are 
> telling the system that you've already done the work of ensuring your 
> new selectable links to the first one the way you want, and you've lined 
> up the outermost column names as the means of doing this. 
>
> There is an "adapt_on_names" feature that does this, which is provided 
> as part of aliased(), where it matches p_alias_1.a to p_alias_2.a using 
> the string name "a".   adapt_on_names is accepted by aliased(), but 
> currently not with_polymorphic() (this can be added, send a PR).   So we 
> can build up w/ the AliasedClass directly: 
>
> from sqlalchemy.orm.util import AliasedClass 
>
> a = AliasedClass( 
>  PreferenceBase, 
>  u, 
>  with_polymorphic_mappers=[ 
>  inspect(GlobalPreference), 
>  inspect(SitePreference), 
>  inspect(UserPreference) 
>  ], 
>  with_polymorphic_discriminator=u.c.type, 
>  adapt_on_names=True 
> ) 
>
> so that's one way. 
>
> next approach, which I think is neater, is to do sort of what I 
> suggested but do it by adapting your original polymorphic, so that it 
> *does* line up.  In the view metaphor, this means you're using the view 
> but swapping out the insides.  this looks like this: 
>
> def apply_polymorphic_criteria(orig, target_table, criteria): 
>  from sqlalchemy.sql import visitors 
>
>  def provide_new_select(element): 
>  if target_table in element.froms: 
>  element.append_whereclause(criteria) 
>
>  return visitors.cloned_traverse( 
>  orig, 
>  {}, 
>  {"select": provide_new_select} 
>  ) 
>
> u = apply_polymorphic_criteria( 
>  preferences_union, 
>  userpreferences_table, 
>  userpreferences_table.c.username == 'kb' 
> ) 
> u = apply_polymorphic_criteria( 
>  u, 
>  sitepreferences_table, 
>  sitepreferences_table.c.siteid == '00' 
> ) 
> a = with_polymorphic(PreferenceBase, '*', selectable=u, 
> polymorphic_on=u.c.type) 
>
>
> why I like that is because you don't have to build up the whole 
> polymorphic_union all over again.   also the reliance on names (which 
> always makes me nervous) is not needed.  I like that approach a lot 
> better but there's a little more rocket science involved. 
>
> also, I can't remember the last time I gave someone a neat little 
> recipe to do something unusual and it immediately fails in four other 
> ways and then I just have to write them a new feature, so, there's that, 
> try out both of these. 
>
>
>
>
>
>
>
>
>
> On 04/13/2017 12:39 PM, Kent wrote: 
> > That was the first route I tried.  with_polymorphic() seems to cater to 
> > or assume joined table inheritance.  When I pass a selectable, it 

Re: [sqlalchemy] Filters on specific tables of concrete polymorphic union

2017-04-13 Thread Kent
That was the first route I tried.  with_polymorphic() seems to cater to or 
assume joined table inheritance.  When I pass a selectable, it always ends 
up *joining *my base to that selectable instead of *using only my 
selectable*.

My problem might be that I'm trying to take advantage of 
polymorphic_union() to render my union with all the CAST(NULL AS ...), but 
it does exactly what I need for building a select on the fly.

Attached script is my failing attempt.

Is there a recipe or example using concrete inheritance and 
with_polymorphic?


On Thursday, April 13, 2017 at 10:43:15 AM UTC-4, Mike Bayer wrote:
>
>
>
> On 04/13/2017 10:24 AM, Kent wrote: 
> > Suppose we have the documentation's example of *Concrete Table 
> > Inheritance, *where 
> > 
> > session.query(Employee).all() 
> > 
> > 
> > produces this: 
> > 
> > SELECT pjoin.type AS pjoin_type, 
> > pjoin.manager_data AS pjoin_manager_data, 
> > pjoin.employee_id AS pjoin_employee_id, 
> > pjoin.name AS pjoin_name, pjoin.engineer_info AS pjoin_engineer_info 
> > FROM ( 
> > SELECT employees.employee_id AS employee_id, 
> > CAST(NULL AS VARCHAR(50)) AS manager_data, employees.name AS 
> name, 
> > CAST(NULL AS VARCHAR(50)) AS engineer_info, 'employee' AS type 
> > FROM employees 
> > UNION ALL 
> > SELECT managers.employee_id AS employee_id, 
> > managers.manager_data AS manager_data, managers.name AS name, 
> > CAST(NULL AS VARCHAR(50)) AS engineer_info, 'manager' AS type 
> > FROM managers 
> > UNION ALL 
> > SELECT engineers.employee_id AS employee_id, 
> > CAST(NULL AS VARCHAR(50)) AS manager_data, engineers.name AS 
> name, 
> > engineers.engineer_info AS engineer_info, 'engineer' AS type 
> > FROM engineers 
> > ) AS pjoin 
> > 
> > 
> > Suppose we want to*filter certain managers*, which we can do with: 
> > 
> > session.query(Employee)\ 
> > 
> > .filter(or_( 
> > 
> > Employee.manager_data == u'whatineed', 
> > 
> > Employee.manager_data == None))\ 
> > 
> > .all() 
> > 
> > 
> > If manager_data is indexed, many databases can no longer use this index. 
> >  What we really want is: 
> > 
> > SELECT pjoin.type AS pjoin_type, 
> > pjoin.manager_data AS pjoin_manager_data, 
> > pjoin.employee_id AS pjoin_employee_id, 
> > pjoin.name AS pjoin_name, pjoin.engineer_info AS pjoin_engineer_info 
> > FROM ( 
> > SELECT employees.employee_id AS employee_id, 
> > CAST(NULL AS VARCHAR(50)) AS manager_data, employees.name AS 
> name, 
> > CAST(NULL AS VARCHAR(50)) AS engineer_info, 'employee' AS type 
> > FROM employees 
> > UNION ALL 
> > SELECT managers.employee_id AS employee_id, 
> > managers.manager_data AS manager_data, managers.name AS name, 
> > CAST(NULL AS VARCHAR(50)) AS engineer_info, 'manager' AS type 
> > FROM managers 
> > 
> > *WHERE manager_data = 'whatineed'* 
> > UNION ALL 
> > SELECT engineers.employee_id AS employee_id, 
> > CAST(NULL AS VARCHAR(50)) AS manager_data, engineers.name AS 
> name, 
> > engineers.engineer_info AS engineer_info, 'engineer' AS type 
> > FROM engineers 
> > ) AS pjoin 
> > 
> > 
> > Is there a way to accomplish this? 
>
>
> Certainly, construct the complete UNION query that you want, most likely 
> using Core select() and union(),  and supply it to Query using 
> with_polymorphic; see the "custom selectable" example in 
>
> http://docs.sqlalchemy.org/en/latest/orm/inheritance.html#basic-control-of-which-tables-are-queried.
>  
>
>
> Automating this process, perhaps you could compose some enhanced version 
> of the polymorhic_union() feature that accepts additional criteria. 
>
> However, I would suggest that if the "manager_data is NULL" part is what 
> screws up the index, you might want to query like this instead: 
>
> session.query(Employee)\ 
> .filter(or_(Employee.manager_data == u'whatineed', 
>   pjoin.c.type != 'manager')).all() 
>
>
> > 
> > Thanks in advance, 
> > Kent 
> > 

[sqlalchemy] Filters on specific tables of concrete polymorphic union

2017-04-13 Thread Kent
Suppose we have the documentation's example of *Concrete Table Inheritance, 
*where

session.query(Employee).all()


produces this:

SELECT pjoin.type AS pjoin_type,
pjoin.manager_data AS pjoin_manager_data,
pjoin.employee_id AS pjoin_employee_id,
pjoin.name AS pjoin_name, pjoin.engineer_info AS pjoin_engineer_info
FROM (
SELECT employees.employee_id AS employee_id,
CAST(NULL AS VARCHAR(50)) AS manager_data, employees.name AS name,
CAST(NULL AS VARCHAR(50)) AS engineer_info, 'employee' AS type
FROM employees
UNION ALL
SELECT managers.employee_id AS employee_id,
managers.manager_data AS manager_data, managers.name AS name,
CAST(NULL AS VARCHAR(50)) AS engineer_info, 'manager' AS type
FROM managers
UNION ALL
SELECT engineers.employee_id AS employee_id,
CAST(NULL AS VARCHAR(50)) AS manager_data, engineers.name AS name,
engineers.engineer_info AS engineer_info, 'engineer' AS type
FROM engineers
) AS pjoin


Suppose we want to* filter certain managers*, which we can do with:

session.query(Employee)\

.filter(or_(

Employee.manager_data == u'whatineed',

Employee.manager_data == None))\

.all()


If manager_data is indexed, many databases can no longer use this index. 
 What we really want is:

SELECT pjoin.type AS pjoin_type,
pjoin.manager_data AS pjoin_manager_data,
pjoin.employee_id AS pjoin_employee_id,
pjoin.name AS pjoin_name, pjoin.engineer_info AS pjoin_engineer_info
FROM (
SELECT employees.employee_id AS employee_id,
CAST(NULL AS VARCHAR(50)) AS manager_data, employees.name AS name,
CAST(NULL AS VARCHAR(50)) AS engineer_info, 'employee' AS type
FROM employees
UNION ALL
SELECT managers.employee_id AS employee_id,
managers.manager_data AS manager_data, managers.name AS name,
CAST(NULL AS VARCHAR(50)) AS engineer_info, 'manager' AS type
FROM managers

*WHERE manager_data = 'whatineed'*
UNION ALL
SELECT engineers.employee_id AS employee_id,
CAST(NULL AS VARCHAR(50)) AS manager_data, engineers.name AS name,
engineers.engineer_info AS engineer_info, 'engineer' AS type
FROM engineers
) AS pjoin


Is there a way to accomplish this?

Thanks in advance,
Kent



Re: [sqlalchemy] inspect a scalar relationship property when it is loaded

2016-10-28 Thread Kent Bower
The load() event would also capture if it were subqueryload()ed, right?

And overriding Query.__iter__ wouldn't catch the cases where the scalar is
a "use_get" lookup that was already in session.identity_map...  No good way
to intercept attributes.set_committed() for that, is there?

On Fri, Oct 28, 2016 at 11:21 AM, mike bayer <mike...@zzzcomputing.com>
wrote:

>
>
> On 10/28/2016 10:48 AM, Kent wrote:
>
>> @validates and 'set' attribute events will only fire when the /user/
>> sets a property (setattr), not when initially loaded by the orm.
>>
>> Is there a way to intercept (for inspection) a scalar relationship
>> property instance when it is loaded?  I don't think the 'load' event
>> will work because I won't know if the current instance is being loaded
>> in the context of setting an instrumented attribute.
>>
>> For example:
>>
>> If I have a parent object, I want to know when parent.childobj is
>> populated by the orm framework (whether on a lazy load or joined load).
>>
>> Is that possible?
>>
>> Thanks in advance!
>>
>
> based on the laws of twos, being that any random issue that has never been
> asked ever for ten years will suddenly be asked for twice within two days,
> the answer is no!   the event proposed would be collection_populate, at the
> same time we'd add scalar_populate, see https://groups.google.com/d/ms
> g/sqlalchemy/IfuUwUgKwW0/5ZgPmpAvBAAJ .
>
> Interim solution is to subclass Query and override __iter__.   The load()
> event will get you eagerly-loaded attributes though, set it on "parent"'s
> class and then inspect "childobj".
>
>
>
>
>



[sqlalchemy] inspect a scalar relationship property when it is loaded

2016-10-28 Thread Kent
@validates and 'set' attribute events will only fire when the *user* sets a 
property (setattr), not when initially loaded by the orm.

Is there a way to intercept (for inspection) a scalar relationship property 
instance when it is loaded?  I don't think the 'load' event will work 
because I won't know if the current instance is being loaded in the context 
of setting an instrumented attribute.

For example:

If I have a parent object, I want to know when parent.childobj is populated 
by the orm framework (whether on a lazy load or joined load).

Is that possible?

Thanks in advance!



Re: [sqlalchemy] suppress echo of INSERT/UPDATE large binary data

2016-08-18 Thread Kent Bower
I attempted to search for such an enhancement but obviously failed to find
it.   Patching the changeset in was fairly straightforward.

Thanks very much Jonathan and Mike!!
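
[Editor's note: for anyone stuck on a pre-1.1 version, the "logging output filter" workaround suggested below can be a small stdlib logging.Filter that shortens oversized records before they reach the handler. The length limit and target logger name are illustrative choices:

import logging

class TruncateLargeParams(logging.Filter):
    """Truncate huge log records (e.g. BLOB bind params echoed by the engine)."""
    def __init__(self, limit=500):
        super(TruncateLargeParams, self).__init__()
        self.limit = limit

    def filter(self, record):
        msg = record.getMessage()
        if len(msg) > self.limit:
            record.msg = msg[:self.limit] + \
                "... (%d characters truncated)" % (len(msg) - self.limit)
            record.args = ()
        return True  # never drop the record, only shorten it

# illustrative target: the engine logger that echo=True writes to
logging.getLogger("sqlalchemy.engine").addFilter(TruncateLargeParams(limit=500))

This keeps INFO-level SQL echo useful while bounding the size of any single log line.]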



On Wed, Aug 17, 2016 at 5:41 PM, Mike Bayer <mike...@zzzcomputing.com>
wrote:

>
>
> On 08/17/2016 01:25 PM, Kent wrote:
>
>> Generally, echo=True for log level of INFO is very helpful for log files.
>>
>> But on INSERT/UPDATE of a Binary column (at least with Oracle, BLOB) it
>> sure would be nice to skip the logging of the sometimes massive binary
>> data.
>>
>> Is this possible?
>>
>
> http://docs.sqlalchemy.org/en/latest/changelog/migration_11.
> html#large-parameter-and-row-values-are-now-truncated-in-log
> ging-and-exception-displays
>
> until you can upgrade write a logging output filter...
>
>
>
>
>> Thanks,
>> Kent
>>



[sqlalchemy] suppress echo of INSERT/UPDATE large binary data

2016-08-17 Thread Kent
Generally, echo=True for log level of INFO is very helpful for log files.

But on INSERT/UPDATE of a Binary column (at least with Oracle, BLOB) it 
sure would be nice to skip the logging of the sometimes massive binary data.

Is this possible?

Thanks,
Kent



Re: [sqlalchemy] Help with a custom "seconds_interval()" construct

2016-05-31 Thread Kent Bower
Thanks Mike.

I'm not sure FunctionElement is most appropriate or if Interval() is the
best "type," but for future readers, as a recipe, this seems to work:

from datetime import timedelta

from sqlalchemy import bindparam
from sqlalchemy.ext.compiler import compiles
from sqlalchemy.sql.expression import FunctionElement
from sqlalchemy.types import Interval


class seconds_interval(FunctionElement):
    type = Interval()
    name = 'seconds'

    def __init__(self, *args, **kwargs):
        FunctionElement.__init__(self, *args, **kwargs)
        self.seconds = args[0]


@compiles(seconds_interval)
def compile_seconds_interval(element, compiler, **kw):
    if compiler.dialect.name == 'oracle' and \
            compiler.dialect.server_version_info < (9,):
        return "%s/86400" % compiler.process(element.clauses)
    else:
        # use the native timedelta as a bind param
        td = timedelta(seconds=element.seconds)
        return compiler.process(bindparam('seconds', value=td, unique=True))
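
[Editor's note: as a sanity check on the Oracle 8 branch of the recipe, the seconds/86400 arithmetic is exactly the day-fraction equivalent of the timedelta bind param used everywhere else. A pure-stdlib check of that equivalence:

from datetime import timedelta

# Oracle 8 has no INTERVAL bind support in this setup, so the recipe adds a
# fraction of a day instead: sysdate + seconds/86400 (86400 seconds per day).
def seconds_as_day_fraction(seconds):
    return seconds / 86400.0

assert timedelta(days=seconds_as_day_fraction(15)) == timedelta(seconds=15)
]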



On Sun, May 29, 2016 at 8:25 PM, Mike Bayer <mike...@zzzcomputing.com>
wrote:

>
>
> On 05/28/2016 09:44 AM, Kent wrote:
>
>> I'm interested in being able to use second time intervals on PostgreSQL,
>> Oracle 8 /and /modern Oracle versions, agnostically.
>>
>> The native python timedelta works great for the postgres and cx_Oracle
>> drivers.  However /cx_Oracle connected to Oracle 8 won't support this/
>> so I'm building my own construct to help.
>>
>> We want support for something like this:
>>
>> /current_timestamp() + timedelta(seconds=15)/
>>
>> (Assume "current_timestamp()" works on all these databases -- I've
>> already built that.)
>>
>> *PostgreSQL*, works great with python's timedelta:
>>
>>> DBSession.execute(select([current_timestamp() +
>>>>>
>>>> timedelta(seconds=15)])).scalar()
>> MainThread: 09:15:20,871 INFO  [sqlalchemy.engine.base.Engine] BEGIN
>> (implicit)
>> MainThread: 09:15:20,872 INFO  [sqlalchemy.engine.base.Engine] SELECT
>> localtimestamp + %(param_1)s AS anon_1
>> MainThread: 09:15:20,872 INFO  [sqlalchemy.engine.base.Engine]
>> {'param_1': datetime.timedelta(0, 15)}
>> datetime.datetime(2016, 5, 28, 9, 15, 35, 872999)
>>
>> *Modern Oracle*, works great with python's timedelta:
>>
>>> DBSession.execute(select([current_timestamp() +
>>>>>
>>>> timedelta(seconds=15)])).scalar()
>> MainThread: 09:28:15,009 INFO  [sqlalchemy.engine.base.Engine] BEGIN
>> (implicit)
>> MainThread: 09:28:15,010 INFO  [sqlalchemy.engine.base.Engine] SELECT
>> systimestamp + :param_1 AS anon_1 FROM DUAL
>> MainThread: 09:28:15,010 INFO  [sqlalchemy.engine.base.Engine]
>> {'param_1': datetime.timedelta(0, 15)}
>> datetime.datetime(2016, 5, 28, 9, 28, 30, 11530)
>>
>> *Oracle 8: *no
>>
>> DBSession.execute(select([current_timestamp() +
>>>>>
>>>> timedelta(seconds=15)])).scalar()
>> MainThread: 08:29:37,659 INFO  [sqlalchemy.engine.base.Engine] SELECT
>> sysdate + :param_1 AS anon_1 FROM DUAL
>> MainThread: 08:29:37,659 INFO  [sqlalchemy.engine.base.Engine]
>> {'param_1': datetime.datetime(1970, 1, 1, 0, 0, 15)}
>> Traceback (most recent call last):
>>   File "", line 1, in 
>>   File
>>
>> "/home/train/tg2env/lib/python2.6/site-packages/SQLAlchemy-0.7.607kbdev-py2.6-linux-x86_64.egg/sqlalchemy/orm/scoping.py",
>> line 114, in do
>> return getattr(self.registry(), name)(*args, **kwargs)
>>   File
>>
>> "/home/train/tg2env/lib/python2.6/site-packages/SQLAlchemy-0.7.607kbdev-py2.6-linux-x86_64.egg/sqlalchemy/orm/session.py",
>> line 804, in execute
>> clause, params or {})
>>   File
>>
>> "/home/train/tg2env/lib/python2.6/site-packages/SQLAlchemy-0.7.607kbdev-py2.6-linux-x86_64.egg/sqlalchemy/engine/base.py",
>> line 1450, in execute
>> params)
>>   File
>>
>> "/home/train/tg2env/lib/python2.6/site-packages/SQLAlchemy-0.7.607kbdev-py2.6-linux-x86_64.egg/sqlalchemy/engine/base.py",
>> line 1583, in _execute_clauseelement
>> compiled_sql, distilled_params
>>   File
>>
>> "/home/train/tg2env/lib/python2.6/site-packages/SQLAlchemy-0.7.607kbdev-py2.6-linux-x86_64.egg/sqlalchemy/engine/base.py",
>> line 1697, in _execute_context
>> context)
>>   File
>>
>> "/home/train/tg2env/lib/python2.6/site-packages/SQLAlchemy-0.7.607kbdev-py2.6-linux-x86_64.egg/sqlalchemy/engine/base.py",
>> line 1690, in _execute_context
>> context)
>>   File
>>
>> "/home/train/tg2env/lib/python2.6/site-packages/SQLAlchemy-0.7.607kbdev-py2.6-linux-x86_64.egg/sqlalchemy/engine/default.py",
>> line 335, in do_execute
>> cursor.execute(sta

[sqlalchemy] Help with a custom "seconds_interval()" construct

2016-05-28 Thread Kent
I'm interested in being able to use second time intervals on PostgreSQL, 
Oracle 8 *and *modern Oracle versions, agnostically.

The native python timedelta works great for the postgres and cx_Oracle 
drivers.  However *cx_Oracle connected to Oracle 8 won't support this* so 
I'm building my own construct to help.

We want support for something like this:

*current_timestamp() + timedelta(seconds=15)*

(Assume "current_timestamp()" works on all these databases -- I've already 
built that.)

*PostgreSQL*, works great with python's timedelta:
>>> DBSession.execute(select([current_timestamp() + 
timedelta(seconds=15)])).scalar()
MainThread: 09:15:20,871 INFO  [sqlalchemy.engine.base.Engine] BEGIN 
(implicit)
MainThread: 09:15:20,872 INFO  [sqlalchemy.engine.base.Engine] SELECT 
localtimestamp + %(param_1)s AS anon_1
MainThread: 09:15:20,872 INFO  [sqlalchemy.engine.base.Engine] {'param_1': 
datetime.timedelta(0, 15)}
datetime.datetime(2016, 5, 28, 9, 15, 35, 872999)

*Modern Oracle*, works great with python's timedelta:
>>> DBSession.execute(select([current_timestamp() + 
timedelta(seconds=15)])).scalar()
MainThread: 09:28:15,009 INFO  [sqlalchemy.engine.base.Engine] BEGIN 
(implicit)
MainThread: 09:28:15,010 INFO  [sqlalchemy.engine.base.Engine] SELECT 
systimestamp + :param_1 AS anon_1 FROM DUAL
MainThread: 09:28:15,010 INFO  [sqlalchemy.engine.base.Engine] {'param_1': 
datetime.timedelta(0, 15)}
datetime.datetime(2016, 5, 28, 9, 28, 30, 11530)

*Oracle 8: *no
>>> DBSession.execute(select([current_timestamp() + 
timedelta(seconds=15)])).scalar()
MainThread: 08:29:37,659 INFO  [sqlalchemy.engine.base.Engine] SELECT 
sysdate + :param_1 AS anon_1 FROM DUAL
MainThread: 08:29:37,659 INFO  [sqlalchemy.engine.base.Engine] {'param_1': 
datetime.datetime(1970, 1, 1, 0, 0, 15)}
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/train/tg2env/lib/python2.6/site-packages/SQLAlchemy-0.7.607kbdev-py2.6-linux-x86_64.egg/sqlalchemy/orm/scoping.py", line 114, in do
    return getattr(self.registry(), name)(*args, **kwargs)
  File "/home/train/tg2env/lib/python2.6/site-packages/SQLAlchemy-0.7.607kbdev-py2.6-linux-x86_64.egg/sqlalchemy/orm/session.py", line 804, in execute
    clause, params or {})
  File "/home/train/tg2env/lib/python2.6/site-packages/SQLAlchemy-0.7.607kbdev-py2.6-linux-x86_64.egg/sqlalchemy/engine/base.py", line 1450, in execute
    params)
  File "/home/train/tg2env/lib/python2.6/site-packages/SQLAlchemy-0.7.607kbdev-py2.6-linux-x86_64.egg/sqlalchemy/engine/base.py", line 1583, in _execute_clauseelement
    compiled_sql, distilled_params
  File "/home/train/tg2env/lib/python2.6/site-packages/SQLAlchemy-0.7.607kbdev-py2.6-linux-x86_64.egg/sqlalchemy/engine/base.py", line 1697, in _execute_context
    context)
  File "/home/train/tg2env/lib/python2.6/site-packages/SQLAlchemy-0.7.607kbdev-py2.6-linux-x86_64.egg/sqlalchemy/engine/base.py", line 1690, in _execute_context
    context)
  File "/home/train/tg2env/lib/python2.6/site-packages/SQLAlchemy-0.7.607kbdev-py2.6-linux-x86_64.egg/sqlalchemy/engine/default.py", line 335, in do_execute
    cursor.execute(statement, parameters)
DatabaseError: (DatabaseError) ORA-00975: date + date not allowed
 'SELECT sysdate + :param_1 AS anon_1 FROM DUAL' {'param_1': datetime.datetime(1970, 1, 1, 0, 0, 15)}


I've already constructed a class called "seconds_interval" that works for 
Oracle 8, but I don't know how to fall back to the default behavior when not 
on Oracle 8:

Oracle 8 usage:
>>> DBSession.execute(select([current_timestamp() + seconds_interval(15)])).scalar()
MainThread: 08:37:06,539 INFO  [sqlalchemy.engine.base.Engine] SELECT sysdate + :seconds_1/86400 AS anon_1 FROM DUAL
MainThread: 08:37:06,539 INFO  [sqlalchemy.engine.base.Engine] {'seconds_1': 15}
datetime.datetime(2016, 5, 28, 8, 37, 25)


class seconds_interval(FunctionElement):
    type = Interval()
    name = 'seconds'

    def __init__(self, *args, **kwargs):
        FunctionElement.__init__(self, *args, **kwargs)
        self.seconds = args[0]

@compiles(seconds_interval)
def compile_seconds_interval(element, compiler, **kw):
    if compiler.dialect.name == 'oracle' and \
       compiler.dialect.server_version_info < (9,):
        return "%s/86400" % compiler.process(element.clauses)
    else:
        # use timedelta as bind param
        from datetime import timedelta
        td = timedelta(seconds=element.seconds)
        return ...???

Can anyone help me with the else: above to use the native python timedelta 
as a bind param?  (Or trash it completely if there is a better strategy?)
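For what it's worth, one possible completion of the else: branch (a sketch only; the literal() approach is my assumption, untested against a live Oracle instance) is to wrap the timedelta in literal(), which compiles to an ordinary bind parameter of Interval type:

```python
from datetime import timedelta

from sqlalchemy import Interval, literal
from sqlalchemy.ext.compiler import compiles
from sqlalchemy.sql.functions import FunctionElement


class seconds_interval(FunctionElement):
    type = Interval()
    name = 'seconds'
    inherit_cache = False

    def __init__(self, seconds):
        self.seconds = seconds
        super(seconds_interval, self).__init__(seconds)


@compiles(seconds_interval)
def compile_seconds_interval(element, compiler, **kw):
    version = compiler.dialect.server_version_info
    if compiler.dialect.name == 'oracle' and version and version < (9,):
        # Oracle 8: sysdate arithmetic works in days, so scale seconds down
        return "%s/86400" % compiler.process(element.clauses, **kw)
    # everywhere else: hand the native timedelta to literal(), which
    # renders as a plain bind parameter carrying the timedelta value
    td = timedelta(seconds=element.seconds)
    return compiler.process(literal(td, Interval()), **kw)


# compiling against the default dialect takes the non-Oracle-8 branch;
# str(compiled) is just a bind parameter, and the timedelta rides along
# in compiled.params
compiled = seconds_interval(15).compile()
```

The branch test also guards against server_version_info being None (as it is before any connection is made), which the original version comparison would trip over.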

Much thanks in advance!
Kent



-- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.

Re: [sqlalchemy] Guaranteeing same connection for scoped session

2016-04-14 Thread Kent Bower
Yeah, it seems to me that if you pass a *specific connection* to a
sessionmaker for some (whatever) reason, that sessionmaker shouldn't ever
silently take a different one.

I'll need to work on detecting or sabotaging new connections from a
sessionmaker which was passed a specific connection.  (I know the obvious
question might be: why even use a sessionmaker/scoped session? The answer
is that those parts are all well buried in the framework, along with the
scopedsession.remove() call, so under certain circumstances only, I want
the session to be guaranteed to use just that one connection.)

Any help on how to sabotage a sessionmaker/scoped_session to prevent later
creation of a new connection would be appreciated.  Thanks!
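One possible shape for that sabotage, as a hedged sketch (pinned_sessionmaker and PinnedSession are hypothetical names of mine, not SQLAlchemy API): subclass Session so that acquiring anything other than the pinned connection raises:

```python
from sqlalchemy import create_engine
from sqlalchemy.orm import Session, sessionmaker


def pinned_sessionmaker(connection):
    """Build a sessionmaker whose sessions refuse any other connection."""

    class PinnedSession(Session):
        def connection(self, *args, **kwargs):
            conn = super(PinnedSession, self).connection(*args, **kwargs)
            if conn is not connection:
                raise RuntimeError(
                    "session acquired a connection other than the pinned one")
            return conn

    return sessionmaker(bind=connection, class_=PinnedSession)


engine = create_engine("sqlite://")
pinned = engine.connect()
Session_ = pinned_sessionmaker(pinned)

sess = Session_()
# while the factory keeps handing back the pinned connection, all is well
assert sess.connection() is pinned
```

The same class_= hook should work wrapped in scoped_session(pinned_sessionmaker(conn)), so the guard would survive a remove()/re-instantiate cycle rather than silently running on a fresh connection.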

On Thu, Apr 14, 2016 at 12:06 AM, Jonathan Vanasco 
wrote:

>
>
> On Wednesday, April 13, 2016 at 7:25:16 PM UTC-4, Mike Bayer wrote:
>>
>> Well scopedsession.remove throws away the session, so yeah either don't
>> call that , or set up the connection immediately on the next session.
>
>
> I thought "this is obvious, the session is closed on `remove`", but then
> dug into the docs -- and I can see how this is misleading.
>
> The narrative docs (
> http://docs.sqlalchemy.org/en/latest/orm/session_api.html#session-and-sessionmaker
> )
>
> Session = sessionmaker()
> # bind an individual session to a connection
> sess = Session(bind=connection)
>
> It's easy to miss the importance of "individual"
>
> And then the API makes it seem like a bind(connection) would persist via
> sessionmaker.
>
>
> http://docs.sqlalchemy.org/en/latest/orm/session_api.html#sqlalchemy.orm.session.sessionmaker.__init__
>
>- *bind* – an Engine or other Connectable with which newly created
>  Session objects will be associated.
>
>
> http://docs.sqlalchemy.org/en/latest/orm/session_api.html#sqlalchemy.orm.session.Session
>
>
>- *bind* – An optional Engine or Connection to which this Session should
>  be bound. When specified, all SQL operations performed by this session
>  will execute via this connectable.
>
> Unless one were more familiar, the `remove` behavior wouldn't be
> apparent... and the notion of a particular connection being bound to a
> sessionmaker might seem like a good thing (it's actually not, because you
> would inherently preclude the utility of connection pools, aside from
> other effects)
>



Re: [sqlalchemy] Guaranteeing same connection for scoped session

2016-04-13 Thread Kent Bower
Will the connection.info dict always be new if a new
underlying raw connection has been grabbed? (Such that I can reliably
detect this situation?)



On Wednesday, April 13, 2016, Mike Bayer <clas...@zzzcomputing.com> wrote:

> Well scopedsession.remove throws away the session, so yeah either don't
> call that , or set up the connection immediately on the next session.
>
> On Wednesday, April 13, 2016, Kent Bower <k...@bowermail.net
> <javascript:_e(%7B%7D,'cvml','k...@bowermail.net');>> wrote:
>
>> About a year ago you helped me ensure my scoped session gets the same
>> connection to the database, which might be important.
>>
>> I found out using "bind=connection" doesn't guarantee the session_maker
>> uses that connection if something went wrong with the session and
>> ScopedSession.remove() was called. Is there a way to guarantee this?
>>
>> See attached script that fails on version 1.0.12
>>
>> Is this the intended behavior when sessionmaker has a specific connection
>> as bind?
>>
>>
>>
>> On Mon, Mar 23, 2015 at 12:40 PM, Michael Bayer <mike...@zzzcomputing.com
>> > wrote:
>>
>>>
>>>
>>> Kent <jkentbo...@gmail.com> wrote:
>>>
>>> > In cases where we interact with the database session (a particular
>>> Connection) to, for example, obtain an application lock which is checked
>>> out from database for the lifetime of the database session (not just the
>>> duration of a transaction), it is important that I guarantee future scoped
>>> session instances get the same connection (and, for example, the
>>> pool_recycle or something else has thrown out that connection and grabbed a
>>> new one).
>>> >
>>> > Please advise me where I can best implement this guarantee.  A Session
>>> subclass's connection() method seems it might be the appropriate place, but
>>> let me know if there is a better recipe.
>>>
>>> you’d want to create that Session associated with the Connection
>>> directly:
>>>
>>> my_session = scoped_session(bind=some_connection)
>>>
>>> then of course make sure you .close() it and .close() the connection at
>>> the end of the use of that session.
>>>
>>>
>>>
>>> >
>>> > The Session.connection() method's docs say:
>>> > "If this Session is configured with autocommit=False, either the
>>> Connection corresponding to the current transaction is returned, or if no
>>> transaction is in progress, a new one is begun and the Connection returned
>>> (note that no transactional state is established with the DBAPI until the
>>> first SQL statement is emitted)."
>>> >
>>> > If the session is one registered in my scoped registry, I'd like to
>>> always return the same connection to guarantee I am using the one with the
>>> database-side checked-out application lock.
>>> >
>>> > What's my best option?
>>> >
>>> > Thanks much!
>>> >
>>>
>>

Re: [sqlalchemy] Guaranteeing same connection for scoped session

2016-04-13 Thread Kent Bower
About a year ago you helped me ensure my scoped session gets the same
connection to the database, which might be important.

I found out using "bind=connection" doesn't guarantee the session_maker
uses that connection if something went wrong with the session and
ScopedSession.remove() was called. Is there a way to guarantee this?

See attached script that fails on version 1.0.12

Is this the intended behavior when sessionmaker has a specific connection
as bind?



On Mon, Mar 23, 2015 at 12:40 PM, Michael Bayer <mike...@zzzcomputing.com>
wrote:

>
>
> Kent <jkentbo...@gmail.com> wrote:
>
> > In cases where we interact with the database session (a particular
> Connection) to, for example, obtain an application lock which is checked
> out from database for the lifetime of the database session (not just the
> duration of a transaction), it is important that I guarantee future scoped
> session instances get the same connection (and, for example, the
> pool_recycle or something else has thrown out that connection and grabbed a
> new one).
> >
> > Please advise me where I can best implement this guarantee.  A Session
> subclass's connection() method seems it might be the appropriate place, but
> let me know if there is a better recipe.
>
> you’d want to create that Session associated with the Connection directly:
>
> my_session = scoped_session(bind=some_connection)
>
> then of course make sure you .close() it and .close() the connection at
> the end of the use of that session.
>
>
>
> >
> > The Session.connection() method's docs say:
> > "If this Session is configured with autocommit=False, either the
> Connection corresponding to the current transaction is returned, or if no
> transaction is in progress, a new one is begun and the Connection returned
> (note that no transactional state is established with the DBAPI until the
> first SQL statement is emitted)."
> >
> > If the session is one registered in my scoped registry, I'd like to
> always return the same connection to guarantee I am using the one with the
> database-side checked-out application lock.
> >
> > What's my best option?
> >
> > Thanks much!
> >
>

from sqlalchemy import *
from sqlalchemy.orm import *
from sqlalchemy.exc import OperationalError

eng = create_engine('postgresql://salespylot:salespylot@localhost:5444/sqla', echo=True)
conn=eng.connect()
# bind to specific connection
Session = scoped_session(sessionmaker(bind=conn))

pid = conn.execute("select pg_backend_pid()").scalar()
raw_conn_addr = id(Session.connection().connection.connection)

metadata = MetaData(eng)
rocks_table = Table("rocks", metadata,
    Column("id", Integer, primary_key=True),
)
class Rock(object):
    pass
mapper(Rock, rocks_table)
metadata.create_all()


Session.query(Rock).all()

# See if normally get same connection
Session.remove()
Session.query(Rock).all()

# all is good: we got original connection again:
assert pid == Session.connection().execute("select pg_backend_pid()").scalar()
assert raw_conn_addr == id(Session.connection().connection.connection)

# something drastic happens to conn
aux_conn=eng.connect()
aux_conn.execute(text("select pg_terminate_backend(:pid)"), pid=pid)

try:
    Session.query(Rock).all()
except OperationalError as

Re: [sqlalchemy] Guaranteeing same connection for scoped session

2015-03-24 Thread Kent
Thanks very much Mike.

On Monday, March 23, 2015 at 12:40:46 PM UTC-4, Michael Bayer wrote:



 Kent jkent...@gmail.com wrote: 

  In cases where we interact with the database session (a particular 
 Connection) to, for example, obtain an application lock which is checked 
 out from database for the lifetime of the database session (not just the 
 duration of a transaction), it is important that I guarantee future scoped 
 session instances get the same connection (and, for example, the 
 pool_recycle or something else has thrown out that connection and grabbed a 
 new one). 
  
  Please advise me where I can best implement this guarantee.  A Session 
 subclass's connection() method seems it might be the appropriate place, but 
 let me know if there is a better recipe. 

 you’d want to create that Session associated with the Connection directly: 

 my_session = scoped_session(bind=some_connection) 

 then of course make sure you .close() it and .close() the connection at 
 the end of the use of that session. 



  
  The Session.connection() method's docs say: 
  If this Session is configured with autocommit=False, either the 
 Connection corresponding to the current transaction is returned, or if no 
 transaction is in progress, a new one is begun and the Connection returned 
 (note that no transactional state is established with the DBAPI until the 
 first SQL statement is emitted). 
  
  If the session is one registered in my scoped registry, I'd like to 
 always return the same connection to guarantee I am using the one with the 
 database-side checked-out application lock. 
  
  What's my best option? 
  
  Thanks much! 
  




[sqlalchemy] Guaranteeing same connection for scoped session

2015-03-23 Thread Kent
In cases where we interact with the database session (a particular 
Connection) to, for example, obtain an application lock which is checked 
out from the database for the lifetime of the database session (not just the 
duration of a transaction), it is important that I guarantee future scoped 
session instances get the same connection (and that, for example, 
pool_recycle or something else has not thrown out that connection and 
grabbed a new one).

Please advise me where I can best implement this guarantee.  A Session 
subclass's connection() method seems it might be the appropriate place, but 
let me know if there is a better recipe.

The Session.connection() method's docs say:
"If this Session is configured with autocommit=False, either the Connection 
corresponding to the current transaction is returned, or if no transaction 
is in progress, a new one is begun and the Connection returned (note that 
no transactional state is established with the DBAPI until the first SQL 
statement is emitted)."

If the session is one registered in my scoped registry, I'd like to always 
return the same connection to guarantee I am using the one with the 
database-side checked-out application lock.

What's my best option?

Thanks much!



Re: [sqlalchemy] Database session variables with connection pooling

2015-03-09 Thread Kent
Is it safe, from within either the 'before_execute' or 
'before_cursor_execute' events, to use the same connection to execute a SQL 
statement before the current one?  I assume there is a good chance the 
answer is no, at least for before_cursor_execute.

Why?  I only want to issue the SQL to update the database's session 
variables if needed.  Most connection checkout-checkin life cycles will 
only ever issue SELECT statements and so don't need the database session 
updated for auditing, so I was intending on waiting until I actually know 
it is needed (from within before_cursor_execute) before issuing the 
DBMS_SESSION.SET_CONTEXT(...).  But, once I know that within 
before_cursor_execute, can I (recursively) issue an conn.execute() for that 
statement safely or will it affect the original execute?



On Saturday, March 7, 2015 at 6:38:08 PM UTC-5, Michael Bayer wrote:



 Kent jkent...@gmail.com wrote: 

  I'm implementing database session variables (in Oracle, 
 DBMS_SESSION.SET_CONTEXT(...)), in order to be able to set (from 
 sqlalchemy) and retrieve (from a database trigger) the application userid 
 and URL path during table audit triggers. 
  
  The tricky bit is that if I set the user to 'user1', that remains in the 
 session in the database even when a different sqlalchemy thread grabs that 
 same session from the connection pool.  I want to prevent the wrong 
 information accidentally still being in the session, so I want to be sure 
 to reset it when appropriate and I'm wondering whether checkout from the 
 Pool is the event you would recommend? 
  
  @event.listens_for(engine, 'checkout') 
  def receive_checkout(dbapi_connection, connection_record, 
 connection_proxy): 
  
  If the same database session is recycled from the connection pool, will 
 it have the same connection_record?  I'd prefer to record the fact that 
 I've set the database session's variables on an object (such as 
 connection_record) so that subsequent requests can detect whether it needs 
 to be reset.  Will connection_record correspond to a database session? 


 For this kind of thing you normally reset the state on the “checkin” 
 event. 
 The connection_record does in fact follow around the DBAPI connection, 
 however the .info dictionary is given here as the primary way to track 
 things with a DBAPI connection. .info is available on Connection, the 
 connection record, and the pool wrapper, and it will track the DBAPI 
 connection for its full lifespan, until the connection is closed. So put 
 whatever memoizations you need into the .info dictionary, and then you can 
 pretty much set / reset the state with any of the pool events. 


  Thanks in advance for any advice here. 
  Kent 
  
  
  
  




Re: [sqlalchemy] Database session variables with connection pooling

2015-03-09 Thread Kent
Perfect, thanks much!

On Saturday, March 7, 2015 at 6:38:08 PM UTC-5, Michael Bayer wrote:



 Kent jkent...@gmail.com wrote: 

  I'm implementing database session variables (in Oracle, 
 DBMS_SESSION.SET_CONTEXT(...)), in order to be able to set (from 
 sqlalchemy) and retrieve (from a database trigger) the application userid 
 and URL path during table audit triggers. 
  
  The tricky bit is that if I set the user to 'user1', that remains in the 
 session in the database even when a different sqlalchemy thread grabs that 
 same session from the connection pool.  I want to prevent the wrong 
 information accidentally still being in the session, so I want to be sure 
 to reset it when appropriate and I'm wondering whether checkout from the 
 Pool is the event you would recommend? 
  
  @event.listens_for(engine, 'checkout') 
  def receive_checkout(dbapi_connection, connection_record, 
 connection_proxy): 
  
  If the same database session is recycled from the connection pool, will 
 it have the same connection_record?  I'd prefer to record the fact that 
 I've set the database session's variables on an object (such as 
 connection_record) so that subsequent requests can detect whether it needs 
 to be reset.  Will connection_record correspond to a database session? 


 For this kind of thing you normally reset the state on the “checkin” 
 event. 
 The connection_record does in fact follow around the DBAPI connection, 
 however the .info dictionary is given here as the primary way to track 
 things with a DBAPI connection. .info is available on Connection, the 
 connection record, and the pool wrapper, and it will track the DBAPI 
 connection for its full lifespan, until the connection is closed. So put 
 whatever memoizations you need into the .info dictionary, and then you can 
 pretty much set / reset the state with any of the pool events. 


  Thanks in advance for any advice here. 
  Kent 
  
  
  
  




[sqlalchemy] Database session variables with connection pooling

2015-03-07 Thread Kent
I'm implementing database session variables (in Oracle, 
DBMS_SESSION.SET_CONTEXT(...)), in order to be able to set (from 
sqlalchemy) and retrieve (from a database trigger) the application userid 
and URL path during table audit triggers.

The tricky bit is that if I set the user to 'user1', that remains in the 
session in the database even when a different sqlalchemy thread grabs that 
same session from the connection pool.  I want to prevent the wrong 
information accidentally still being in the session, so I want to be sure 
to reset it when appropriate and I'm wondering whether checkout from the 
Pool is the event you would recommend?

@event.listens_for(engine, 'checkout')
def receive_checkout(dbapi_connection, connection_record, 
connection_proxy):

If the same database session is recycled from the connection pool, will it 
have the same *connection_record*?  I'd prefer to record the fact that I've 
set the database session's variables on an object (such as 
connection_record) so that subsequent requests can detect whether it needs 
to be reset.  Will connection_record correspond to a database session?
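For concreteness, the checkout/checkin shape being asked about can be sketched like this (sqlite stands in for Oracle here, and a plain list stands in for the real DBMS_SESSION.SET_CONTEXT round trip; the .info dictionary carries the per-connection flag):

```python
from sqlalchemy import create_engine, event
from sqlalchemy.pool import QueuePool

engine = create_engine("sqlite://", poolclass=QueuePool)
context_calls = []  # one entry per simulated DBMS_SESSION.SET_CONTEXT


@event.listens_for(engine, "checkout")
def set_session_context(dbapi_conn, connection_record, connection_proxy):
    # .info follows the DBAPI connection for its whole pooled lifespan
    if not connection_record.info.get("context_set"):
        context_calls.append(id(dbapi_conn))  # real code: issue SET_CONTEXT
        connection_record.info["context_set"] = True


@event.listens_for(engine, "checkin")
def reset_session_context(dbapi_conn, connection_record):
    # clear the flag on checkin so the next checkout re-establishes state
    connection_record.info.pop("context_set", None)


engine.connect().close()
engine.connect().close()
# the checkin reset forces the context to be re-issued on the second
# checkout even though the same pooled DBAPI connection is reused
```

Whether to reset on checkin (as here) or merely mark stale and re-set lazily on checkout is a policy choice; the .info dict supports either.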

Thanks in advance for any advice here.
Kent





[sqlalchemy] Oracle use_ansi=False inner join problem on inline views

2015-01-21 Thread Kent
Mike,

When using use_ansi=False for Oracle (8) in conjunction with joinedload-ing 
an inline view property, SQLAlchemy-0.8.7 renders an inner join instead of 
an outer join.  This has been fixed in SQLAlchemy-0.9.0, but, as I'm not 
prepared for the migration yet, I was hoping and failing to find the bug 
ticket and hopefully a patch.  Do you know when/where this was fixed and 
whether the fix would be patch'able in 0.7 or at least 0.8?

The attached script runs on 0.9.0+ but the assertion fails on 0.8.7.

The only difference in SQL output is the outer join (+):

SELECT products.productid AS products_productid, anon_1.productid AS anon_1_productid, anon_1.siteid AS anon_1_siteid, anon_1.qty AS anon_1_qty
FROM products, (SELECT inventory.productid AS productid, inventory.siteid AS siteid, sum(inventory.qty) AS qty
FROM inventory GROUP BY inventory.productid, inventory.siteid) anon_1
WHERE anon_1.productid(+) = products.productid ORDER BY anon_1.siteid

Interestingly, use_ansi=True correctly renders LEFT OUTER JOIN in 0.8.7, 
but it fails to render an outer join with use_ansi=False.

Thanks for your time and exceptional software,
Kent
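The two renderings can be compared without a live database by compiling the same outer join under both settings (a sketch with made-up tables; assumes a SQLAlchemy version that still accepts the Oracle dialect's use_ansi flag):

```python
from sqlalchemy import Column, Integer, MetaData, Table, select
from sqlalchemy.dialects.oracle.base import OracleDialect

md = MetaData()
products = Table("products", md,
                 Column("productid", Integer, primary_key=True))
inventory = Table("inventory", md,
                  Column("inventoryid", Integer, primary_key=True),
                  Column("productid", Integer))

stmt = select(products.c.productid, inventory.c.inventoryid).select_from(
    products.outerjoin(inventory,
                       products.c.productid == inventory.c.productid))

# same statement, two renderings: ANSI join syntax vs. old-style (+)
ansi = str(stmt.compile(dialect=OracleDialect(use_ansi=True)))
legacy = str(stmt.compile(dialect=OracleDialect(use_ansi=False)))
```

With use_ansi=True the compiled string contains LEFT OUTER JOIN; with use_ansi=False the join collapses into the WHERE clause with the (+) operator, which is where the 0.8.7 bug dropped the marker.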


from sqlalchemy import *
from sqlalchemy.orm import *

eng_url = 'oracle://kent:kent@localhost:1521/xe?use_ansi=False'

engine = create_engine(eng_url, echo=True)
metadata = MetaData(engine)
Session = sessionmaker(bind=engine)


products_table = Table("products", metadata,
    Column("productid", Unicode(255), primary_key=True),
)


inventory_table = Table("inventory", metadata,
    Column("inventoryid", Integer, primary_key=True),
    Column("productid", Unicode(255), ForeignKey('products.productid'), nullable=False),
    Column("siteid", Unicode(255), nullable=False),
    Column("qty", Integer, nullable=False),
)


def repr_attrs(obj, *attrs):
    return '%s: ' % obj.__class__.__name__ + \
        ' '.join('{0[%s]}=[{1[%s]}]' % (i, i) for i in range(len(attrs)))\
        .format(attrs, map(obj.__dict__.get, attrs))


class Base(object):
    def __init__(self, session, **attrs):
        self.__dict__.update(attrs)
        session.add(self)


class SiteStockLevel(object):
    def __repr__(self):
        return repr_attrs(self, 'productid', 'siteid', 'qty')


class Product(Base):
    def __repr__(self):
        return repr_attrs(self, 'productid')


class Inventory(Base):
    pass


sitestocklevels_view = select([
    inventory_table.c.productid,
    inventory_table.c.siteid,
    func.sum(inventory_table.c.qty).label('qty')],
    group_by=[inventory_table.c.productid, inventory_table.c.siteid]).alias('sitestocklevels')


mapper(Inventory, inventory_table)


mapper(Product, products_table,
    properties={
        'sitestocklevels': relationship(SiteStockLevel,
            primaryjoin=sitestocklevels_view.c.productid==products_table.c.productid,
            order_by=sitestocklevels_view.c.siteid,
            viewonly=True),
    })


mapper(SiteStockLevel, sitestocklevels_view,
    primary_key=[sitestocklevels_view.c.productid, sitestocklevels_view.c.siteid])


metadata.create_all()
try:
    sess = Session()
    Product(sess, productid=u'SKUA')
    Product(sess, productid=u'SKUB')
    sess.commit()
    Inventory(sess, inventoryid=1, productid=u'SKUA', siteid=u'S1', qty=1)
    Inventory(sess, inventoryid=2, productid=u'SKUA', siteid=u'S1', qty=2)
    Inventory(sess, inventoryid=3, productid=u'SKUA', siteid=u'S1', qty=3)
    Inventory(sess, inventoryid=4, productid=u'SKUA', siteid=u'S2', qty=1)
    sess.commit()

    allproducts = sess.query(Product).options(joinedload(Product.sitestocklevels)).all()

    assert len(allproducts) == 2

finally:
    metadata.drop_all()



Re: [sqlalchemy] Oracle use_ansi=False inner join problem on inline views

2015-01-21 Thread Kent
So, in 0.7, expression.py has a non-public, underscored class _FromGrouping 
instead of the public class FromGrouping.

That should be just fine to use, correct?


On Wednesday, January 21, 2015 at 3:56:12 PM UTC-5, Michael Bayer wrote:

 wow that is awful, how often do I fix a full blown bug, even write tests 
 for it, and don’t put anything in the changelog, no bug report or anything. 

 You can probably patch it to 0.8.  Not sure about 0.7, but if 0.7 doesn’t 
 have FromGrouping there should still be some similar concept that can be 
 tested for here.I’m not maintaining either except for security issues 
 back to 0.8.  there’s a new website section coming soon that will show this 
 stuff. 



 Kent jkent...@gmail.com wrote: 

  Here it is: 
  
  commit 85368d25ed158c85bd19f4a63400884ab1cda26a 
  Author: Mike Bayer m... 
  Date:   Sat Jun 8 18:54:14 2013 -0400 
  
  get nested joins to render on oracle 8 
  
  
  
  Sounds like the right commit notes.  You still maintaining 0.8?  Should 
 that change be patchable in 0.7?   
  
  
  On Wednesday, January 21, 2015 at 1:13:21 PM UTC-5, Michael Bayer wrote: 
  
  
  Kent jkent...@gmail.com wrote: 
  
   Mike, 
   
   When using use_ansi=False for Oracle (8) in conjunction with 
 joinedload-ing an inline view property, SQLAlchemy-0.8.7 renders an inner 
 join instead of an outer join.  This has been fixed in SQLAlchemy-0.9.0, 
 but, as I'm not prepared for the migration yet, I was hoping and failing to 
 find the bug ticket and hopefully a patch.  Do you know when/where this was 
 fixed and whether the fix would be patch'able in 0.7 or at least 0.8? 
   
   The attached script runs on 0.9.0+ but the assertion fails on 0.8.7. 
   
   The only difference in SQL output is the outer join (+): 
   
   SELECT products.productid AS products_productid, anon_1.productid AS 
 anon_1_productid, anon_1.siteid AS anon_1_siteid, anon_1.qty AS anon_1_qty 
   FROM products, (SELECT inventory.productid AS productid, 
 inventory.siteid AS siteid, sum(inventory.qty) AS qty 
   FROM inventory GROUP BY inventory.productid, inventory.siteid) anon_1 
   WHERE anon_1.productid(+) = products.productid ORDER BY anon_1.siteid 
   
   Interestingly, use-ansi=True correctly renders LEFT OUTER JOIN in 
 0.8.7 but it fails to render as an outer join with use-ansi=False. 
   
   Thanks for your time and exceptional software, 
   Kent 
   
  
  
  no specific fixes are logged, however 
 http://docs.sqlalchemy.org/en/rel_0_9/changelog/migration_09.html#many-join-and-left-outer-join-expressions-will-no-longer-be-wrapped-in-select-from-as-anon-1
  
 refers to a very large change in how the ORM decides to join things.   
  That would cause some kinds of joinedloads to render differently , which 
 would impact how (+) comes out as well, but i wouldn’t think it would have 
 the effect that the missing (+) is the only change, it would be more than 
 that. 
  
  So I have no better idea than you, so the method I’d do is just to git 
 bisect (http://git-scm.com/docs/git-bisect) through revisions until you 
 find the fix.  If it’s a big merge revision, I can look into it to find 
 something specific, but if you can get me a rev ID that would be a good 
 start. 
  
  
  


-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at http://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.


Re: [sqlalchemy] Oracle use_ansi=False inner join problem on inline views

2015-01-21 Thread Kent
Here it is:

commit 85368d25ed158c85bd19f4a63400884ab1cda26a
Author: Mike Bayer m...
Date:   Sat Jun 8 18:54:14 2013 -0400

get nested joins to render on oracle 8



Sounds like the right commit notes.  You still maintaining 0.8?  Should 
that change be patchable in 0.7?  


On Wednesday, January 21, 2015 at 1:13:21 PM UTC-5, Michael Bayer wrote:



 Kent jkent...@gmail.com wrote: 

  Mike, 
  
  When using use_ansi=False for Oracle (8) in conjunction with 
 joinedload-ing an inline view property, SQLAlchemy-0.8.7 renders an inner 
 join instead of an outer join.  This has been fixed in SQLAlchemy-0.9.0, 
 but, as I'm not prepared for the migration yet, I was hoping and failing to 
 find the bug ticket and hopefully a patch.  Do you know when/where this was 
 fixed and whether the fix would be patch'able in 0.7 or at least 0.8? 
  
  The attached script runs on 0.9.0+ but the assertion fails on 0.8.7. 
  
  The only difference in SQL output is the outer join (+): 
  
  SELECT products.productid AS products_productid, anon_1.productid AS 
 anon_1_productid, anon_1.siteid AS anon_1_siteid, anon_1.qty AS anon_1_qty 
  FROM products, (SELECT inventory.productid AS productid, 
 inventory.siteid AS siteid, sum(inventory.qty) AS qty 
  FROM inventory GROUP BY inventory.productid, inventory.siteid) anon_1 
  WHERE anon_1.productid(+) = products.productid ORDER BY anon_1.siteid 
  
  Interestingly, use-ansi=True correctly renders LEFT OUTER JOIN in 
 0.8.7 but it fails to render as an outer join with use-ansi=False. 
  
  Thanks for your time and exceptional software, 
  Kent 
  


 no specific fixes are logged, however 
 http://docs.sqlalchemy.org/en/rel_0_9/changelog/migration_09.html#many-join-and-left-outer-join-expressions-will-no-longer-be-wrapped-in-select-from-as-anon-1
  
 refers to a very large change in how the ORM decides to join things.   
  That would cause some kinds of joinedloads to render differently , which 
 would impact how (+) comes out as well, but i wouldn’t think it would have 
 the effect that the missing (+) is the only change, it would be more than 
 that. 

 So I have no better idea than you, so the method I’d do is just to git 
 bisect (http://git-scm.com/docs/git-bisect) through revisions until you 
 find the fix.  If it’s a big merge revision, I can look into it to find 
 something specific, but if you can get me a rev ID that would be a good 
 start. 






Re: [sqlalchemy] Oracle use_ansi=False inner join problem on inline views

2015-01-21 Thread Kent
Hmmm 0.7 is missing expression.FromGrouping... I imagine that is a big 
deal, isn't it, like not really patchable?



On Wednesday, January 21, 2015 at 3:11:29 PM UTC-5, Kent wrote:

 Here it is:

 commit 85368d25ed158c85bd19f4a63400884ab1cda26a
 Author: Mike Bayer m...
 Date:   Sat Jun 8 18:54:14 2013 -0400

 get nested joins to render on oracle 8



 Sounds like the right commit notes.  You still maintaining 0.8?  Should 
 that change be patchable in 0.7?  


 On Wednesday, January 21, 2015 at 1:13:21 PM UTC-5, Michael Bayer wrote:



 Kent jkent...@gmail.com wrote: 

  Mike, 
  
  When using use_ansi=False for Oracle (8) in conjunction with 
 joinedload-ing an inline view property, SQLAlchemy-0.8.7 renders an inner 
 join instead of an outer join.  This has been fixed in SQLAlchemy-0.9.0, 
 but, as I'm not prepared for the migration yet, I was hoping and failing to 
 find the bug ticket and hopefully a patch.  Do you know when/where this was 
 fixed and whether the fix would be patch'able in 0.7 or at least 0.8? 
  
  The attached script runs on 0.9.0+ but the assertion fails on 0.8.7. 
  
  The only difference in SQL output is the outer join (+): 
  
  SELECT products.productid AS products_productid, anon_1.productid AS 
 anon_1_productid, anon_1.siteid AS anon_1_siteid, anon_1.qty AS anon_1_qty 
  FROM products, (SELECT inventory.productid AS productid, 
 inventory.siteid AS siteid, sum(inventory.qty) AS qty 
  FROM inventory GROUP BY inventory.productid, inventory.siteid) anon_1 
  WHERE anon_1.productid(+) = products.productid ORDER BY anon_1.siteid 
  
  Interestingly, use-ansi=True correctly renders LEFT OUTER JOIN in 
 0.8.7 but it fails to render as an outer join with use-ansi=False. 
  
  Thanks for your time and exceptional software, 
  Kent 
  


 no specific fixes are logged, however 
 http://docs.sqlalchemy.org/en/rel_0_9/changelog/migration_09.html#many-join-and-left-outer-join-expressions-will-no-longer-be-wrapped-in-select-from-as-anon-1
  
 refers to a very large change in how the ORM decides to join things.   
  That would cause some kinds of joinedloads to render differently , which 
 would impact how (+) comes out as well, but i wouldn’t think it would have 
 the effect that the missing (+) is the only change, it would be more than 
 that. 

 So I have no better idea than you, so the method I’d do is just to git 
 bisect (http://git-scm.com/docs/git-bisect) through revisions until you 
 find the fix.  If it’s a big merge revision, I can look into it to find 
 something specific, but if you can get me a rev ID that would be a good 
 start. 






Re: [sqlalchemy] DISTINCT with LIMIT problem

2013-05-31 Thread Kent
Thanks very much!  I got it to work apparently fine using from_self(). 
I didn't seem to need anything special for eager loads to continue to 
function... were you only expecting I'd have troubles with eager loads 
if I used subquery()?


On 5/30/2013 6:29 PM, Michael Bayer wrote:


On May 30, 2013, at 6:06 PM, Kent jkentbo...@gmail.com wrote:


Thank you, I'll try that, but quick concern:  I specifically skipped trying to use 
.subquery() because the docs say "Eager JOIN generation within the query is 
disabled."

Doesn't that mean I won't get my joinedload() results from the inner query?

Or does that refer to the outer query having eager join disabled?



if you want to eager load also from that subquery, you need to stick it into an 
aliased():


MySubqClass = aliased(MyClass, subq)

query(x, MySubqClass).options(joinedload(MySubqClass.foobar))
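The `aliased(MyClass, subq)` recipe above can be sketched end to end. Everything in this sketch is hypothetical (the model names and the `foobar` relationship are invented for illustration), and it is written against a modern SQLAlchemy API (1.4+) rather than the 0.x API discussed in this thread:

```python
from sqlalchemy import Column, ForeignKey, Integer, String, create_engine
from sqlalchemy.orm import (aliased, declarative_base, joinedload,
                            relationship, sessionmaker)

Base = declarative_base()

class Foobar(Base):
    __tablename__ = "foobar"
    id = Column(Integer, primary_key=True)
    myclass_id = Column(Integer, ForeignKey("myclass.id"))

class MyClass(Base):
    __tablename__ = "myclass"
    id = Column(Integer, primary_key=True)
    name = Column(String)
    foobar = relationship(Foobar)

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()
session.add(MyClass(id=1, name="a", foobar=[Foobar(id=1)]))
session.commit()

# Wrap an arbitrary SELECT in a subquery, then map the entity onto it with
# aliased(); eager-load options are then specified against the aliased entity.
subq = session.query(MyClass).filter(MyClass.name == "a").subquery()
MySubqClass = aliased(MyClass, subq)
rows = (session.query(MySubqClass)
        .options(joinedload(MySubqClass.foobar))
        .all())
print(len(rows), len(rows[0].foobar))  # 1 1
```

Because the entity is routed through aliased(), the joined eager load is rendered against the subquery rather than being discarded with it.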







On 5/30/2013 5:54 PM, Michael Bayer wrote:


On May 30, 2013, at 5:19 PM, Kent jkentbo...@gmail.com
mailto:jkentbo...@gmail.com wrote:



Solution A:

Group by all columns (yielding the same effect as distinct), but
which makes the window analytical function process *after* the group
by and yields the correct count (17 instead of 72):


are all those columns indexed?  even if they are, crappy query...



OR Solution B:

Put the count(*) over () in an outer select, like this:

select count(*) over () as recordcount, anon.*
from (
  select distinct tablea.colx, tableb.coly
  from tablea, tableb
) as anon
limit 100


this is very much how SQLAlchemy wants you to do it.



Either solution yields the correct answer I believe, but I'm having
difficulty translating the SQL that I know will work into sqlalchemy land.


For Solution B, I don't know how to wrap my query in an outer select
(similar to the LIMIT implementation for Oracle) in a way that will
still allow sqlalchemy to extract rows into instances:


from_self() can do this (probably use add_column() for the window
function), or subquery() should work very well.   subq = q.subquery(); q
= query(func.count('*').over().label(..), subq); .

send me a quick example if that's not working and I'll work out the query.









Re: [sqlalchemy] DISTINCT with LIMIT problem

2013-05-31 Thread Kent
I allow the user to join with other tables for the purpose of filtering 
(even though the joined tables won't be selected).  Cartesian is 
probably the wrong term for the effect, but in the end, I get duplicate 
rows.  I could get rid of the need for distinct by extensively using 
EXISTS clauses instead of joins; this is true.


But when several tables are chained to together with joins, I expect 
using EXISTS to become less manageable and to perform poorer.  (I could 
be wrong on both accounts.)


For example, our interface may allow the query of Employee records.  But 
the user might join with the EmailAddress table to strictly filter results.


Employee records:

id   name
===  =======
1    kent
2    charlie

EmailAddress records:

empid   address
=====   ================
1       k...@mymail.goo
1       k...@mymail.goo
1       k...@gmail.de
2       char...@gmail.de

session.query(Employee).join(EmailAddress).filter(EmailAddress.address.contains('@'))

Remember, we are only selecting emp.id, emp.name (but joining with 
another table).


So without DISTINCT:

id   name
===  =======
1    kent
1    kent
1    kent
2    charlie

With DISTINCT:

id   name
===  =======
1    kent
2    charlie


Like I say, using EXISTS would remove the need for DISTINCT, but I 
haven't gone down that path...
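The duplicate-row effect described above, and the EXISTS alternative, can be reproduced with plain SQL; this sketch uses stdlib sqlite3 standing in for the real database, with made-up addresses:

```python
import sqlite3

# Toy data mirroring the Employee / EmailAddress example above.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE employee (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE email_address (empid INTEGER, address TEXT);
    INSERT INTO employee VALUES (1, 'kent'), (2, 'charlie');
    INSERT INTO email_address VALUES
        (1, 'kent@mymail.goo'), (1, 'kent.b@mymail.goo'),
        (1, 'kent@gmail.de'), (2, 'charlie@gmail.de');
""")

base = """
    FROM employee e JOIN email_address a ON a.empid = e.id
    WHERE a.address LIKE '%@%'
"""
# Join used only for filtering: one output row per matching address.
plain = conn.execute("SELECT e.id, e.name " + base).fetchall()
# DISTINCT collapses the duplicates after the fact.
distinct = conn.execute("SELECT DISTINCT e.id, e.name " + base).fetchall()
# EXISTS avoids producing the duplicates in the first place.
exists = conn.execute("""
    SELECT e.id, e.name FROM employee e
    WHERE EXISTS (SELECT 1 FROM email_address a
                  WHERE a.empid = e.id AND a.address LIKE '%@%')
""").fetchall()

print(len(plain), len(distinct), len(exists))  # 4 2 2
```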



On 5/31/2013 8:41 AM, Charlie Clark wrote:

On 30.05.2013 at 23:19, Kent jkentbo...@gmail.com wrote:


For example, a query may look like this:



select distinct
  count(*) over () as recordcount, tablea.colx, tableb.coly
from tablea, tableb
where 
limit 100



This doesn't *quite* work because the analytical window function count(*)
over() is applied *before* the distinct, so the count returns the wrong
number (a Cartesian effect, returning 72 instead of 17, in this example).


Why are you generating Cartesian products? DISTINCT is designed to work
on denormalised result sets, ie. those which can contain duplicates.
Can't you avoid this with a join between your tables?

Charlie






[sqlalchemy] DISTINCT with LIMIT problem

2013-05-30 Thread Kent
We use func.count().over() in order to help support result pagination.   
When attempting to limit the result set, I have found that if other tables 
are being joined (for the where clause, but not selected), then I need to 
add DISTINCT to the query or else the Cartesian result of my query messes 
up LIMIT.  (There are, say, 72 rows returned, where this only represents 17 
distinct records, for example.)

For example, a query may look like this:

select distinct 
  count(*) over () as recordcount, tablea.colx, tableb.coly
from tablea, tableb
where 
limit 100

This doesn't *quite* work because the analytical window function count(*) 
over() is applied *before* the distinct, so the count returns the wrong 
number (a Cartesian effect, returning 72 instead of 17, in this example).  

I have two potential solutions: 

Solution A:

Group by all columns (yielding the same effect as distinct), but which 
makes the window analytical function process *after* the group by and 
yields the correct count (17 instead of 72):

select count(*) over () as recordcount, tablea.colx, tableb.coly
from tablea, tableb
where ...
group by tablea.colx, tableb.coly *[all columns]*
limit 100

OR Solution B:

Put the count(*) over () in an outer select, like this:

select count(*) over () as recordcount, anon.* 
from (
  select distinct tablea.colx, tableb.coly
  from tablea, tableb
) as anon
limit 100
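The difference between the naive form and Solution B can be checked directly with stdlib sqlite3 (window functions require SQLite 3.25+; the table contents here are invented):

```python
import sqlite3  # window functions require SQLite 3.25+

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE tablea (colx INTEGER);
    CREATE TABLE tableb (coly INTEGER);
    INSERT INTO tablea VALUES (1), (1), (2);
    INSERT INTO tableb VALUES (10), (10);
""")

# Naive form: count(*) OVER () is computed before DISTINCT collapses rows,
# so every surviving row reports the raw cross-join count.
wrong = conn.execute("""
    SELECT DISTINCT count(*) OVER () AS recordcount, colx, coly
    FROM tablea, tableb
""").fetchall()

# Solution B: DISTINCT happens in the subquery, the window function outside.
right = conn.execute("""
    SELECT count(*) OVER () AS recordcount, anon.*
    FROM (SELECT DISTINCT colx, coly FROM tablea, tableb) AS anon
    LIMIT 100
""").fetchall()

print(wrong[0][0], right[0][0])  # 6 vs 2
```

With 3 × 2 cross-joined rows collapsing to 2 distinct pairs, the naive query reports a recordcount of 6 while the wrapped form correctly reports 2.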

Either solution yields the correct answer I believe, but I'm having 
difficulty translating the SQL that I know will work into sqlalchemy land.

For Solution A, in the case of wanting to group by, I don't know how to get 
the full list of all selected columns to add to the group_by in such a way 
that even joinedload() will be included in the group by:

q = Session.query(class).join(joins).filter(...).options(joinedload(...))

q = q.group_by(* ??? How to tell sqlalchemy to group by all selected 
columns, even those which will be join loaded ???* )  

q = q.add_column(func.count().over().label('recordcount'))

For Solution B, I don't know how to wrap my query in an outer select 
(similar to the LIMIT implementation for Oracle) in a way that will still 
allow sqlalchemy to extract rows into instances:

This renders the correct SQL, I think:
qry = Session.query(qry.with_labels().statement, 
func.count().over().label('recordcount'))  

But I'm using SQL statement here so sqlalchemy won't translate result rows 
into object instances.

Can you point me in the right direction for one of these 2 solutions, 
please?

Many thanks,
Kent





Re: [sqlalchemy] DISTINCT with LIMIT problem

2013-05-30 Thread Kent
Thank you, I'll try that, but quick concern:  I specifically skipped 
trying to use .subquery() because the docs say "Eager JOIN generation 
within the query is disabled."


Doesn't that mean I won't get my joinedload() results from the inner query?

Or does that refer to the outer query having eager join disabled?


On 5/30/2013 5:54 PM, Michael Bayer wrote:


On May 30, 2013, at 5:19 PM, Kent jkentbo...@gmail.com
mailto:jkentbo...@gmail.com wrote:



Solution A:

Group by all columns (yielding the same effect as distinct), but
which makes the window analytical function process *after* the group
by and yields the correct count (17 instead of 72):


are all those columns indexed?  even if they are, crappy query...



OR Solution B:

Put the count(*) over () in an outer select, like this:

select count(*) over () as recordcount, anon.*
from (
  select distinct tablea.colx, tableb.coly
  from tablea, tableb
) as anon
limit 100


this is very much how SQLAlchemy wants you to do it.



Either solution yields the correct answer I believe, but I'm having
difficulty translating the SQL that I know will work into sqlalchemy land.


For Solution B, I don't know how to wrap my query in an outer select
(similar to the LIMIT implementation for Oracle) in a way that will
still allow sqlalchemy to extract rows into instances:


from_self() can do this (probably use add_column() for the window
function), or subquery() should work very well.   subq = q.subquery(); q
= query(func.count('*').over().label(..), subq); .

send me a quick example if that's not working and I'll work out the query.





[sqlalchemy] Undefer Hybrid Attributes

2013-03-07 Thread Kent
I notice that Hybrid Attributes don't show up as mapper properties (since 
they are class wide instead of mapper specific, I suppose).  I couldn't 
find documentation on whether I can undefer these?  Or can I create a 
synonym or column_property from a hybrid attribute in the mapper?





[sqlalchemy] Re: Undefer Hybrid Attributes

2013-03-07 Thread Kent
I suppose what I'm really after is a column_property (for class level) and 
plain descriptor (for instance level), which is exactly what Hybrid 
attributes are meant to be, but I wanted them to be part of the mapper and 
undeferred in some cases.

On Thursday, March 7, 2013 11:36:37 AM UTC-5, Kent wrote:

 I notice that Hybrid Attributes don't show up as mapper properties (since 
 they are class wide instead of mapper specific, I suppose).  I couldn't 
 find documentation on whether I can undefer these?  Or can I create a 
 synonym or column_property from a hybrid attribute in the mapper?






Re: [sqlalchemy] Undefer Hybrid Attributes

2013-03-07 Thread Kent Bower
That makes sense,
Thanks,
Kent

On Mar 7, 2013, at 12:09 PM, Michael Bayer mike...@zzzcomputing.com wrote:

 
  The hybrid attribute is a Python function that is invoked when it's called.  So 
  it doesn't make sense for it to be a column property since there is no 
  attribute to be populated.  Undeferred also doesn't make any sense because 
  the hybrid already calls a local in-Python function when accessed at the 
  instance level, not a database call.
  
  A traditional column_property() can be deferred or undeferred, and when 
  called at the instance level will emit SQL to the database.
  
  If you have an attribute that should only populate via SQL, then you need to 
  just use a column_property().
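The contrast can be sketched with a toy model. The `Rect` class and its columns are hypothetical, and the sketch assumes a modern SQLAlchemy API (1.4+), where `column_property(..., deferred=True)` gives the deferrable, mapper-level attribute while the hybrid stays a plain Python computation:

```python
from sqlalchemy import Column, Integer, create_engine
from sqlalchemy.ext.hybrid import hybrid_property
from sqlalchemy.orm import column_property, declarative_base, sessionmaker

Base = declarative_base()

class Rect(Base):
    __tablename__ = "rect"
    id = Column(Integer, primary_key=True)
    w = Column(Integer)
    h = Column(Integer)

    # Hybrid: plain Python at the instance level, a SQL expression at the
    # class level; there is no column to defer or undefer.
    @hybrid_property
    def area(self):
        return self.w * self.h

    # column_property: part of the mapper, loaded by SELECT, deferrable.
    area_db = column_property(w * h, deferred=True)

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()
session.add(Rect(id=1, w=3, h=4))
session.commit()

r = session.query(Rect).filter(Rect.area == 12).one()  # hybrid as SQL
print(r.area)     # computed in Python on access, no SQL emitted
print(r.area_db)  # deferred column_property, loads via SQL on access
```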
 
 
 
 On Mar 7, 2013, at 11:42 AM, Kent jkentbo...@gmail.com wrote:
 
 I suppose what I'm really after is a column_property (for class level) and 
 plain descriptor (for instance level), which is exactly what Hybrid 
 attributes are meant to be, but I wanted them to be part of the mapper and 
 undeferred in some cases.
 
 On Thursday, March 7, 2013 11:36:37 AM UTC-5, Kent wrote:
 
 I notice that Hybrid Attributes don't show up as mapper properties (since 
 they are class wide instead of mapper specific, I suppose).  I couldn't 
 find documentation on whether I can undefer these?  Or can I create a 
 synonym or column_property from a hybrid attribute in the mapper?
 




[sqlalchemy] query of existing object won't refresh values (even when FOR UPDATE)

2013-03-05 Thread Kent
By design, when a query() fetches an existing object, it doesn't refresh 
the values unless populate_existing() is included with the query.  The 
documentation for populate_existing() states it isn't meant for 
general-purpose use.

Occasionally, however, objects need to be selected FOR UPDATE, 
with_lockmode('update'), to guarantee against timing problems with 
concurrent database users, particularly when the record fetched is used as 
a base for the update.  For example, if I need to update a record's 
quantity field by a delta of +5, it is extremely important that I have the 
most recent Object.quantity datum; truly, that was the reason I used 
with_lockmode('update') in the first place.

I get uneasy to think that the user needs to remember to invoke 
populate_existing() when selecting a record FOR UPDATE to guard against the 
possibility that there is a stale version of the instance in the session.

I will likely add that to our framework's Query subclass, but thought you 
might consider the implications here. Generally, sqla is extremely 
optimistic regarding locks, but in the event when the user is specifying 
with_lockmode('update'), we've left the realm of optimistic locking and 
entered pessimistic, so it seems reasonable to consider automatically 
enforcing populate_existing() when with_lockmode('update') is used?  

Something to consider; I'd be interested in your thoughts.
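The stale-read behavior being described, and the populate_existing() fix, can be shown in a few lines. The `Item` model is hypothetical; modern SQLAlchemy spells the lock as .with_for_update() rather than with_lockmode('update'), noted only in a comment here since SQLite does not honor row locks:

```python
from sqlalchemy import Column, Integer, create_engine, text
from sqlalchemy.orm import declarative_base, sessionmaker

Base = declarative_base()

class Item(Base):
    __tablename__ = "item"
    id = Column(Integer, primary_key=True)
    quantity = Column(Integer)

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()
session.add(Item(id=1, quantity=5))
session.commit()

item = session.query(Item).filter_by(id=1).one()   # loads quantity == 5
# Another writer changes the row behind the identity map's back:
session.execute(text("UPDATE item SET quantity = 8 WHERE id = 1"))

stale = session.query(Item).filter_by(id=1).one()
assert stale is item      # same identity-map object...
print(stale.quantity)     # ...still 5: the re-SELECT did not repopulate it

# populate_existing() forces the refresh; pair it with .with_for_update()
# on a backend that honors row locks (SQLite does not).
fresh = session.query(Item).populate_existing().filter_by(id=1).one()
print(fresh.quantity)     # now reads 8, refreshed from the row
```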





[sqlalchemy] Re: query of existing object won't refresh values (even when FOR UPDATE)

2013-03-05 Thread Kent
I imagine this gets ugly when autoflush is disabled... perhaps that is why 
it requires populate_existing()?

On Tuesday, March 5, 2013 10:00:36 AM UTC-5, Kent wrote:

 By design, when a query() fetches an existing object, it doesn't refresh 
 the values unless populate_existing() is included with the query.  The 
 documentation for populate_existing() states it isn't meant for general 
 purpose.

 Occasionally, however, objects need to be selected FOR UPDATE, 
 with_lockmode('update'), to guarantee against timing problems with 
 concurrent database users, particularly when the record fetched is used as 
 a base for the update.  For example, if I need to update a record's 
 quantity field by a delta of +5, it is extremely important that I have the 
 most recent Object.quantity datum; truly, that was the reason I used 
 with_lockmode('update') in the first place.

 I get uneasy to think that the user needs to remember to invoke 
 populate_existing() when selecting a record FOR UPDATE to guard against the 
 possibility that there is a stale version of the instance in the session.

 I will likely add that to our framework's Query subclass, but thought you 
 might consider the implications here. Generally, sqla is extremely 
 optimistic regarding locks, but in the event when the user is specifying 
 with_lockmode('update'), we've left the realm of optimistic locking and 
 entered pessimistic, so it seems reasonable to consider automatically 
 enforcing populate_existing() when with_lockmode('update') is used?  

 Something to consider; I'd be interested in your thoughts.






Re: [sqlalchemy] query of existing object won't refresh values (even when FOR UPDATE)

2013-03-05 Thread Kent
I had forgotten the danger of populate_existing() also because we make 
it automatically issue a flush() for these purposes.  Documentation note 
sounds good.


On 3/5/2013 10:45 AM, Michael Bayer wrote:

populate_existing() blows away any pending changes on the object so
turning it on by default would be a surprise in a lot of cases.

typically if someone is working with FOR UPDATE they're already
programming a very specific section in a very careful manner, it's not
something that's done casually.  I would think that it would be used to
select a row right at the start of a transaction, that is definitely not
already in the Session.

in my own experience, any time I've actually tried to work with
pessimistic locking my application ends up deadlocking at 2 AM, so I
can't say I have the background to really say how this method should be
used.  At this point, a documentation note is certainly something doable
(because I would hope very much that anyone using this method has read
the documentation carefully).




On Mar 5, 2013, at 10:00 AM, Kent jkentbo...@gmail.com
mailto:jkentbo...@gmail.com wrote:


By design, when a query() fetches an existing object, it doesn't
refresh the values unless populate_existing() is included with the
query.  The documentation for populate_existing() states it isn't
meant for general purpose.

Occasionally, however, objects need to be selected FOR UPDATE,
with_lockmode('update'), to guarantee against timing problems with
concurrent database users, particularly when the record fetched is
used as a base for the update.  For example, if I need to update a
record's quantity field by a delta of +5, it is extremely important
that I have the most recent Object.quantity datum; truly, that was the
reason I used with_lockmode('update') in the first place.

I get uneasy to think that the user needs to remember to invoke
populate_existing() when selecting a record FOR UPDATE to guard
against the possibility that there is a stale version of the instance
in the session.

I will likely add that to our framework's Query subclass, but thought
you might consider the implications here. Generally, sqla is extremely
optimistic regarding locks, but in the event when the user is
specifying with_lockmode('update'), we've left the realm of optimistic
locking and entered pessimistic, so it seems reasonable to consider
automatically enforcing populate_existing() when
with_lockmode('update') is used?

Something to consider; I'd be interested in your thoughts.

--
You received this message because you are subscribed to the Google
Groups sqlalchemy group.
To unsubscribe from this group and stop receiving emails from it, send
an email to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at http://groups.google.com/group/sqlalchemy?hl=en.
For more options, visit https://groups.google.com/groups/opt_out.












[sqlalchemy] orm_exc.NoResultFound safe with autoflush?

2013-02-08 Thread Kent
We often use this pattern:
 
try:
  session.query().one()
except orm_exc.NoResultFound:
  gracefully deal with it
 
If the query() execution causes an autoflush, I just want to make sure that 
an autoflush will never raise orm_exc.NoResultFound, or we could be 
catching the wrong error.  Were that the case, to be safe, we'd always need:
 
session.flush()
try:
  session.query().one()
except orm_exc.NoResultFound:
  gracefully deal with it
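The defensive shape of that second pattern can be sketched in plain Python (with hypothetical stand-ins for session.flush() and Query.one()): keeping the flush outside the try block guarantees the except clause only ever catches the lookup failure it was written for.

```python
class NoResultFound(Exception):
    """Stand-in for sqlalchemy.orm.exc.NoResultFound."""

def one(rows):
    # stand-in for Query.one(): exactly one row or raise
    if len(rows) != 1:
        raise NoResultFound("expected exactly one row, got %d" % len(rows))
    return rows[0]

def flush():
    pass  # stand-in for session.flush(); any error it raises propagates

flush()            # performed before the try, so its errors are never masked
try:
    row = one([])
except NoResultFound:
    row = None     # gracefully deal with it
print(row)  # None
```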

 





[sqlalchemy] RE: 'Hand Coded Applications with SQLAlchemy' video

2012-11-30 Thread Kent Tenney
Howdy,

I'm a Python programmer setting out to put a bunch of stuff into
a Postgres db, studying the video for direction.

I wonder if any of the practices described have changed
substantially since it was recorded.

Just checked and found it's from PyCon 2012, so probably not,
though SA seems to evolve quickly; I'm using trunk.

Thanks,
Kent




Re: [sqlalchemy] changes flushed for expunged relationships

2012-10-18 Thread Kent

Thank you for the clarifications.

On 10/18/2012 11:43 AM, Michael Bayer wrote:

On Oct 17, 2012, at 9:55 PM, Kent wrote:


The attached script fails with sqlalchemy.exc.InvalidRequestError: Instance 'Bug at 
0x1e6f3d10' has been deleted.  Use the make_transient() function to send this object back 
to the transient state.

While this example is somewhat convoluted, I have a few questions about 
sqlalchemy behavior here:

1) At the session.flush(), even though the Rock and the bugs relationship have 
been expunged, the pending delete still is issued to the database.  Would you 
expect/intend sqlalchemy to delete even after the expunge()?

no, because the Rock you have merged has established that the Bug is no longer 
associated with it.   You expunge the Rock, there's no Bug attached to it to be 
expunged.


2) After the flush(), shouldn't the history of the 'bugs' relationship have 
been updated to reflect the statement issued to the database?  (See print 
statement)

yes, because merged is not in that Session anymore.The flush() is what 
resets the history.  Clearly it's not going to go out to find objects that 
aren't in that Session.


3) The InvalidRequestError is only raised if the 'bugs' relationship has a 
backref, otherwise it isn't raised.  Any idea why?

removing the Bug.rock path means that when you expunge merged, there is 
nothing left in the flush process to handle the orphan cascade you're looking for here.   
The DELETE does not occur so the Bug that's present on merged can go right back in.


4) Don't hate me for asking: is there a workaround?  I'm trying to understand 
this scenario since, in a rare case, it presents itself.

The workaround is don't use expunge().This is not a method I ever use for 
anything, actually. And especially, if you manipulate the state within the 
Session, then expunge() random segments of that state, you can hardly expect 
flush() to have a clear idea of what you intend.






Re: [sqlalchemy] changes flushed for expunged relationships

2012-10-18 Thread Kent
On Thursday, October 18, 2012 11:43:50 AM UTC-4, Michael Bayer wrote:


 On Oct 17, 2012, at 9:55 PM, Kent wrote: 

  The attached script fails with sqlalchemy.exc.InvalidRequestError: 
 Instance 'Bug at 0x1e6f3d10' has been deleted.  Use the make_transient() 
 function to send this object back to the transient state. 
  
  While this example is somewhat convoluted, I have a few questions about 
 sqlalchemy behavior here: 
  
  1) At the session.flush(), even though the Rock and the bugs 
 relationship have been expunged, the pending delete still is issued to the 
 database.  Would you expect/intend sqlalchemy to delete even after the 
 expunge()? 

 no, because the Rock you have merged has established that the Bug is no 
 longer associated with it.   You expunge the Rock, there's no Bug attached 
 to it to be expunged. 


I am still slightly unclear on this: since sqlalchemy *does* delete the 
Bug, I assume it is marked for deletion in some way when the 'bugs' 
relationship is merged to the empty list [].  Is that accurate?  Please 
help me understand why it does not show up in session.deleted, which is 
IdentitySet([])?




[sqlalchemy] changes flushed for expunged relationships

2012-10-17 Thread Kent
The attached script fails with sqlalchemy.exc.InvalidRequestError: 
Instance 'Bug at 0x1e6f3d10' has been deleted.  Use the make_transient() 
function to send this object back to the transient state.

While this example is somewhat convoluted, I have a few questions about 
sqlalchemy behavior here:

1) At the session.flush(), even though the Rock and the bugs relationship 
have been expunged, the pending delete still is issued to the database.  
Would you expect/intend sqlalchemy to delete even after the expunge()?

2) After the flush(), shouldn't the history of the 'bugs' relationship have 
been updated to reflect the statement issued to the database?  (See print 
statement)

3) The InvalidRequestError is only raised if the 'bugs' relationship has a 
backref, otherwise it isn't raised.  Any idea why?

4) Don't hate me for asking: is there a workaround?  I'm trying to 
understand this scenario since, in a rare case, it presents itself.

Thanks very much!
Kent


from sqlalchemy import *
from sqlalchemy.orm import *
from sqlalchemy.orm import attributes

engine = create_engine('sqlite:///', echo=True)
metadata = MetaData(engine)
Session = sessionmaker(bind=engine)

rocks_table = Table('rocks', metadata,
    Column('id', Integer, primary_key=True),
)

bugs_table = Table('bugs', metadata,
    Column('id', Integer, primary_key=True),
    Column('rockid', Integer, ForeignKey('rocks.id')),
)

class Object(object):
def __init__(self, **attrs):
self.__dict__.update(attrs)

class Rock(Object):
def __repr__(self):
return 'Rock: id=[%s]' % self.__dict__.get('id')

class Bug(Object):
def __repr__(self):
return 'Bug: id=[%s]' % self.__dict__.get('id')

mapper(Rock, rocks_table,
properties={'bugs': relationship(Bug,
cascade='all,delete-orphan',
backref=backref('rock',cascade='refresh-expire,expunge'))
})

mapper(Bug, bugs_table)

metadata.create_all()
try:
session = Session()
r = Rock(id=1)
r.bugs=[Bug(id=1)]
session.add(r)
session.commit()

session = Session()
r = Rock(id=1)
r.bugs=[]
merged = session.merge(r)
session.expunge(merged)
# if merged is now detached, should flush() still delete Bug?
session.flush()
# should history still have deleted Bug?
print "\n\nadd: %r\nunchanged: %r\ndelete: %r\n" % attributes.get_history(merged, 'bugs')

# this only fails if the backref 'rock' is present in relationship
session.add(merged)

finally:
metadata.drop_all()


Re: [sqlalchemy] live access to postgis database to use in ExtJS, OpenLayers, etc

2012-10-03 Thread Kent Tenney
It sounds like you are trying to do at least 6 quite complicated
things all at once, without really understanding any of them. This
will not be easy.

Sigh. The story of my life in one sentence.

On Tue, Oct 2, 2012 at 8:43 AM, Simon King si...@simonking.org.uk wrote:
 On Mon, Oct 1, 2012 at 11:38 PM, Gery . gameji...@hotmail.com wrote:

 thanks but I want to use that live access to search at first through
 ExtJS/GeoExtJS/OpenLayers and through them there is only a url available
 (protocol HTTP), I also need to get the data as GeoJSON, so I think
 GeoAlchemy might not be the right solution, I think. If I'm wrong, please
 I'd love some points about it, thanks.


 It sounds like you are trying to do at least 6 quite complicated
 things all at once, without really understanding any of them. This
 will not be easy.

 1. The client side of your application is presumably written in
 Javascript and HTML, using javascript libraries such as ExtJS and
 OpenLayers. You need to fully understand how these work.

 2. The application will then make HTTP requests to a web server. You
 need to understand at least the basics of HTTP.

 3. The web server might be a single python script, or it could be
 something running behind Apache. You need to understand your web
 server.

 4. The server side of your application might be using any of a number
 of libraries to connect to the web server (such as
 Django/Pyramid/Flask/cgi/mod_wsgi etc.). You need to understand
 whatever mechanism your application is using to speak HTTP.

 5. Your application can use SQLAlchemy and GeoAlchemy to retrieve data
 from postgis into Python data structures. You will need to understand
 postgis, SQLAlchemy, GeoAlchemy and Python.

 6. Your application can then convert those Python data structures into
 GeoJSON. You will need to understand GeoJSON.

 The SQLAlchemy mailing list can help you with exactly one part of this
 (step 5). SQLAlchemy (and GeoAlchemy) is perfectly capable of querying
 multiple tables and retrieving results. But how you accept the HTTP
 request, and how you pass the results back, are completely outside the
 scope of this list and I'm afraid you are unlikely to find much help
 here with it.

 Sorry I can't be more help,

 Simon






[sqlalchemy] safe usage within before_update/before_insert events

2012-09-12 Thread Kent
You've mentioned multiple times (to me and others) that some operations, 
such as reaching across relationships or loading relationships from within 
a before_update Mapper event, are not safe.


   - I understand this is safe from within Session event before_flush(), 
   correct?
   - We mentioned at some point adding a Mapper level before_flush() 
   event.  To avoid duplicate work, has that been done?


   - I presume that other queries, particularly those with 
   populate_existing() are also unsafe from within before_update?  Are such 
   safe from before_flush()?





[sqlalchemy] Apparently redundant subqueryloads with single table inheritance

2012-06-05 Thread Kent
I am subquery loading some related properties for a polymorphic inheritance 
use case similar to the script attached.  SQLA seems to be issuing several 
extra queries that I didn't expect and don't think it needs.  In the 
attached example, I expected a total of 4 queries issued for the 
session.query().get(), but there are 11 queries instead, most of them 
redundant.

Any ideas?

Thanks,
Kent


# I expected 4 queries total issued for the get() query below, but I get 11 instead.

from sqlalchemy import *
from sqlalchemy.orm import *

engine = create_engine('sqlite:///')
metadata = MetaData(engine)
Session = sessionmaker(bind=engine)

class Employee(object):
pass

class Manager(Employee):
pass

class Engineer(Employee):
pass

class SupportTech(Employee):
pass

class Role(object):
pass

employees_table = Table('employees', metadata,
Column('employee_id', Integer, primary_key=True),
Column('type', String(1), nullable=False),
Column('data', String(50)),
Column('manager_id', Integer, ForeignKey('employees.employee_id')),
)

roles_table = Table('roles', metadata,
Column('employee_id', Integer, ForeignKey('employees.employee_id'), primary_key=True),
Column('role', String(50), primary_key=True),
)

mapper(Role, roles_table)
employee_mapper = mapper(Employee, employees_table,
polymorphic_on=employees_table.c.type,
polymorphic_identity='E',
properties = {
		'roles': relationship(Role),
'staff': relationship(Employee,
cascade='save-update,merge,refresh-expire,delete,delete-orphan', 
single_parent=True,
backref=backref('manager', remote_side=[employees_table.c.employee_id])),
}
)

manager_mapper = mapper(Manager, inherits=employee_mapper,
polymorphic_identity='M')
engineer_mapper = mapper(Engineer, inherits=employee_mapper,
polymorphic_identity='E')

supporttech_mapper = mapper(SupportTech, inherits=employee_mapper,
polymorphic_identity='S')

session = Session()

metadata.create_all()

try:
m=Manager()
m.employee_id = 1
session.add(m)
session.flush()

e=Engineer()
e.employee_id = 2
e.manager_id = 1
session.add(e)
session.flush()

s=SupportTech()
s.employee_id = 3
s.manager_id = 1
session.add(s)
session.flush()

session = Session()

engine.echo = 'debug'
e = session.query(Employee).options(
subqueryload(Employee.staff),
subqueryload(Employee.roles),
subqueryload(Employee.staff,Employee.roles)).get(1)

finally:
engine.echo = False
session.rollback()
metadata.drop_all()




Re: [sqlalchemy] Apparently redundant subqueryloads with single table inheritance

2012-06-05 Thread Kent Bower

Thank you!

On 6/5/2012 4:41 PM, Michael Bayer wrote:


On Jun 5, 2012, at 4:21 PM, Kent wrote:

I am subquery loading some related properties for a polymorphic 
inheritance use case similar to the script attached.  SQLA seems to 
be issuing several extra queries that I didn't expect and don't think 
it needs.  In the attached example, I expected a total of 4 queries 
issued for the session.query().get(), but there are 11 queries 
instead, most of them redundant.


this is ticket 2480 and has been fixed for several weeks.

0.7.8 is due as we have a memory leak issue but i wont have time for a 
release until perhaps late weekend.









Any ideas?

Thanks,
Kent


subquery_polymorphic.py







[sqlalchemy] overriding inherited mapper properties supported?

2012-05-17 Thread Kent
Suppose I am using single table polymorphic inheritance like the docs 
Employee/Manager/Engineer example.  I have a relationship that I want to 
have a different cascade or loading strategy for, depending on the mapper.  
Can I inherit from the base mapper and override the property, like this:

employee_mapper = mapper(Employee, employees_table, \
polymorphic_on=employees_table.c.type, polymorphic_identity='employee',
properties={'customer':relationship(Customer, lazy=False, 
cascade='save-update,refresh-expire,merge')})
manager_mapper = mapper(Manager, inherits=employee_mapper,
polymorphic_identity='manager',
properties={'customer':relationship(Customer, lazy=True, 
cascade='save-update,refresh-expire')})
engineer_mapper = mapper(Engineer, inherits=employee_mapper,
polymorphic_identity='engineer',
properties={'customer':relationship(Customer, lazy=True, 
cascade='save-update,refresh-expire')})


Here, the 'customer' relationship only joined loads and merges for 
Employee, not Engineer nor Manager.  Is this supported?  





Re: [sqlalchemy] sqlalchemy supports INSERT INTO ... (SELECT .. ) ?

2012-03-30 Thread Kent Bower
Thanks for pointing me there.  As an aside, the recipe would be more 
bulletproof if it specified the columns (order).  Currently, it assumes 
sqlalchemy knows the order of the columns in the database, which it may not.
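The column-order point can be illustrated at the SQL level with the stdlib sqlite3 module (hypothetical src/dst tables): naming the target columns makes the statement independent of the physical column order of the target table.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE src (id INTEGER PRIMARY KEY, name TEXT);
    -- dst declares its columns in the opposite order on purpose
    CREATE TABLE dst (name TEXT, id INTEGER PRIMARY KEY);
    INSERT INTO src VALUES (1, 'a'), (2, 'b');
""")

# Explicit column list: correct regardless of dst's physical column order.
conn.execute("INSERT INTO dst (id, name) SELECT id, name FROM src WHERE id > 1")
rows = conn.execute("SELECT id, name FROM dst").fetchall()
print(rows)  # [(2, 'b')]
```

An `INSERT INTO dst SELECT ...` without the column list would instead pair the SELECT's columns with dst's columns positionally, which is the fragility being pointed out.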


Thanks again!

On 3/30/2012 6:40 PM, Michael Bayer wrote:
its not built in at the moment but there is a recipe in the docs for 
@compiles


http://docs.sqlalchemy.org/en/latest/core/compiler.html#compiling-sub-elements-of-a-custom-expression-construct

kind of amazing nobody's yet contributed a patch for this, I show 
people that recipe for a few years now :)



On Mar 30, 2012, at 6:05 PM, Kent wrote:


Couldn't find answer in docs, does sqlalchemy support:

INSERT INTO ... (SELECT .. )

instead of

INSERT INTO ... VALUES...








Re: [sqlalchemy] sqlalchemy supports INSERT INTO ... (SELECT .. ) ?

2012-03-30 Thread Kent Bower

Yeah, I knew it was a hint... ;)   So many great ideas, so little time. :(

On 3/30/2012 6:58 PM, Michael Bayer wrote:

so many great ideas for the eventual contributor ! ;)

ideally it would be a method on insert() itself, 
table.insert().from(select([...])..., cols=('x', 'y', 'z')) or 
something like that.   Maybe people have suggestions.




On Mar 30, 2012, at 6:43 PM, Kent Bower wrote:

Thanks for pointing me there.  As an aside, the recipe would be more 
bulletproof if it specified the columns (order).  Currently, it 
assumes sqlalchemy knows the order of the columns in the database, 
which it may not.


Thanks again!

On 3/30/2012 6:40 PM, Michael Bayer wrote:
its not built in at the moment but there is a recipe in the docs for 
@compiles


http://docs.sqlalchemy.org/en/latest/core/compiler.html#compiling-sub-elements-of-a-custom-expression-construct

kind of amazing nobody's yet contributed a patch for this, I show 
people that recipe for a few years now :)



On Mar 30, 2012, at 6:05 PM, Kent wrote:


Couldn't find answer in docs, does sqlalchemy support:

INSERT INTO ... (SELECT .. )

instead of

INSERT INTO ... VALUES...












[sqlalchemy] Single table inheritance

2012-03-21 Thread Kent
Hoping for advice:  I'm using sqlalchemy against a legacy application's 
database design, most of which isn't in my control.  I have a situation 
where single table inheritance should work beautifully but there is one 
catch: of the 7 polymorphic sub classes, there is one which is allowed to 
change into another; the rest are immutable.  As an example, suppose an 
Employee were allowed to be promoted to Manager.  

The docs state that polymorphic_identity is a read-only attribute and 
that behavior is undefined if it is directly modified.

I could work around this by mapping 2 polymorphic_identities to one class, 
but as far a I can see, this can only be a single scalar value.  The docs 
say that polymorphic_on may also be of other types besides Column in a 
future SQLAlchemy release... I wonder if I can help sqla map either of 2 
values to a class in this way?

Any advice?




Re: [sqlalchemy] Single table inheritance

2012-03-21 Thread Kent Bower

That will work for me, thanks!

P.S. Make a note that the doc statement saying this will come in a future 
release should be updated.


On 3/21/2012 10:04 AM, Michael Bayer wrote:

also polymorphic_on can be any SQL expression in 0.7, like a CASE statement if 
you wanted.
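That CASE-expression idea can be sketched at the SQL level with the stdlib sqlite3 module (the 'promoted' code is a hypothetical second stored identity folded into one class):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employees (id INTEGER PRIMARY KEY, type TEXT)")
conn.executemany("INSERT INTO employees VALUES (?, ?)",
                 [(1, 'manager'), (2, 'promoted'), (3, 'engineer')])

# A CASE expression as the discriminator maps two stored type codes to a
# single polymorphic identity -- the kind of SQL expression that
# polymorphic_on accepts in 0.7.
rows = conn.execute("""
    SELECT id,
           CASE WHEN type IN ('manager', 'promoted') THEN 'manager'
                ELSE type END AS discriminator
    FROM employees ORDER BY id
""").fetchall()
print(rows)  # [(1, 'manager'), (2, 'manager'), (3, 'engineer')]
```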





[sqlalchemy] Half merge/save cascade support for M:N relationships

2012-02-29 Thread Kent
I suspect this doesn't interest you so much, and no offense taken if not, 
but have you ever considered supporting the idea of a half 
merge/save-update cascade for many to many relationships?  
The use case is where I want to merge/save-update to the secondary table 
only (collection status), but I don't want to merge/save changes to the 
actual related objects.  




Re: [sqlalchemy] Half merge/save cascade support for M:N relationships

2012-02-29 Thread Kent Bower
I hadn't really thought the save-update through, my real interest was 
regarding merge.  For save-update it may not make as much sense.



On 2/29/2012 10:55 AM, Michael Bayer wrote:
I get what that would do for merge and might not be a big deal, what 
would it do for save-update?


Sent from my iPhone

On Feb 29, 2012, at 8:55 AM, Kent jkentbo...@gmail.com wrote:


I suspect this doesn't interest you so much, and no offense taken if 
not, but have you ever considered supporting the idea of a half 
merge/save-update cascade for many to many relationships?
The use case is where I want to merge/save-update to the secondary 
table only (collection status), but I don't want to merge/save 
changes to the actual related objects.








[sqlalchemy] most straightforward way to revert some column changes

2012-02-28 Thread Kent
Is there a simple way to revert all columns back to their committed
state (some columns may be synonyms), or do I need to loop through
mapper.iterate_properties, get the ColumnProperty ones, make sure they
aren't aliases (prop.columns[0] is Column) and use setattr() to set
the value back to the attributes.history's deleted?
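The underlying idea can be sketched ORM-free in plain Python (a hypothetical Reverting class, not a SQLAlchemy API): keep a snapshot of the committed column values and restore it on demand.

```python
class Reverting:
    """Hypothetical sketch: snapshot committed values, restore on revert()."""

    def __init__(self, **attrs):
        self.__dict__.update(attrs)
        self._committed = dict(attrs)  # snapshot of the committed state

    def revert(self):
        # put every tracked attribute back to its committed value
        self.__dict__.update(self._committed)

r = Reverting(quantity=5, name="widget")
r.quantity = 9
r.revert()
print(r.quantity)  # 5
```

In the real ORM the snapshot already exists as the attribute history's committed state; this only illustrates the revert-in-place shape being asked about.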




[sqlalchemy] Re: most straightforward way to revert some column changes

2012-02-28 Thread Kent
On Feb 28, 5:39 pm, Michael Bayer mike...@zzzcomputing.com wrote:
 oh also you might want to use attributes.set_committed_state instead of 
 setattr() so that the history is cleared.


1) What do you mean? setattr() also clears the history if you set it
back to what it used to be... right?

2) yes, I meant not going back to database, so given I need to
manually reset them, is there a more elegant method than:

for col in object_mapper(self).iterate_properties:
    if type(col) is ColumnProperty and \
            type(col.columns[0]) is Column:
        ...
        # find and set orig_value
        ...
        setattr(self, col.key, orig_value)

The mechanism I'm using to make sure I have a real column (not an alias,
not a RelationshipProperty) seems convoluted.

Thanks!




[sqlalchemy] Re: session.query().get() is unsupported during flush for getting an object that was just added?

2012-02-06 Thread Kent
On Feb 1, 3:17 pm, Kent jkentbo...@gmail.com wrote:
  If the value is based on what's already been INSERTed for previous rows, 
  I'd emit a SQL statement to get at the value.  If it's based on some kind 
  of natural consideration that isn't dependent on the outcome of an INSERT 
  statement, then you can do the looping above within the before_flush() 
  event and assign everything at once.  Basically you need to batch the 
  same way the UOW itself does.

 is IdentitySet an OrderedSet?  if I loop through session.new, for
 example, am I guaranteed to hit these in the same order they were
 added to the session?

Is there a way to work out the order in which session.new items were
added... I take it session.new is not ordered.
Did you mention you were working on an instance-level event for
'before_flush()', similar to 'before_insert'/'before_update'?
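Since session.new is an unordered IdentitySet, one workaround is to track insertion order yourself. The sketch below is hypothetical (a toy session, not SQLAlchemy's): it stamps each object with a monotonically increasing counter at add time, then sorts by that stamp:

```python
import itertools

_counter = itertools.count()

class TrackedSession(object):
    """Toy stand-in for a Session that remembers add() order."""
    def __init__(self):
        self.new = set()          # unordered, like IdentitySet

    def add(self, obj):
        obj._insert_order = next(_counter)  # hypothetical attribute
        self.new.add(obj)

    def new_in_order(self):
        return sorted(self.new, key=lambda o: o._insert_order)

class Obj(object):
    def __init__(self, name):
        self.name = name

s = TrackedSession()
a, b, c = Obj("a"), Obj("b"), Obj("c")
for o in (b, a, c):
    s.add(o)
print([o.name for o in s.new_in_order()])  # -> ['b', 'a', 'c']
```

With a real Session, a session event that fires when objects become pending could apply the same stamp, so an ordered view of session.new stays a one-line sort.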




[sqlalchemy] Re: session.query().get() is unsupported during flush for getting an object that was just added?

2012-02-01 Thread Kent
 If the value is based on what's already been INSERTed for previous rows, I'd 
 emit a SQL statement to get at the value.  If it's based on some kind of 
 natural consideration that isn't dependent on the outcome of an INSERT 
 statement, then you can do the looping above within the before_flush() event 
 and assign everything at once.  Basically you need to batch the same way 
 the UOW itself does.

is IdentitySet an OrderedSet?  if I loop through session.new, for
example, am I guaranteed to hit these in the same order they were
added to the session?




[sqlalchemy] expunge cascade behavior change

2012-01-31 Thread Kent Bower
Somewhere between 0.6.4 and 0.7.5, the expunge cascade behavior 
changed.  Can you help me understand what changed/point me to the 
ticket?  The attached script assertions succeed in 0.6.4 but the last 
one fails in 0.7.5.  It doesn't seem wrong, but I'm wondering what the 
behavior was defined as previously and if that was considered a bug, etc.


Thanks,
Kent


from sqlalchemy import *
from sqlalchemy.orm import *
from sqlalchemy.orm.util import has_identity

engine = create_engine('sqlite:///', echo=True)
metadata = MetaData(engine)
Session = sessionmaker(bind=engine)

rocks_table = Table("rocks", metadata,
    Column("id", Integer, primary_key=True),
)

bugs_table = Table("bugs", metadata,
    Column("id", Integer, primary_key=True),
    Column("rockid", Integer, ForeignKey('rocks.id')),
)

class Rock(object):
    def __repr__(self):
        return 'Rock@%d: id=[%s] in session:[%s] has_identity[%s]' % (
            id(self), self.__dict__.get('id'), self in session, has_identity(self))

class Bug(object):
    def __repr__(self):
        return 'Bug@%d: id=[%s] rockid[%s] with rock[%s]' % (
            id(self), self.__dict__.get('id'), self.__dict__.get('rockid'),
            self.__dict__.get('rock', 'not set'))


mapper(Rock, rocks_table,
    properties={'bugs': relationship(Bug,
        cascade='all,delete-orphan',
        backref=backref('rock', cascade='refresh-expire,expunge'))
    })

mapper(Bug, bugs_table)


metadata.create_all()

session = Session()


# add a rock and bug
rock=Rock()
rock.id = 0
bug=Bug()
bug.id = 0
rock.bugs.append(bug)
session.add(rock)

session.commit()

# later... new session
session = Session()
rock = session.query(Rock).get(0)
rock.bugs.append(Bug())

assert rock in session

rock.bugs = []

assert rock in session


[sqlalchemy] session.query().get() is unsupported during flush for getting an object that was just added?

2012-01-26 Thread Kent Bower
I think I understand why, during a flush(), if I use 
session.query().get() for an item that was just added during this flush, 
I don't get the persistent object I might expect because the session 
still has it as pending even though, logically, it is already persistent.


I don't suppose you have any desire to support that, huh?  The use case 
would be related to the future ticket 
http://www.sqlalchemy.org/trac/ticket/1939 (and 
http://www.sqlalchemy.org/trac/ticket/2350).


Attached is a script demonstrating the issue I've hit.  I can work 
around it with some difficulty, but I wanted your input and thoughts.


Thanks,
Kent


# Show what happens when we use session.query().get()
# during a flush to load an object that is being inserted during the same flush
# Instead of getting what was the pending object, we get a new copy of what 
# the orm thinks is persistent and then it is detached after the flush finishes

from sqlalchemy import *
from sqlalchemy.orm import *
from sqlalchemy import event
from sqlalchemy.orm.util import has_identity

engine = create_engine('sqlite:///', echo=True)
metadata = MetaData(engine)
Session = sessionmaker(bind=engine)

rocks_table = Table("rocks", metadata,
    Column("id", Integer, primary_key=True),
)

bugs_table = Table("bugs", metadata,
    Column("id", Integer, primary_key=True),
    Column("rockid", Integer, ForeignKey('rocks.id')),
)

class Rock(object):
    def __repr__(self):
        return 'Rock@%d: id=[%s] in session:[%s] has_identity[%s]' % (
            id(self), self.__dict__.get('id'), self in session, has_identity(self))

class Bug(object):
    def __repr__(self):
        return 'Bug@%d: id=[%s] rockid[%s] with rock[%s]' % (
            id(self), self.__dict__.get('id'), self.__dict__.get('rockid'),
            self.__dict__.get('rock', 'not set'))


class BugAgent(MapperExtension):
    def before_update(self, mapper, connection, instance):
        assert 'rock' not in instance.__dict__
        print "\n\n during flush"
        # after http://www.sqlalchemy.org/trac/ticket/2350, we could just reference like this:
        #instance.rock
        instance.rock = session.query(Rock).get(instance.rockid)
        #
        # we just loaded a Rock that was just inserted during this flush, so
        # it looks persistent to the orm, but the orm also has this object
        # already (still pending).  After the flush is done,
        # the pending object will be the only one in the session and the
        # object we just loaded here will be removed from the session (detached)
        #
        print "\n\n*before_update: %r\n" % instance
        assert 'rock' in instance.__dict__


mapper(Rock, rocks_table,
    properties={'bugs': relationship(Bug,
        cascade='all,delete-orphan',
        backref='rock')
    })

mapper(Bug, bugs_table, extension=BugAgent())

@event.listens_for(Bug.rockid, 'set')
def autoexpire_rock_attribute(instance, value, oldvalue, initiator):
    if value != oldvalue:
        if instance in session and has_identity(instance):
            assert 'rock' in instance.__dict__
            print "\n\nBug.rockid changing from [%s] to [%s]..." % (oldvalue, value)
            print "**about to expire rock for %r" % instance
            session.expire(instance, ['rock'])
            print "**expired rock for %r\n" % instance
            assert 'rock' not in instance.__dict__


metadata.create_all()

session = Session()


# add a rock and bug
rock=Rock()
rock.id = 0
bug=Bug()
bug.id = 0
rock.bugs.append(bug)
session.add(rock)

session.commit()

# later... new session
session = Session()


b1 = Bug()
b1.id = 0

rock=Rock()
rock.id = 1
rock.bugs.append(b1)

print "\n\nmerge start\n"
merged = session.merge(rock)
print "\n\nmerge end\n"
print "flush\n"
session.flush()

assert 'rock' in merged.bugs[0].__dict__

# show that the pending object has become persistent
print "\n\nsession's pending obj turned persistent: %r" % session.query(Rock).get(1)

# show that the object we loaded has been detached from the session
print 'merged.bugs[0].rock (copy of same object, no longer in session): %r' % merged.bugs[0].rock



Re: [sqlalchemy] session.query().get() is unsupported during flush for getting an object that was just added?

2012-01-26 Thread Kent
Fair enough.  I had enough understanding of what must be going on to 
know flush isn't straightforward, but I'm still glad I asked.  Sorry for 
not having read the documents very well, and thanks for your answer; 
from it, I surmise that before_flush() *is* safe for session 
operations, which is very good to understand more clearly.


Thanks.

On 1/26/2012 12:06 PM, Michael Bayer wrote:

On Jan 26, 2012, at 11:28 AM, Kent Bower wrote:


I think I understand why, during a flush(), if I use session.query().get() for 
an item that was just added during this flush, I don't get the persistent 
object I might expect because the session still has it as pending even though, 
logically, it is already persistent.

I don't suppose you have any desire to support that, huh?  The use case would 
be related to the future ticket http://www.sqlalchemy.org/trac/ticket/1939 (and 
http://www.sqlalchemy.org/trac/ticket/2350).

Attached is a script demonstrating the issue I've hit.  I can work around it 
with some difficulty, but I wanted your input and thoughts.

No, there's no plans to support this case at all; you're using the Session 
inside of a mapper event, which is just not supported, and can never be due to 
the nature of the unit of work.   The most recent docstrings try to be very 
explicit about this:

http://docs.sqlalchemy.org/en/latest/orm/events.html#sqlalchemy.orm.events.MapperEvents.before_update

I guess I have to add session.query() and get() in there as well.

The way the flush works is not as straightforward as "persist object A; persist object B; 
persist object C" - that is, these are not atomic operations inside the flush.  It's more 
like, "Perform step X for objects A, B, and C; perform step Y for objects A, B and C."   
This is basically batching, and is necessary since it is vastly more efficient than atomically 
completing each object one at a time.   Also, some decisions are needed by Y which can't always be 
made until X has completed for objects involved in dependencies.

A side effect of batching is that if we provide a hook that emits after X and 
before Y, you're being exposed to the objects in an unusual state.   Hence, the 
hooks that are in the middle like that are only intended to emit SQL on the 
given Connection; not to do anything ORM level beyond assigning column-based 
values on the immediate object.   As always, before_flush() is where ORM-level 
manipulations are intended to be placed.
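The batching idea above can be illustrated with a toy sketch (plain Python, not SQLAlchemy internals): each step runs across all objects before the next step begins, so a hook firing between the steps sees every object in an intermediate state:

```python
log = []

def step_x(obj):
    # e.g. "emit INSERT for obj"
    log.append(("X", obj))

def step_y(obj):
    # e.g. "resolve foreign keys / synchronize dependent rows"
    log.append(("Y", obj))

objects = ["A", "B", "C"]

# Batched, the way a flush proceeds: all of X, then all of Y.
for obj in objects:
    step_x(obj)
for obj in objects:
    step_y(obj)

print(log)
# X has run for A, B and C before any Y runs, so a hook placed between
# X and Y never observes a single fully-completed object.
```

This is why per-object hooks in the middle of a flush are restricted to SQL on the given Connection, while before_flush, which runs before any step starts, is safe for ORM-level work.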







Re: [sqlalchemy] session.query().get() is unsupported during flush for getting an object that was just added?

2012-01-26 Thread Kent
So, as a typical example of where it seems very natural to use 
before_insert: suppose you need to automatically maintain a not-null 
sequence column on a related table, but to compute the sequence you need 
to loop over the parent table's collection.

You want the sequence to be human friendly (a natural primary key), and 
you also want to be able to sort by sequence, guaranteed in order, without 
the possibility of a database sequence wraparound.  So you want the 
sequence 1, 2, 3...


This seems extremely well fit for before_insert, like this:
==
parents_table = Table("parents", metadata,
    Column("id", Integer, primary_key=True),
)

children_table = Table("children", metadata,
    Column("parentid", Integer, ForeignKey('parents.id')),
    Column("sequence", Integer, primary_key=True),
)

class Parent(object):
pass

class Child(object):
pass

mapper(Parent, parents_table,
    properties={'children': relationship(Child,
        cascade='all,delete-orphan',
        backref='parent')
    })

mapper(Child, children_table)

@event.listens_for(Child, 'before_insert')
def set_sequence(mapper, connection, instance):
    if instance.sequence is None:
        siblings = [c.sequence for c in instance.parent.children
                    if c.sequence is not None]
        instance.sequence = (max(siblings) if siblings else 0) + 1

==

But this reaches across relationships, so that is actually not desired 
here, is that correct?


For this, you would loop over session.new in before_flush; is that how 
you would approach this requirement?




On 1/26/2012 12:34 PM, Michael Bayer wrote:

yup, before_flush is made for that, and I've for some time had some vague plans 
to add some more helpers there so you could get events local to certain kinds 
of objects in certain kinds of states, meaning it would look a lot like 
before_update.   But looping through .new, .dirty, and .deleted is how to do it 
for now.



On Jan 26, 2012, at 12:12 PM, Kent wrote:


Fair enough.  I had enough understanding of what must be going on to know flush 
isn't straightforward, but I'm still glad I asked.  Sorry for having not read 
the documents very well and thanks for your answer, because from it, I surmise 
that before_flush() *is* safe for session operations, which is very good to 
understand more clearly.

Thanks.

On 1/26/2012 12:06 PM, Michael Bayer wrote:

On Jan 26, 2012, at 11:28 AM, Kent Bower wrote:


I think I understand why, during a flush(), if I use session.query().get() for 
an item that was just added during this flush, I don't get the persistent 
object I might expect because the session still has it as pending even though, 
logically, it is already persistent.

I don't suppose you have any desire to support that, huh?  The use case would 
be related to the future ticket http://www.sqlalchemy.org/trac/ticket/1939 (and 
http://www.sqlalchemy.org/trac/ticket/2350).

Attached is a script demonstrating the issue I've hit.  I can work around it 
with some difficulty, but I wanted your input and thoughts.

No, there's no plans to support this case at all; you're using the Session 
inside of a mapper event, which is just not supported, and can never be due to 
the nature of the unit of work.   The most recent docstrings try to be very 
explicit about this:

http://docs.sqlalchemy.org/en/latest/orm/events.html#sqlalchemy.orm.events.MapperEvents.before_update

I guess I have to add session.query() and get() in there as well.

The way the flush works is not as straightforward as "persist object A; persist object B; 
persist object C" - that is, these are not atomic operations inside the flush.  It's more 
like, "Perform step X for objects A, B, and C; perform step Y for objects A, B and C."   
This is basically batching, and is necessary since it is vastly more efficient than atomically 
completing each object one at a time.   Also, some decisions are needed by Y which can't always be 
made until X has completed for objects involved in dependencies.

A side effect of batching is that if we provide a hook that emits after X and 
before Y, you're being exposed to the objects in an unusual state.   Hence, the 
hooks that are in the middle like that are only intended to emit SQL on the 
given Connection; not to do anything ORM level beyond assigning column-based 
values on the immediate object.   As always, before_flush() is where ORM-level 
manipulations are intended to be placed.







[sqlalchemy] backrefs

2012-01-26 Thread Kent
Is there a straightforward way to determine if a RelationshipProperty
has a corresponding reverse (backref)?




[sqlalchemy] Re: backrefs

2012-01-26 Thread Kent
I assume the non-public property._reverse_property is just what I'm
looking for.  :)

On Jan 26, 2:06 pm, Kent jkentbo...@gmail.com wrote:
 Is there a straightforward way to determine if a RelationshipProperty
 has a corresponding reverse (backref)?




[sqlalchemy] try: except KeyError:

2012-01-25 Thread Kent
At some point you changed the InstrumentedAttribute.get() method from

try:
    return dict_[self.key]
except KeyError:
    ...


To this:


def get(self, state, dict_, passive=PASSIVE_OFF):
    """Retrieve a value from the given object.

    If a callable is assembled on this object's attribute, and
    passive is False, the callable will be executed and the
    resulting value will be set as the new value for this attribute.
    """
    if self.key in dict_:
        return dict_[self.key]
    else:
        # if history present, don't load
        key = self.key
        if key not in state.committed_state or \
                state.committed_state[key] is NEVER_SET:


I'm extremely interested: was this change related to performance, or just
style?




[sqlalchemy] c extensions built?

2012-01-12 Thread Kent
What is the easiest way to confirm that my installation has compiled/
is using the c extensions?




Re: [sqlalchemy] c extensions built?

2012-01-12 Thread Kent Bower

Yeah, just the two cresultproxy.so and cprocessors.so, right?

On 1/12/2012 3:34 PM, Michael Bayer wrote:

when you do the setup.py the log messages say so.

Otherwise you'd look where sqlalchemy was installed and check if you see .so 
files.
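Beyond eyeballing .so files, importability can be checked programmatically. This is a hedged sketch: for SQLAlchemy builds of that era the compiled modules were sqlalchemy.cprocessors and sqlalchemy.cresultproxy (per the thread above), and the check below uses modern Python's importlib, demonstrated here with a stdlib C-accelerator module:

```python
import importlib.util

def has_compiled_module(name):
    """Return True if the named (possibly C-compiled) module is importable."""
    return importlib.util.find_spec(name) is not None

# With SQLAlchemy installed, you would probe e.g.:
#   has_compiled_module("sqlalchemy.cprocessors")
#   has_compiled_module("sqlalchemy.cresultproxy")
# Demonstrated with CPython's C-accelerated json support module instead:
print(has_compiled_module("_json"))  # -> True on CPython
```

If the C extensions failed to build, the probe returns False and the library silently falls back to its pure-Python implementations.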


On Jan 12, 2012, at 12:35 PM, Kent wrote:


What is the easiest way to confirm that my installation has compiled/
is using the c extensions?







Re: [sqlalchemy] Re: 0.7 event migration

2012-01-10 Thread Kent Bower

funny story, here's where it was added:  
http://www.sqlalchemy.org/trac/ticket/1910 which is essentially your ticket !   
:)

I just double checked and I had patched in rfde41d0e9f70 
(http://www.sqlalchemy.org/trac/changeset/fde41d0e9f70/).  Is there 
another commit that went against 1910?  For example, was there logic in 
the attachment load_on_fks.patch 
(http://www.sqlalchemy.org/trac/attachment/ticket/1910/load_on_fks.patch) that 
was committed?





[sqlalchemy] Re: 0.7 event migration

2012-01-10 Thread Kent
See http://www.sqlalchemy.org/trac/ticket/2372




[sqlalchemy] Re: 0.7 event migration

2012-01-10 Thread Kent
Mike,

Old code:
==
def visit_bindparam(bindparam):
    if bindparam.key in bind_to_col:
        bindparam.value = lambda: mapper._get_state_attr_by_column(
            state, dict_, bind_to_col[bindparam.key])
==
New code (note that 'value' is now 'callable'):
def visit_bindparam(bindparam):
    if bindparam._identifying_key in bind_to_col:
        bindparam.callable = \
            lambda: mapper._get_state_attr_by_column(
                state, dict_, bind_to_col[bindparam._identifying_key])
==

Now look at sql.util.py:
==
def bind_values(clause):
    v = []
    def visit_bindparam(bind):
        value = bind.value

        # evaluate callables
        if callable(value):
            value = value()

        v.append(value)

    visitors.traverse(clause, {}, {'bindparam': visit_bindparam})
    return v
==

Aren't we missing this?

    if bind.callable:
        value = bind.callable()

I think this is why it isn't loading the way it used to.
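The fix being proposed can be sketched standalone. `Bind` below is a hypothetical stand-in for SQLAlchemy's bind parameter object, with both a `.value` and a `.callable` attribute; the helper prefers the deferred callable, which is exactly the branch missing from `bind_values()` above:

```python
class Bind(object):
    """Stand-in for a bind parameter with a value and an optional callable."""
    def __init__(self, value=None, callable_=None):
        self.value = value
        self.callable = callable_

def bind_value(bind):
    # Evaluate the deferred callable first, if one was assigned
    # (the branch the old bind_values() skipped) ...
    if bind.callable is not None:
        return bind.callable()
    # ... otherwise fall back to the plain value, evaluating
    # legacy callable values as before.
    value = bind.value
    return value() if callable(value) else value

print(bind_value(Bind(value=5)))              # -> 5
print(bind_value(Bind(callable_=lambda: 7)))  # -> 7
```

Without the first branch, a bind parameter carrying only a callable reports None, which matches the [None] symptom described in the thread.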




Re: [sqlalchemy] Re: 0.7 event migration

2012-01-10 Thread Kent Bower

Thank you very much!

On 1/10/2012 11:47 AM, Michael Bayer wrote:

Code wasn't covered and is a regresssion, fixed in rd6e321dc120d.


On Jan 10, 2012, at 10:58 AM, Kent wrote:


Mike,

Old code:
==
def visit_bindparam(bindparam):
    if bindparam.key in bind_to_col:
        bindparam.value = lambda: mapper._get_state_attr_by_column(
            state, dict_, bind_to_col[bindparam.key])
==
New code (note that 'value' is now 'callable'):
def visit_bindparam(bindparam):
    if bindparam._identifying_key in bind_to_col:
        bindparam.callable = \
            lambda: mapper._get_state_attr_by_column(
                state, dict_, bind_to_col[bindparam._identifying_key])
==

Now look at sql.util.py:
==
def bind_values(clause):
    v = []
    def visit_bindparam(bind):
        value = bind.value

        # evaluate callables
        if callable(value):
            value = value()

        v.append(value)

    visitors.traverse(clause, {}, {'bindparam': visit_bindparam})
    return v
==

Aren't we missing this?

    if bind.callable:
        value = bind.callable()

I think this is why it isn't loading the way it used to.







[sqlalchemy] More 0.7 migration pains

2012-01-10 Thread Kent
After 0.7.5 migration, I'm sometimes hitting an issue from within
merge().

unitofwork.py

def track_cascade_events(descriptor, prop):
    ...
    def set_(state, newvalue, oldvalue, initiator):
        # process save_update cascade rules for when an instance
        # is attached to another instance
        if oldvalue is newvalue:
            return newvalue

        sess = session._state_session(state)
        if sess:
            prop = state.manager.mapper._props[key]
            if newvalue is not None:
                newvalue_state = attributes.instance_state(newvalue)
                if prop.cascade.save_update and \
                        (prop.cascade_backrefs or key == initiator.key) and \
                        not sess._contains_state(newvalue_state):
                    sess._save_or_update_state(newvalue_state)

            if oldvalue is not None and prop.cascade.delete_orphan:
                oldvalue_state = attributes.instance_state(oldvalue)  # <=====
                if oldvalue_state in sess._new and \
                        prop.mapper._is_orphan(oldvalue_state):
                    sess.expunge(oldvalue)
        return newvalue


I'm hitting here with an oldvalue of attributes.PASSIVE_NO_RESULT,
which naturally has no instance_state()!

So the first question is: is this due to one of my transient loader
hacks or can you think of a path through merge() that you might reach
this set event with an oldvalue of attributes.PASSIVE_NO_RESULT?
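A defensive guard for the crash site can be sketched standalone. This is a hypothetical fix shape, not SQLAlchemy's actual code: a "no result" sentinel (standing in for attributes.PASSIVE_NO_RESULT) is treated like None before asking for instance state:

```python
# Stand-in for attributes.PASSIVE_NO_RESULT: a unique sentinel object.
PASSIVE_NO_RESULT = object()

def old_value_state(oldvalue, get_state):
    """Return the instance state for oldvalue, or None when there is
    nothing to expunge (no old value, or the value was never loaded)."""
    if oldvalue is None or oldvalue is PASSIVE_NO_RESULT:
        return None
    return get_state(oldvalue)

# The sentinel short-circuits before get_state is ever called:
print(old_value_state(PASSIVE_NO_RESULT, get_state=id))          # -> None
print(old_value_state("obj", get_state=lambda o: "state:" + o))  # -> state:obj
```

The point is simply that any code path reaching the delete-orphan branch with a never-loaded old value needs this extra identity check, since the sentinel is not an instrumented instance.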

Thanks again, of course!




[sqlalchemy] Re: 0.7 event migration

2012-01-09 Thread Kent
 i guess the patch is interacting with that load_on_pending stuff, which I 
 probably added for you also.  It would be nice to really work up a new 
 SQLAlchemy feature: detached/transient object loading document that really 
 describes what it is we're trying to do here.  If you were to write such a 
 document, what example would you give as the rationale?  I know that's the 
 hard part here, but this is often very valuable, to look at your internal 
 system and genericize it into something universally desirable.

As far as such a document, would you want a trac ticket opened with my
use case in a generalized form where others may likely have the same
use case?

Hoping not to upset you here:

My AttributeImpl.callable_ hack (set a transient state's session_id,
load the relationship, then set it back to None) works for m2o, but when
it needs to load a collection (or it can't use get(), I presume), I am
hitting this return None:

class LazyLoader(AbstractRelationshipLoader):
    def _load_for_state(self, state, passive):
        ...
        lazy_clause = strategy.lazy_clause(state)

        if pending:
            bind_values = sql_util.bind_values(lazy_clause)
            if None in bind_values:
                return None  ### <=====

        q = q.filter(lazy_clause)

In SQLAlchemy 0.6.4, bind_values = sql_util.bind_values(lazy_clause)
would return the value of the foreign key from the transient object.

In 0.7.5, it returns [None], presumably because the committed values are
not set?

Short term, do you know offhand what changed, or what I could do to
work around this?




Re: [sqlalchemy] Re: 0.7 event migration

2012-01-09 Thread Kent Bower

On 1/9/2012 2:33 PM, Michael Bayer wrote:

On Jan 9, 2012, at 2:30 PM, Kent wrote:


i guess the patch is interacting with that load_on_pending stuff, which I probably 
added for you also.  It would be nice to really work up a new SQLAlchemy feature: 
detached/transient object loading document that really describes what it is we're trying to do 
here.  If you were to write such a document, what example would you give as the rationale?  I 
know that's the hard part here, but this is often very valuable, to look at your internal system 
and genericize it into something universally desirable.

As far as such a document, would you want a trac ticket opened with my
use case in a generalized form where others may likely have the same
use case?

Hoping to not upset you here.:

My AttributeImpl.callable_ hack to set a transient state's
session_id, load the relationship, and then set it back to None works
for m2o but when it needs to load a collection (or it can't use get()
I presume), then I am hitting this return None:

class LazyLoader(AbstractRelationshipLoader):
    def _load_for_state(self, state, passive):
        ...
        lazy_clause = strategy.lazy_clause(state)

        if pending:
            bind_values = sql_util.bind_values(lazy_clause)
            if None in bind_values:
                return None  ### <=====

        q = q.filter(lazy_clause)

that means some of the columns being linked to the foreign keys on the target are None.  
If you want your lazyload to work, all the attributes need to be populated.   If you're 
hitting the "get committed" thing, and the attributes are only pending, then 
that's what that is.


But this changed from 0.6.4?




Re: [sqlalchemy] Re: 0.7 event migration

2012-01-09 Thread Kent Bower

On 1/9/2012 5:33 PM, Michael Bayer wrote:

On Jan 9, 2012, at 2:36 PM, Kent Bower wrote:


that means some of the columns being linked to the foreign keys on the target are None.  
If you want your lazyload to work, all the attributes need to be populated.   If you're 
hitting the "get committed" thing, and the attributes are only pending, then 
that's what that is.


But this changed from 0.6.4?

funny story, here's where it was added:  
http://www.sqlalchemy.org/trac/ticket/1910 which is essentially your ticket !   
:)

Except that my patched version of 0.6.4 (which I was referring to) 
already has that change from that ticket patched in.  It must be 
something else, I'm still looking...





[sqlalchemy] 'TypeError: expecting numeric data' is only raised for do_executemany

2012-01-03 Thread Kent
The statements that are executed as a single statement make no such
check (and the database engine correctly translates a string to
integer), but cursor.executemany checks type:

lib/sqlalchemy/engine/default.py, line 327, in do_executemany
cursor.executemany(statement, parameters)
TypeError: expecting numeric data

You know that inconsistency?




[sqlalchemy] Re: 'TypeError: expecting numeric data' is only raised for do_executemany

2012-01-03 Thread Kent
Oh.  Makes sense.  Then the only reason I'm starting to hit this is
that you've optimized the orm to use executemany() more often,
correct?

On Jan 3, 3:09 pm, Michael Bayer mike...@zzzcomputing.com wrote:
 On Jan 3, 2012, at 1:58 PM, Kent wrote:

  The statements that are executed as a single statement make no such
  check (and the database engine correctly translates a string to
  integer), but cursor.executemany checks type:

  lib/sqlalchemy/engine/default.py, line 327, in do_executemany
     cursor.executemany(statement, parameters)
  TypeError: expecting numeric data

  You know that inconsistency?

 cursor is the DBAPI cursor, so any inconsistencies there are on the DBAPI 
 side.    I don't know what the numeric data in question would be here.




Re: [sqlalchemy] Re: 0.7 event migration

2011-12-28 Thread Kent

On 12/27/2011 5:34 PM, Michael Bayer wrote:


On Dec 27, 2011, at 5:21 PM, Kent wrote:

So see what happens if you, for the moment, just monkeypatch over 
orm.session._state_session to do a lookup in a global context if 
state.session_id isn't set.  If that solves the problem of I want 
detached objects to load stuff, for you and everyone else who wants 
this feature, then whatever - I'm not at all thrilled about this use 
case but if it's just one trivial hook that I don't need to 
encourage, then there you go.




Please give me a lesson in monkeypatching 101.  This isn't being
invoked:

=
import sqlalchemy.orm.session as session_module
sqla_state_session = session_module._state_session
def _state_session(state):
    """
    for transient/detached objects, so we can automatically query
    these related objects
    """
    return sqla_state_session(state) or DBSession()
setattr(session_module, '_state_session', _state_session)
=

I presume there are already references to _state_session before I
change it.


Hm I would have guessed, but mapper.py and strategies.py seem to be 
calling it relative to the module.   What does pdb tell you if you 
post mortem into where it raises the Detached error ?
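For what it's worth, why module-relative calls matter here can be shown without SQLAlchemy at all: patching a module attribute only affects callers that resolve the name through the module at call time, not callers that bound the function object directly at import time (the module and function names below are hypothetical stand-ins).

```python
import types

# A stand-in module with one function.
mod = types.ModuleType("mod")
mod.func = lambda: "original"

# Caller A looks the name up through the module on every call
# (like mapper.py calling sessionlib._state_session(state)):
def call_via_module():
    return mod.func()

# Caller B grabbed the function object at import time
# (like `from sqlalchemy.orm.session import _state_session`):
bound = mod.func
def call_via_binding():
    return bound()

mod.func = lambda: "patched"  # the monkeypatch

assert call_via_module() == "patched"    # sees the patch
assert call_via_binding() == "original"  # still calls the old object
```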




Also, will this cause other side effects, such as obj in DBSession
reporting True when it used to report False or the orm internals being
confused by the return of this value?


maybe ?  if it doesn't just work then yes.  The new logic here only 
needs to take place at the point at which it loads an attribute so if 
we made it local to those areas, there shouldn't be any big issues.


I'm basically making you a developer here to help me test this out.




ok, it was never making it that far because my main use case for this is 
to work with transient instances (detached are secondary).  I was 
testing with a transient object and it never got past the first if 
statement in _load_for_state(), which was using session_id instead of 
the return value from _state_session().


So we'd have to move the state_session API invocation up (patch 
attached).  Then this works as a proof of concept for both detached and 
transient objects.


I'm not sure if it was important for the return attributes.ATTR_EMPTY 
to have precedence over return attributes.PASSIVE_NO_RESULT so I kept 
them in that order to not disturb anything (otherwise we could delay the 
session lookup until after the return attributes.PASSIVE_NO_RESULT 
test).  Those cases shouldn't be common anyway, right?


We'd also need to make this very local to the loading of attributes, as 
you mentioned, because object_session() also invokes _state_session()... 
we can't have that.. it messes up too much.  (For example, transient 
objects appear pending and detached objects appear persistent, depending 
on your method of inspection.)



Off topic, but from a shell prompt I sometimes find myself naturally 
attempting this:

session.detach(instance)

and then when that fails, I remember:
session.expunge(instance)

I'm not asking for a change here, but quite curious: you think 'detach' 
is a better/more natural term?


=

diff -U10 -r sqlalchemy-default/lib/sqlalchemy/orm/strategies.py 
sqlalchemy-default.kb/lib/sqlalchemy/orm/strategies.py
--- sqlalchemy-default/lib/sqlalchemy/orm/strategies.py 2011-12-15 
11:42:50.0 -0500
+++ sqlalchemy-default.kb/lib/sqlalchemy/orm/strategies.py  
2011-12-27 17:48:54.0 -0500

@@ -450,41 +450,42 @@
 self._rev_bind_to_col, \
 self._rev_equated_columns

 criterion = sql_util.adapt_criterion_to_null(criterion, 
bind_to_col)


 if adapt_source:
 criterion = adapt_source(criterion)
 return criterion

 def _load_for_state(self, state, passive):
-if not state.key and \
-(not self.parent_property.load_on_pending or not 
state.session_id):

+prop = self.parent_property
+pending = not state.key
+session = sessionlib._state_session(state)
+
+if pending and \
+(not prop.load_on_pending or not session):
 return attributes.ATTR_EMPTY

 instance_mapper = state.manager.mapper
-prop = self.parent_property
 key = self.key
 prop_mapper = self.mapper
-pending = not state.key

 if (
 (passive is attributes.PASSIVE_NO_FETCH or \
 passive is attributes.PASSIVE_NO_FETCH_RELATED) and
 not self.use_get
 ) or (
 passive is attributes.PASSIVE_ONLY_PERSISTENT and
 pending
 ):
 return attributes.PASSIVE_NO_RESULT

-session = sessionlib._state_session(state)
 if not session:
 raise orm_exc.DetachedInstanceError(
 Parent instance %s is not bound to a Session; 
 lazy

[sqlalchemy] InvalidRequestError: get() can only be used against a single mapped class.

2011-12-28 Thread Kent

Was it your intention to no longer allow this type of query().get()?

session.query(cls.orderid).get(orderid)

I get "InvalidRequestError: get() can only be used against a single
mapped class." but the wording is such that I'm not sure you intended to
limit that use case (there is only a single mapped class in that query).


I'll change such queries, just wanted to bring it up to see if you 
intended it that way.


(Admittedly, if I recall correctly, when I first added it, I think I was 
slightly surprised it worked as I expected...)





[sqlalchemy] Re: InvalidRequestError: get() can only be used against a single mapped class.

2011-12-28 Thread Kent


On Dec 28, 12:07 pm, Michael Bayer mike...@zzzcomputing.com wrote:
 On Dec 28, 2011, at 11:34 AM, Kent wrote:

  Was it your intention to no longer allow this type of query().get()?

  session.query(cls.orderid).get(orderid)

 it was !   yes.



  I get "InvalidRequestError: get() can only be used against a single mapped 
  class." but the wording is such that I'm not sure you intended to limit 
  that use case (there is only a single mapped class in that query).

 From the POV of that message there's just a Column, and no mapped class, 
 actually being passed.  I knew this wouldn't necessarily be very clear but 
 there was no better message I could think of.



  I'll change such queries, just wanted to bring it up to see if you intended 
  it that way.

  (Admittedly, if I recall correctly, when I first added it, I think I was 
  slightly surprised it worked as I expected...)

 amazingly it took me just one second to find the originating issue, which was 
 that the identity map was still being searched, thereby causing the wrong 
 result:

 http://www.sqlalchemy.org/trac/ticket/2144

I suspected that subsequent .get() invocations would return just the
column instead of the object, is that what was happening?


 When something is used in a way I didn't anticipate, my first instinct is 
 often to first make that unanticipated case raise an error.   That means it's 
 covered and people won't do it.   Later, if we decide it should do something, 
 that can be re-introduced.   It's always easier to unlock a dead end with a 
 well thought out approach, than to make multiple, small modifications to an 
 existing behavior.

That is the right approach, I'm convinced.  I always feel I'm fighting
an eternal battle at work against the other guys' mindset of "make the
program not crash if something unexpected happens, so at least there
is a chance it will keep running"...  Can't stand it.  Things like
"try: ... except Exception: pass".  !  It's far better to blow up before
doing damage, even if that damage is only speculative because it
wasn't used in the designed use case.  Later, when that case comes up
you can analyze it and open the dead end, agreed.

Thanks, I've certainly got no issues with the semantics of get()
meaning get the instance.




[sqlalchemy] Re: InvalidRequestError: get() can only be used against a single mapped class.

2011-12-28 Thread Kent
in fact, I modified our Query class after .first() was being abused
out of laziness:

def first(self):
    raise ProgrammingError(
        "Never use .first(); please use .get() or .one()\n"
        ".one() makes sure there is only one return and .get() "
        "returns None if it doesn't exist.\n"
        ".get() is probably better since it avoids a database "
        "roundtrip when the object is already loaded.")
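The distinction that override enforces can be sketched without SQLAlchemy at all (a toy query class below; the names and behavior are illustrative, not SQLAlchemy's API): .one() insists on exactly one row, while .get() returns None for a missing key and can skip the round trip entirely when the object is already in the identity map.

```python
# Toy sketch (not SQLAlchemy) of the .one()/.get() semantics above.
class TinyQuery:
    def __init__(self, rows, identity_map):
        self.rows = rows
        self.identity_map = identity_map

    def one(self):
        # exactly one result, or it's an error
        if len(self.rows) != 1:
            raise ValueError("expected exactly one row, got %d" % len(self.rows))
        return self.rows[0]

    def get(self, key):
        # check the identity map first -- no database round trip
        # when the object is already loaded
        if key in self.identity_map:
            return self.identity_map[key]
        matches = [r for r in self.rows if r["id"] == key]
        return matches[0] if matches else None

q = TinyQuery([{"id": 1}], {2: {"id": 2, "cached": True}})
assert q.one() == {"id": 1}
assert q.get(2)["cached"] is True   # served from the identity map
assert q.get(99) is None            # missing -> None, not an error
```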




[sqlalchemy] Re: 0.7 event migration

2011-12-27 Thread Kent
On Dec 26, 5:12 pm, Michael Bayer mike...@zzzcomputing.com wrote:
 On Dec 26, 2011, at 1:50 PM, Kent wrote:

  Yes, a nice simplification.
  I'm using it to lazyload attributes for objects that aren't in a
  session.  I'm not sure if you pointed me there, I think I found it
  myself, but you helped work out the later details...
  Our app lives inside a webserver framework that, very appropriately,
  in my opinion, only has one session for any given web request.  So,
  for our framework, I can safely lazyload related attributes for
  transient or detached objects by temporarily setting state.session_id:

 heh yes this is exactly what things used to do between like version 0.2 and 
 0.4 :) - if there was no session, it would look in the threadlocal context 
 which somehow was associated with things.     This was one of many implicit 
 decisions/surprises we took out in favor of a behavioral model that's 
 transparent.     For awhile there we also had this crazy session lookup 
 logic that was quasi-optional, nobody knew what it was, and I had similar 
 endless threads with a few interested folks like yourself trying to 
 explain/rationalize it, as well as try to decide how this could be an 
 optional hook.    The library only improved as I took out all kinds of 
 tricks like this which were leftovers from the original 0.1 model that 
 followed the magic everything, everywhere pattern.

 various things come to mind in terms of re-enabling this pattern.

 One is that the session registry would be extensible.  Technically it is 
 right now if you were to override orm.session._state_session().    Then the 
 lazyloaders would treat the parent object as though it were inside of 
 whatever session you had it returning here.    You could even hardwire it to 
 the contextual session itself - then even sharing objects among threads would 
 be selecting the current thread's session.   I could barely consider what 
 level of surprise side effects might/might not occur from this practice, but 
 it is how things used to work.

 If you want to play with it, I can re-add some kind of event for this.    I 
 have a name like detached session in my mind, maybe an event that's only 
 called if there's no session_id, maybe it's just a session registry you stick 
 onto a mapper, though I suppose the event hook is a good place here.

 The other is that the loader callable here is a lot like an event and we 
 could put a hook here that you'd just listen() for.   That wouldn't be very 
 difficult to do, though determining when it's invoked would be tricky.   When 
 the value is locally present in __dict__, I wouldn't want even one function 
 call of latency there checking for a possible event handler.  When the value 
 isn't present, would the event be before callable_() is invoked, or only if 
 callable_() is not present, etc.    Also i played around with how the 
 load_lazy function could be exposed such that the Session could be 
 supplied, but it's not very nice for a public API - some loader callables 
 populate the attribute, others return the function and the AttributeImpl does 
 the population.  Exposing this publicly would mean I can't adjust that 
 without the risk of breaking existing apps.

 So see what happens if you, for the moment, just monkeypatch over 
 orm.session._state_session to do a lookup in a global context if 
 state.session_id isn't set.  If that solves the problem of "I want detached 
 objects to load stuff", for you and everyone else who wants this feature, 
 then whatever - I'm not at all thrilled about this use case but if it's just 
 one trivial hook that I don't need to encourage, then there you go.


Sounds good, I'll look into that.  But I'm curious why this approach
is preferable over the one I've currently got (callable_ seems more
public than _state_session).  I admit, it would free my code from the
passive logic, since I know you were considering changing those to bit
flags, but why is this callable_ approach bad?









def configure_attribute(class_, attr, inst):
    """
    Set up function to be invoked when relations are 'get'ed on possibly
    transient objects, so we can automatically query these related objects
    """
    if isinstance(inst.property, RelationshipProperty):
        default_loader = inst.impl.callable_
        def load_value(state, passive):
            if state.session_id is not None:
                # this is persistent or pending, so
                # return default sqla functionality
                return default_loader(state, passive)
            if passive is attributes.PASSIVE_NO_FETCH:
                return attributes.PASSIVE_NO_RESULT
            # session_id is currently None
            state.session_id = DBSession().hash_key
            retval = default_loader(state, passive)
            state.session_id = None
            return retval
        inst.impl.callable_ = load_value

  ..
  ..
  event.listen(DBEntity, 'attribute_instrument

[sqlalchemy] SAWarning: Object of type Discount not in session, add operation along 'OrderDetailDiscount.discount' won't proceed

2011-12-27 Thread Kent
In http://www.sqlalchemy.org/trac/changeset/7025%3A0a6576abea5b#file1
you added a warning that I'm hitting now.

'when save-update cascade is disabled, or the target object is
otherwise not
present in the session, and collection/scalar changes have taken
place. A warning
is emitted describing the type of operation, the target reference, and
the relationship
description, stating that the operation will not take place.'

Do you think it is appropriate to skip the warning if the object isn't
in the session *and* the save-update cascade is disabled?

With *both* conditions present, it would seem the user has arranged
them as such intentionally (at least in my case).  Either that or they
messed up twice (I suppose that is the more conservative approach,
huh?)

Possibly consider a different warning for that case?  I like the idea
of raising this warning, but only if one or the other conditions
exist, but not both.




Re: [sqlalchemy] SAWarning: Object of type Discount not in session, add operation along 'OrderDetailDiscount.discount' won't proceed

2011-12-27 Thread Kent

On 12/27/2011 2:34 PM, Michael Bayer wrote:

On Dec 27, 2011, at 1:51 PM, Kent wrote:


In http://www.sqlalchemy.org/trac/changeset/7025%3A0a6576abea5b#file1
you added a warning that I'm hitting now.

'when save-update cascade is disabled, or the target object is
otherwise not
present in the session, and collection/scalar changes have taken
place. A warning
is emitted describing the type of operation, the target reference, and
the relationship
description, stating that the operation will not take place.'

Do you think it is appropriate to skip the warning if the object isn't
in the session *and* the save-update cascade is disabled?

With *both* conditions present, it would seem the user has arranged
them as such intentionally (at least in my case).

It basically means it's doing a flush and outright ignoring an object that 
you've assigned to a collection or scalar.   I don't know that the UOW should 
ever break its behavioral contract that badly and not at least say something.


  Either that or they
messed up twice (I suppose that is the more conservative approach,
huh?)

The save-update cascade is often disabled so that someone can have control over 
when an object gets placed into the session.  In that case they want to still 
be adding objects in manually, so actually when save-update is turned off is 
when the warning does actually become very important.


Possibly consider a different warning for that case?  I like the idea
of raising this warning, but only if one or the other conditions
exist, but not both.

so you're not only working with detached objects, getting them to load things 
despite their being detached, which I've maintained is not how the ORM was 
intended to be used, now you're also trying to mix and match them with 
collections that are being flushed ?   Can I get a reminder what the use case 
here is again ?   I hope this isn't all just for performance...

Haha!  I sense you're getting irritated, Mike!  It's got little to do 
with performance... it's more a serialization artifact.  To keep our web 
server truly stateless (well, 99.9% stateless), our objects are stored 
in memory client side until the user clicks save, then they are 
serialized and sent to python, deserialized, turned into transient sa 
objects and merged into the session.  There are certain objects that 
aren't placed in the session because they aren't in the merge cascade 
(purposefully).  These are transient objects... I think that's where 
I'm hitting this warning.


It's been working quite well for about a year since we've been live.  
We've a lot of work to do (Oracle 8 and all), and sqlalchemy is a 
beautiful toolset to make it work.  It's a toolset to build an orm, 
right?  Not necessarily an orm out of the box to fit every need.  So 
don't be offended when I hack at it a bit (trust me, I try to work 
within the public API as it was intended to be used, and I'm the only 
one in the company for whom hacking at it goes against the grain... the 
other guys would have trashed it by now!  I'm the one on your side, 
believe it or not! ;)


I don't mind if you don't want to change the warnings, just wanted your 
input whether it made sense.






[sqlalchemy] Re: 0.7 event migration

2011-12-27 Thread Kent
 So see what happens if you, for the moment, just monkeypatch over 
 orm.session._state_session to do a lookup in a global context if 
 state.session_id isn't set.  If that solves the problem of I want detached 
 objects to load stuff, for you and everyone else who wants this feature, 
 then whatever - I'm not at all thrilled about this use case but if it's just 
 one trivial hook that I don't need to encourage, then there you go.


Please give me a lesson in monkeypatching 101.  This isn't being
invoked:

=
import sqlalchemy.orm.session as session_module
sqla_state_session = session_module._state_session
def _state_session(state):
    """
    for transient/detached objects, so we can automatically query
    these related objects
    """
    return sqla_state_session(state) or DBSession()
setattr(session_module, '_state_session', _state_session)
=

I presume there are already references to _state_session before I
change it.


Also, will this cause other side effects, such as obj in DBSession
reporting True when it used to report False or the orm internals being
confused by the return of this value?

Thanks again!




Re: [sqlalchemy] 0.7 event migration

2011-12-26 Thread Kent

Documentation for AttributeImpl.callable_ still reads:

    "optional function which generates a callable based on a parent
    instance, which produces the default values for a scalar or
    collection attribute when it's first accessed, if not present
    already."

But it seems it is no longer the function which generates a callable, 
but rather *is* the callable itself now, accepting both the state and 
the passive parameters.


It used to be two stages, first callable_ accepts a state and then that 
returns a callable which accepted the passive parameter.
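The change being described, as I understand it, can be sketched abstractly (plain Python; the function names below are illustrative, not SQLAlchemy's actual signatures): the old two-stage factory collapsed into a single callable taking both arguments.

```python
# Old style: callable_ was a factory -- given the state, it returned a
# loader which then accepted the passive flag.
def old_callable_(state):
    def loader(passive):
        return ("load", state, passive)
    return loader

# New style: one callable receiving (state, passive) directly.
def new_callable_(state, passive):
    return ("load", state, passive)

# Both spellings produce the same load for the same inputs.
assert old_callable_("state")("passive") == new_callable_("state", "passive")
```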


Can you briefly summarize how this is meant to work now? (I think the 
doc string is wrong now??)


On 12/25/2011 10:31 AM, Michael Bayer wrote:

yes, a few changed names: reconstruct_instance, init_instance, init_failed.

On Dec 24, 2011, at 7:42 PM, Kent Bower wrote:


Right.  And reconstruct_instance() was renamed load()?

On 12/24/2011 5:56 PM, Michael Bayer wrote:

On Dec 24, 2011, at 10:04 AM, Kent wrote:


As the migration guide suggests, I'd like to embrace the events API.
Is mapper event load() invoked at exactly the same place as the
deprecated reconstruct_instance() ?

yeah nothing has been moved.   All the places where the internals would call 
XXXExtension.xxx_event() were just replaced with self.dispatch.xxx_event(), and 
the old Extension classes are invoked via an adapter to the new system.   All 
unit tests for the extension system remain in place and haven't been modified.











