Re: [sqlalchemy] Dynamic query

2011-05-09 Thread Enrico Morelli
On Fri, 6 May 2011 17:11:39 +0100
King Simon-NFHD78 simon.k...@motorolasolutions.com wrote:

  -Original Message-
  From: sqlalchemy@googlegroups.com
  [mailto:sqlalchemy@googlegroups.com] On Behalf Of Enrico Morelli
  Sent: 06 May 2011 16:20
  To: sqlalchemy
  Subject: [sqlalchemy] Dynamic query
  
  Dear all,
  
  I have a form where people fill in one or more fields to search in a db.
  For the moment I solve it using a lot of if statements and a lot of
  different queries based on the filled-in fields. Something like this:
  
  if start_date and end_date and instrument and details and technician:
      c.results = Session.query(Repairs).filter(and_(
          Repairs.start_date >= start_date,
          Repairs.end_date <= end_date,
          Repairs.instrument_id == instrument,
          Repairs.details.like('%%%s%%' % details),
          Repairs.technician.like('%%%s%%' % technician),
      )).order_by('start_date').all()
  
  elif start_date and end_date and instrument and details:
      c.results = Session.query(Repairs).filter(and_(
          Repairs.start_date >= start_date,
          Repairs.end_date <= end_date,
          Repairs.instrument_id == instrument,
          Repairs.details.like('%%%s%%' % details),
      )).order_by('start_date').all()
  
  and so on for each combination (for 5 fields I have 20 queries!). Is there
  a way to do that in a more dynamic way?
  
 
 You can call Query.filter multiple times. Here's an example:
 
 query = Session.query(Repairs)
 
 if start_date:
 query = query.filter(Repairs.start_date >= start_date)
 
 if end_date:
 query = query.filter(Repairs.end_date <= end_date)
 
 if instrument:
 query = query.filter(Repairs.instrument_id == instrument)
 
 # etc.
 
 results = query.order_by('start_date').all()
 
 
 Each filter condition will be combined using AND.
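 
 For the record, the same idea wrapped up as a small helper covering all five
 fields from the question (a sketch only; the helper name is just for
 illustration, and the >= / <= comparisons for the date range are an
 assumption about the intended search):
 
 def search_repairs(start_date=None, end_date=None, instrument=None,
                    details=None, technician=None):
     # each condition is applied only if the corresponding form field
     # was filled in; filter() calls are ANDed together
     query = Session.query(Repairs)
     if start_date:
         query = query.filter(Repairs.start_date >= start_date)
     if end_date:
         query = query.filter(Repairs.end_date <= end_date)
     if instrument:
         query = query.filter(Repairs.instrument_id == instrument)
     if details:
         query = query.filter(Repairs.details.like('%%%s%%' % details))
     if technician:
         query = query.filter(Repairs.technician.like('%%%s%%' % technician))
     return query.order_by('start_date').all()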
 
 Hope that helps,
 
 Simon
 
THANKS!!! Works very fine :-))

-- 
---
   (o_
(o_//\  Coltivate Linux che tanto Windows si pianta da solo.
(/)_   V_/_
+--+
| ENRICO MORELLI |  email: more...@cerm.unifi.it   |
| * *   *   *|  phone: +39 055 4574269 |
|  University of Florence|  fax  : +39 055 4574253 |
|  CERM - via Sacconi, 6 -  50019 Sesto Fiorentino (FI) - ITALY|
+--+

-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalchemy@googlegroups.com.
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en.



[sqlalchemy] Queries issued with 'passive_deletes'

2011-05-09 Thread Israel Ben Guilherme Fonseca
I don't get why selects are issued for the children objects when
passive_deletes=False. Wouldn't it be better to just issue direct deletes, and
maybe use subselects in the where clause of those deletes (for nested
associations) when appropriate? It would solve the overhead problem of
selecting large collections, and it would mimic the ON DELETE CASCADE that
is expected to exist when using passive_deletes=True, for databases that don't
support this feature.

Thanks in advance for the explanation,

Israel




Re: [sqlalchemy] Queries issued with 'passive_deletes'

2011-05-09 Thread Michael Bayer

On May 9, 2011, at 7:57 AM, Israel Ben Guilherme Fonseca wrote:

 I didn't get why issuing selects for the children objects when 
 passive_deletes=False. Wouldn't be better just issue direct deletes, and 
 maybe using subselects in the where clause of these deletes (for nested 
 associations) when approriate? It would solve the overhead problem of the 
 selecting large collections, and it would mimic the ON DELETE CASCADE that is 
 expected to exist when using passive_delete=True for databases that don't 
 support this feature.
 
 Thanks in advance for the explanation,

ON DELETE CASCADE is provided by all databases that SQLAlchemy supports - see 
referential integrity in this chart:

http://en.wikipedia.org/wiki/Comparison_of_relational_database_management_systems#Fundamental_features

the only exception being MySQL MyISAM.   SQLite added foreign keys some 
releases ago (they are optional but can be enabled).

Given that ON DELETE CASCADE is already provided by all databases and should be 
used when lots of cascading deletes are needed, the ratio of usefulness to 
effort (the effort being significant, in that it involves a considerably more 
complex approach within the unit of work internals as well as a lot of new 
tests) doesn't place a feature like this at high priority.

It would not be possible for this behavior to be used in all cases; it would 
only be an optimization for the cases where it's possible.   Consider the case 
where cycles exist - parent -> child -> subchild -> subsubchild, and suppose 
some rows in child reference subsubchild.   The UOW detects the potential for 
cycles based on the graph of mappings, and when it has access to all the 
individual rows (like the database does when ON DELETE CASCADE works) it breaks 
cycles into individual groups so that rows are deleted in the proper order.   
A query like DELETE FROM subsubchild WHERE parent_id IN (SELECT id FROM 
subchild WHERE parent_id IN (SELECT id FROM child WHERE parent_id=x)) would 
otherwise fail.

The current behavior also has the advantage that objects presently in the 
Session, but without their collection-based relationships loaded and linking 
them together in memory, are appropriately updated state-wise, as their 
collection membership is determined and cascaded members are marked as deleted 
after a flush.   While passive_deletes=True turns this off, some applications 
with passive_deletes=False may be relying upon this.  Changing the cascade 
behavior to not be aware of individual rows when cycles don't exist means that 
the state management of individual objects in a session will change based on 
the mappings.  An application someday removes a relationship that was linking 
subsubchild to child, and suddenly the Session stops marking subsubchild 
objects as deleted during a cascade, instead waiting until commit() is called 
and all attributes are expired.   This is a subtle side effect arising from 
seemingly unrelated mapping adjustments, which makes it tougher for us to make 
this new optimization a default behavior, whereas the difference in behavior 
between passive_deletes=True|False is much easier to understand and anticipate.

So there's potential for surprises, new complexity, many more tests needed, a 
feature that is only an optimization and will probably have to remain optional 
in all cases - all of which is redundant versus pretty much every database's 
own ability to do the same thing more efficiently and predictably via ON DELETE 
CASCADE.   And you can even use query.delete() if you really need to delete 
lots of things quickly and you don't have CASCADE immediately available.
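
For reference, a minimal sketch of the combination being described here - 
ON DELETE CASCADE declared on the foreign key, together with 
passive_deletes=True on the relationship (the table and class names are 
illustrative only):

from sqlalchemy import Column, Integer, ForeignKey
from sqlalchemy.orm import relationship
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class Parent(Base):
    __tablename__ = 'parent'
    id = Column(Integer, primary_key=True)

    # passive_deletes=True: don't SELECT the children when a Parent is
    # deleted; let the database's ON DELETE CASCADE (declared below)
    # remove the child rows.
    children = relationship("Child", cascade="all, delete-orphan",
                            passive_deletes=True)

class Child(Base):
    __tablename__ = 'child'
    id = Column(Integer, primary_key=True)
    parent_id = Column(Integer,
                       ForeignKey('parent.id', ondelete='CASCADE'),
                       nullable=False)

On a backend that ignores the constraint (e.g. MyISAM), a bulk query.delete() 
against the child table is the fallback mentioned above.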

This also might be a good addition for the FAQ which is currently being cleaned 
up.




[sqlalchemy] Problem with relationships and polymorphism

2011-05-09 Thread Matthias

[EDIT: Duh, forgot the attachment. Here it is.]

Hello,

I ran into a problem with relationships and polymorphism. I've attached a
test case which runs on its own and shows my models.

The version as given results in an exception for me:

ArgumentError: Could not determine join condition between parent/child
tables on relationship UserAddresses.user.  Specify a 'primaryjoin'
expression.  If 'secondary' is present, 'secondaryjoin' is needed as well.

So I go ahead and add the primaryjoins:

primaryjoin = (User.id == user_id)
primaryjoin = (Address.id == address_id)

With the primaryjoin in place the code works in 0.7b4, but it throws
another exception in 0.6.6:

ArgumentError: Could not determine relationship direction for primaryjoin
condition 'content.id = useraddresses.user_id', on relationship
UserAddresses.user. Ensure that the referencing Column objects have a
ForeignKey present, or are otherwise part of a ForeignKeyConstraint on
their parent Table, or specify the foreign_keys parameter to this
relationship.

Now my main question is: Why do I need to add the primaryjoins at all?
Shouldn't SQLAlchemy be able to generate them from the information given?
My other question is: Is there any specific reason why it's working in
0.7b4 and not in 0.6.6?

It seems like I am missing something here.

-Matthias




sqlatest.py
Description: Binary data



Re: [sqlalchemy] Problem with relationships and polymorphism

2011-05-09 Thread Michael Bayer

On May 9, 2011, at 11:16 AM, Matthias wrote:

 [EDIT: Duh, forgot the attachment. Here it is.]
 
 Hello,
 
 I ran into a problem with relationships and polymorphism. I've attached a
 test case which runs on its own and shows my models.
 
 The version as given results in an exception for me:
 
 ArgumentError: Could not determine join condition between parent/child
 tables on relationship UserAddresses.user.  Specify a 'primaryjoin'
 expression.  If 'secondary' is present, 'secondaryjoin' is needed as well.
 
 So I go ahead and add the primaryjoins:
 
 primaryjoin = (User.id == user_id)
 primaryjoin = (Address.id == address_id)
 
 With the primaryjoin in place the code works in 0.7b4, but it throws
 another exception in 0.6.6:
 
 ArgumentError: Could not determine relationship direction for primaryjoin
 condition 'content.id = useraddresses.user_id', on relationship
 UserAddresses.user. Ensure that the referencing Column objects have a
 ForeignKey present, or are otherwise part of a ForeignKeyConstraint on
 their parent Table, or specify the foreign_keys parameter to this
 relationship.
 
 Now my main question is: Why do I need to add the primaryjoins at all?

The selectable to which UserAddresses is mapped, that is, a join of content to 
useraddresses, can join to the selectables to which User and Address are 
mapped, that is, a join of content to users or addresses, in more than one 
way.   users.id mapped to User is a foreign key to content.id mapped to 
UserAddresses, and useraddresses.user_id mapped to UserAddresses is a foreign 
key to users.id mapped to User.   There's an argument to be made that it could 
try to make assumptions in this kind of situation, and perhaps someday such a 
feature will be added.  But such logic would very likely be difficult to 
implement.   The derivation that relationship() already attempts is fairly 
complicated to perform and has taken many years to get (mostly) right, and it 
tries to stick only to things it can be 100% sure about.   Assuming which 
foreign key to use starts to enter the realm of guessing, so I'm not in a 
hurry to add that feature.


 Shouldn't SQLAlchemy be able to generate them from the information given?
 My other question is: Is there any specific reason why it's working in
 0.7b4 and not in 0.6.6?

when you create the primaryjoin User.id==user_id, in 0.6 this indicates 
content.id==useraddresses.user_id, as you can see from the message (and it is 
not what you intended), whereas in 0.7 it indicates 
users.id==useraddresses.user_id.   This was ticket #1892 and a full 
explanation is in the migration guide here: 
http://www.sqlalchemy.org/trac/wiki/07Migration#Mappedcolumnattributesreferencethemostspecificcolumnfirst
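
For illustration, a minimal self-contained sketch of the kind of mapping in 
question, with the join conditions and foreign key columns spelled out 
explicitly (the model is reconstructed from this thread, so the details of the 
attached test case are assumptions):

from sqlalchemy import Column, Integer, String, ForeignKey, create_engine
from sqlalchemy.orm import relationship
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class Content(Base):
    __tablename__ = 'content'
    id = Column(Integer, primary_key=True)
    type = Column(String(50))
    __mapper_args__ = {'polymorphic_on': type,
                       'polymorphic_identity': 'content'}

class User(Content):
    __tablename__ = 'users'
    id = Column(Integer, ForeignKey('content.id'), primary_key=True)
    __mapper_args__ = {'polymorphic_identity': 'user'}

class Address(Content):
    __tablename__ = 'addresses'
    id = Column(Integer, ForeignKey('content.id'), primary_key=True)
    __mapper_args__ = {'polymorphic_identity': 'address'}

class UserAddresses(Content):
    __tablename__ = 'useraddresses'
    id = Column(Integer, ForeignKey('content.id'), primary_key=True)
    user_id = Column(Integer, ForeignKey('users.id'))
    address_id = Column(Integer, ForeignKey('addresses.id'))
    __mapper_args__ = {'polymorphic_identity': 'useraddresses'}

    # Spell out the join and the foreign key columns explicitly, so that
    # relationship() does not have to choose between the content.id link
    # and the users.id / addresses.id links.
    user = relationship(User, primaryjoin=user_id == User.id,
                        foreign_keys=[user_id])
    address = relationship(Address, primaryjoin=address_id == Address.id,
                           foreign_keys=[address_id])

Base.metadata.create_all(create_engine('sqlite://'))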





[sqlalchemy] Re: deferred column sometimes lazily loaded, sometimes not

2011-05-09 Thread Bill Curtis
 I don't understand the ultimate issue, unless it's that you're getting the
 wrong data back.  If it's just that the data is being cached instead of it
 loading deferred, then yes, that's just the caching query happening.   It
 would need to be more careful about the state it's placing in the cache -
 like, when the object is serialized for caching, have it expire those
 attributes you don't want in the cache.


yeah, I should have been clearer.  There's no issue in the sense of any
misbehavior -- only what was (formerly) mysterious behavior.   I'm just
trying to acquire a more thorough understanding of how SQLAlchemy works.

FWIW, my higher-level concerns are around how to find and invalidate objects
 in the secondary cache, once they have become dirty.

 the way the example works right now, you need to create a Query that
 represents the same cache key as the one that you'd like to clear out, and
 invalidate it.  If you're looking to locate objects based on identity, you'd
 probably want to change the scheme in which data is cached - probably cache
 result sets which serialize only a global identifier for each object, then
 store each object individually under those individual identifiers.   It
 would be slower on a get, since it means a get for the result plus a get for
 each member, but it would allow any identity to be cleared globally.   The
 get for each member could be ameliorated by the fact that they'd be in the
 identity map first, before going out to the cache.

 Not at all simple, and it's why 2nd level caching is not a built-in feature!
   Too many ways to do it.


this is very helpful... thanks!

-bill
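
As a footnote, a rough sketch of the identity-keyed scheme described above, 
with a plain dict standing in for the real cache backend (every name here is 
illustrative; none of it is SQLAlchemy API):

cache = {}

def identity_key(obj):
    # e.g. ('User', 5) -- stable across processes, unlike id(obj)
    return (type(obj).__name__, obj.id)

def cache_result(result_key, objects):
    # the result set stores only identifiers ...
    cache[result_key] = [identity_key(o) for o in objects]
    # ... while each object is cached individually under its own key, so
    # that a single identity can later be invalidated globally
    for o in objects:
        cache[identity_key(o)] = o

def load_result(session, result_key):
    keys = cache.get(result_key)
    if keys is None:
        return None
    # one extra cache get per member; merge() is cheap for objects that
    # are already present in the session's identity map
    return [session.merge(cache[k], load=False) for k in keys]

def invalidate(obj):
    cache.pop(identity_key(obj), None)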




Re: [sqlalchemy] Problem with relationships and polymorphism

2011-05-09 Thread Matthias

On 09.05.2011 at 17:50, Michael Bayer mike...@zzzcomputing.com wrote:



On May 9, 2011, at 11:16 AM, Matthias wrote:

[EDIT: Duh, forgot the attachment. Here it is.]

Hello,

I ran into a problem with relationships and polymorphism. I've attached a
test case which runs on its own and shows my models.

The version as given results in an exception for me:

ArgumentError: Could not determine join condition between parent/child
tables on relationship UserAddresses.user.  Specify a 'primaryjoin'
expression.  If 'secondary' is present, 'secondaryjoin' is needed as well.

So I go ahead and add the primaryjoins:

primaryjoin = (User.id == user_id)
primaryjoin = (Address.id == address_id)

With the primaryjoin in place the code works in 0.7b4, but it throws
another exception in 0.6.6:

ArgumentError: Could not determine relationship direction for primaryjoin
condition 'content.id = useraddresses.user_id', on relationship
UserAddresses.user. Ensure that the referencing Column objects have a
ForeignKey present, or are otherwise part of a ForeignKeyConstraint on
their parent Table, or specify the foreign_keys parameter to this
relationship.

Now my main question is: Why do I need to add the primaryjoins at all?


The selectable to which UserAddresses is mapped, that is, a join of
content to useraddresses, can join to the selectables to which User and
Address are mapped, that is, a join of content to users or addresses, in
more than one way.   users.id mapped to User is a foreign key to content.id
mapped to UserAddresses, and useraddresses.user_id mapped to UserAddresses
is a foreign key to users.id mapped to User.   There's an argument to be
made that it could try to make assumptions in this kind of situation, and
perhaps someday such a feature will be added.  But such logic would very
likely be difficult to implement.   The derivation that relationship()
already attempts is fairly complicated to perform and has taken many years
to get (mostly) right, and it tries to stick only to things it can be 100%
sure about.   Assuming which foreign key to use starts to enter the realm
of guessing, so I'm not in a hurry to add that feature.


Thanks for your really informative answer. I can see the point now. Maybe
instead of deriving this information indirectly, it would be better if one
could express it right from the start.


Shouldn't SQLAlchemy be able to generate them from the information given?
My other question is: Is there any specific reason why it's working in
0.7b4 and not in 0.6.6?


when you create the primaryjoin User.id==user_id, in 0.6 this indicates
content.id==useraddresses.user_id, as you can see from the message (and it
is not what you intended), whereas in 0.7 it indicates
users.id==useraddresses.user_id.   This was ticket #1892 and a full
explanation is in the migration guide here:
http://www.sqlalchemy.org/trac/wiki/07Migration#Mappedcolumnattributesreferencethemostspecificcolumnfirst


Ahh I see, SQLAlchemy is getting better every day :) Thank you.

-Matthias




Re: [sqlalchemy] Queries issued with 'passive_deletes'

2011-05-09 Thread Israel Ben Guilherme Fonseca
Well, now it does make sense. :)

Thanks for the explanation.

2011/5/9 Michael Bayer mike...@zzzcomputing.com


 On May 9, 2011, at 7:57 AM, Israel Ben Guilherme Fonseca wrote:

  I didn't get why issuing selects for the children objects when
 passive_deletes=False. Wouldn't be better just issue direct deletes, and
 maybe using subselects in the where clause of these deletes (for nested
 associations) when approriate? It would solve the overhead problem of the
 selecting large collections, and it would mimic the ON DELETE CASCADE that
 is expected to exist when using passive_delete=True for databases that don't
 support this feature.
 
  Thanks in advance for the explanation,

 ON DELETE CASCADE is provided by all databases that SQLAlchemy supports -
 see referential integrity in this chart:


 http://en.wikipedia.org/wiki/Comparison_of_relational_database_management_systems#Fundamental_features

 the only exception being, MySQL MyISAM.   SQLite added foreign keys some
 releases ago (they are optional but can be enabled).

 Given that ON DELETE CASCADE is already provided by all databases and
 should be used when lots of cascading deletes are needed, the ratio of
 usefulness to effort, which would be significant in that it involves a
 significantly more complex approach within the unit of work internals as
 well as a lot of new tests, doesn't place a feature like this in high
 priority.

 It would not be possible for this behavior to be used in all cases, it
 would only be an optimizing case when its possible.Consider the case
 where cycles exist - parent-child-subchild-subsubchild, and suppose some
 rows in child reference subsubchild.   The UOW detects the potential for
 cycles based on the graph of mappings, and when it has access to all the
 individual rows (like the database does when ON DELETE CASCADE works)
  breaks cycles into individual groups so that rows are deleted in the proper
 order.   A query like DELETE FROM subsubchild WHERE parent_id in (SELECT id
 from subchild where parent_id in (SELECT id from child where parent_id=x))
 otherwise would fail.

 The current behavior also has the advantage that objects presently in the
 Session, but without their collection-based relationships loaded and linking
 them together in memory, are appropriately updated state-wise, as their
 collection membership is determined before being marked cascaded members as
 deleted after a flush.While passive_deletes=True turns this off, some
 applications with passive_deletes=False may be relying upon this.  Changing
 the cascade behavior to not be aware of individual rows when cycles don't
 exist mean that the state management of individual objects in a session will
 change based on mappings.  An application someday removes a relationship
 that was linking subsubchild to child, and suddenly the Session begins
 to not mark subsubchild objects as deleted during a cascade, instead
 waiting until commit() is called and all attributes are expired.This is
 a subtle side effect arising from seemingly unrelated mapping adjustments -
 this makes it tougher for us to make this new optimization a default
 behavior.Whereas the difference in behavior between
 passive_deletes=True|False is much easier to understand and anticipate.

 So there's potential for surprises, new complexity, many more tests needed,
 feature is only an optimization, and will probably have to remain optional
 in all cases, all of which is redundant versus pretty much every database's
 own ability to do so more efficiently and predictably via ON DELETE CASCADE.
And you can even use query.delete() if you really need to delete lots of
 things quickly and you don't have CASCADE immediately available.

 This also might be a good addition for the FAQ which is currently being
 cleaned up.



