[sqlalchemy] Re: moving an object

2009-04-18 Thread jean-philippe dutreve

Hi, any chance to have a fix for this?

On Apr 6, 17:16, Michael Bayer mike...@zzzcomputing.com wrote:
 OK in fact this can possibly be implemented if the initiator passed
 during attribute mutation operations consisted of not just an
 AttributeImpl but also a target instance, so that append/remove/set
 operations can have the information they need continue down the chain of
 events without exiting prematurely.  Such as this test below:

 from sqlalchemy import *
 from sqlalchemy.orm import *
 from sqlalchemy.ext.declarative import declarative_base

 Base = declarative_base()

 class Parent(Base):
     __tablename__ = 'parent'

     id = Column(Integer, primary_key=True)
     stuff = relation('Stuff', backref='parent')

 class Stuff(Base):
     __tablename__ = 'stuff'

     id = Column(Integer, primary_key=True)
     parent_id = Column(Integer, ForeignKey('parent.id'))

 p1 = Parent()
 p2 = Parent()
 s1 = Stuff()

 p1.stuff.append(s1)
 p2.stuff.append(s1)
 assert s1.parent is p2
 assert s1 not in p1.stuff
 assert s1 in p2.stuff

 can be made to pass if we say this:

 Index: lib/sqlalchemy/orm/attributes.py
 ===================================================================
 --- lib/sqlalchemy/orm/attributes.py    (revision 5901)
 +++ lib/sqlalchemy/orm/attributes.py    (working copy)
 @@ -679,9 +679,6 @@
          collection.append_with_event(value, initiator)

      def remove(self, state, value, initiator, passive=PASSIVE_OFF):
 -        if initiator is self:
 -            return
 -
          collection = self.get_collection(state, passive=passive)
          if collection is PASSIVE_NORESULT:
              self.fire_remove_event(state, value, initiator)

 so some more complete way of not exiting the event loop too soon would
 need to be implemented.

 Jason, any comments on this ?

 jean-philippe dutreve wrote:

  It would be fine/safe for accountA to have the entry removed BEFORE any
  reload (with an explicit refresh/expire/commit). I can't remember, but a
  previous version of SA had this behavior.

  On Apr 6, 4:42 pm, Michael Bayer mike...@zzzcomputing.com wrote:
  im slightly confused.  the backref should be automatically reparenting,
  not sure if ordering_list interferes with that, but in any case after
  you
  flush()/expire() or commit(), it will definitely happen since all
  collections will load fresh.

  Mike Conley wrote:
   So, we would like SA to have some kind of operation like reparent_item()
   that would move an object from one relation to another.
   It seems to me that this is better handled as a piece of application
   business logic. In this case, provide a move_entry() function that
   properly encapsulates inserting and removing the entry in a single
   operation. I can imagine that there would be many variations on business
   rules for moving an item that would be difficult to encapsulate in a
   common operation within SA.

   --
   Mike Conley

   On Mon, Apr 6, 2009 at 2:10 AM, jean-philippe dutreve
   jdutr...@gmail.comwrote:

    Currently, I use accountA.remove(entry) and I have rewritten insort to
    bypass the bug you mentioned.

    So, AFAIK, while an entry has only one account (via
    entry.account_id), SA can't remove the first relation.
    It's dangerous, because if a developer forgets to remove the first
    relation, the entry is contained in 2 accounts temporarily.
    It can lead to false computation (when summing balances, for instance).

   On Apr 5, 22:03, jason kirtland j...@discorporate.us wrote:
jean-philippe dutreve wrote:
 Hi all,

 I wonder if SA can handle this use case:

 An Account can contain Entries ordered by 'position' attribute.

  mapper(Account, table_accounts, properties = dict(
      entries = relation(Entry, lazy=True,
          collection_class=ordering_list('position'),
          order_by=[table_entries.c.position],
          passive_deletes='all', cascade='save-update',
          backref=backref('account', lazy=False),
      ),
  ))

 I'd like to move an entry from accountA to accountB and let SA
   remove
 the link between the entry and accountA:

  entry = accountA.entries[0]
  insort_right(accountB.entries, entry)
  assert entry not in accountA.entries    # False: entry is still in accountA

  Is it possible?

Try removing the entry from accountA:

 entry = accountA.pop(0)
 ...

Also beware that bisect insort has a bug that prevents it from
  working
properly with list subclasses like ordering_list (or any SA
  list-based
collection).  I think it's fixed in Python 3.0, not sure if the fix
   was
backported to 2.x.
--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalchemy@googlegroups.com
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en
-~--~~~~--~~--~--~---

[sqlalchemy] Re: moving an object

2009-04-06 Thread jean-philippe dutreve

Currently, I use accountA.remove(entry) and I have rewritten insort to
bypass the bug you mentioned.

So, AFAIK, while an entry has only one account (via
entry.account_id), SA can't remove the first relation.
It's dangerous, because if a developer forgets to remove the first
relation, the entry is contained in 2 accounts temporarily.
It can lead to false computation (when summing balances, for instance).

On Apr 5, 22:03, jason kirtland j...@discorporate.us wrote:
 jean-philippe dutreve wrote:
  Hi all,

  I wonder if SA can handle this use case:

  An Account can contain Entries ordered by 'position' attribute.

  mapper(Account, table_accounts, properties = dict(
      entries = relation(Entry, lazy=True,
          collection_class=ordering_list('position'),
          order_by=[table_entries.c.position],
          passive_deletes='all', cascade='save-update',
          backref=backref('account', lazy=False),
      ),
  ))

  I'd like to move an entry from accountA to accountB and let SA remove
  the link between the entry and accountA:

  entry = accountA.entries[0]
  insort_right(accountB.entries, entry)
  assert entry not in accountA.entries    # False: entry is still in accountA

  Is it possible?

 Try removing the entry from accountA:

  entry = accountA.pop(0)
  ...

 Also beware that bisect insort has a bug that prevents it from working
 properly with list subclasses like ordering_list (or any SA list-based
 collection).  I think it's fixed in Python 3.0, not sure if the fix was
 backported to 2.x.



[sqlalchemy] Re: moving an object

2009-04-06 Thread jean-philippe dutreve

The object doesn't move from one relation to another: it is the
same relation, 'entries', but on a different parent. This is common to
any parent/child pattern, not a business-specific case.

The current behavior is not consistent: as soon as we commit()
+ refresh(), the object is no longer on accountA (entry.account_id
has changed). This end result should be reflected in memory just after
the change too (i.e. before commit).
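The reparenting pattern under discussion can be sketched in plain Python. Account, Entry, and the date-ordered positions below are illustrative stand-ins for the mapped classes in this thread, not SA API; the point is the shape of the operation (remove first, insert sorted, then renumber), so there is no window in which both accounts hold the entry:

```python
import bisect

class Entry:
    """Illustrative stand-in for the mapped Entry class."""
    def __init__(self, date):
        self.date = date
        self.account = None
        self.position = None

class Account:
    """Illustrative stand-in for the mapped Account class."""
    def __init__(self, name):
        self.name = name
        self.entries = []

def move_entry(entry, src, dst):
    """Reparent entry from src to dst as a single operation."""
    src.entries.remove(entry)
    for i, e in enumerate(src.entries):          # keep src positions dense
        e.position = i
    keys = [e.date for e in dst.entries]          # bisect on plain keys,
    idx = bisect.bisect_right(keys, entry.date)   # not on the collection
    dst.entries.insert(idx, entry)
    entry.account = dst                           # maintain the backref by hand
    for i, e in enumerate(dst.entries):
        e.position = i

a, b = Account('A'), Account('B')
e1, e2 = Entry(1), Entry(2)
a.entries, e1.account, e1.position = [e1], a, 0
b.entries, e2.account, e2.position = [e2], b, 0

move_entry(e1, a, b)
assert e1 not in a.entries          # removed before the new parent sees it
assert e1.account is b
assert [e.position for e in b.entries] == [0, 1]
```

With real mapped classes the removal and the backref assignment would go through SA's collection events; the helper only fixes the ordering of the two steps, so a balance computed mid-move never counts the entry twice.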

On Apr 6, 1:10 pm, Mike Conley mconl...@gmail.com wrote:
 So, we would like SA to have some kind of operation like reparent_item()
 that would move an object from one relation to another.
 It seems to me that this is better handled as a piece of application
 business logic. In this case, provide a move_entry() function that
 properly encapsulates inserting and removing the entry in a single
 operation. I can imagine that there would be many variations on business
 rules for moving an item that would be difficult to encapsulate in a common
 operation within SA.

 --
 Mike Conley

 On Mon, Apr 6, 2009 at 2:10 AM, jean-philippe dutreve 
 jdutr...@gmail.comwrote:



   Currently, I use accountA.remove(entry) and I have rewritten insort to
   bypass the bug you mentioned.

   So, AFAIK, while an entry has only one account (via
   entry.account_id), SA can't remove the first relation.
   It's dangerous, because if a developer forgets to remove the first
   relation, the entry is contained in 2 accounts temporarily.
   It can lead to false computation (when summing balances, for instance).

   On Apr 5, 22:03, jason kirtland j...@discorporate.us wrote:
   jean-philippe dutreve wrote:
Hi all,

I wonder if SA can handle this use case:

An Account can contain Entries ordered by 'position' attribute.

mapper(Account, table_accounts, properties = dict(
    entries = relation(Entry, lazy=True, collection_class=ordering_list('position'),
        order_by=[table_entries.c.position],
        passive_deletes='all', cascade='save-update',
        backref=backref('account', lazy=False),
    ),
))

I'd like to move an entry from accountA to accountB and let SA remove
the link between the entry and accountA:

    entry = accountA.entries[0]
    insort_right(accountB.entries, entry)
    assert entry not in accountA.entries    # False: entry is still in accountA

 Is it possible?

   Try removing the entry from accountA:

        entry = accountA.pop(0)
        ...

   Also beware that bisect insort has a bug that prevents it from working
   properly with list subclasses like ordering_list (or any SA list-based
   collection).  I think it's fixed in Python 3.0, not sure if the fix was
   backported to 2.x.



[sqlalchemy] Re: moving an object

2009-04-06 Thread jean-philippe dutreve

It would be fine/safe for accountA to have the entry removed BEFORE any
reload (with an explicit refresh/expire/commit). I can't remember, but a
previous version of SA had this behavior.

On Apr 6, 4:42 pm, Michael Bayer mike...@zzzcomputing.com wrote:
 im slightly confused.  the backref should be automatically reparenting,
 not sure if ordering_list interferes with that, but in any case after you
 flush()/expire() or commit(), it will definitely happen since all
 collections will load fresh.

 Mike Conley wrote:
  So, we would like SA to have some kind of operation like reparent_item()
  that would move an object from one relation to another.
  It seems to me that this is better handled as a piece of application
  business logic. In this case, provide a move_entry() function that
  properly encapsulates inserting and removing the entry in a single
  operation. I can imagine that there would be many variations on business
  rules for moving an item that would be difficult to encapsulate in a
  common
  operation within SA.

  --
  Mike Conley

  On Mon, Apr 6, 2009 at 2:10 AM, jean-philippe dutreve
  jdutr...@gmail.comwrote:

  Currently, I use accountA.remove(entry) and I have rewritten insort to
  bypass the bug you mentioned.

  So, AFAIK, while an entry has only one account (via
  entry.account_id), SA can't remove the first relation.
  It's dangerous, because if a developer forgets to remove the first
  relation, the entry is contained in 2 accounts temporarily.
  It can lead to false computation (when summing balances, for instance).

  On Apr 5, 22:03, jason kirtland j...@discorporate.us wrote:
   jean-philippe dutreve wrote:
Hi all,

I wonder if SA can handle this use case:

An Account can contain Entries ordered by 'position' attribute.

mapper(Account, table_accounts, properties = dict(
    entries = relation(Entry, lazy=True, collection_class=ordering_list('position'),
        order_by=[table_entries.c.position],
        passive_deletes='all', cascade='save-update',
        backref=backref('account', lazy=False),
    ),
))

I'd like to move an entry from accountA to accountB and let SA
  remove
the link between the entry and accountA:

    entry = accountA.entries[0]
    insort_right(accountB.entries, entry)
     assert entry not in accountA.entries    # False: entry is still in accountA

 Is it possible?

   Try removing the entry from accountA:

        entry = accountA.pop(0)
        ...

   Also beware that bisect insort has a bug that prevents it from working
   properly with list subclasses like ordering_list (or any SA list-based
   collection).  I think it's fixed in Python 3.0, not sure if the fix
  was
   backported to 2.x.



[sqlalchemy] moving an object

2009-04-05 Thread jean-philippe dutreve

Hi all,

I wonder if SA can handle this use case:

An Account can contain Entries ordered by 'position' attribute.

mapper(Account, table_accounts, properties = dict(
    entries = relation(Entry, lazy=True,
        collection_class=ordering_list('position'),
        order_by=[table_entries.c.position],
        passive_deletes='all', cascade='save-update',
        backref=backref('account', lazy=False),
    ),
))

I'd like to move an entry from accountA to accountB and let SA remove
the link between the entry and accountA:

entry = accountA.entries[0]
insort_right(accountB.entries, entry)
assert entry not in accountA.entries    # False: entry is still in accountA

Is it possible?

Thank you,
jean-philippe



[sqlalchemy] Re: ordering_list performance

2008-09-23 Thread jean-philippe dutreve

My use case is a bit different: new_entries can be placed anywhere
in the existing SA list, not only at the end (it actually depends on
the entry date).

On Sep 22, 21:20, jason kirtland [EMAIL PROTECTED] wrote:
 Ah, looking more closely i see you're replacing self.entries with a
 list, not insorting into a SA list collection- that's totally ok.  It
 might squeeze a little more speed out to do:

 updated_entries = list(self.entries) + new_entries
 base = len(self.entries)
 for idx, entry in enumerate(new_entries):
     entry.position = base + idx
 self.entries = updated_entries

 orderinglist's extend method could be made to do something much like the
 above quite efficiently.
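That extend idea can be sketched as a standalone function; the Entry stand-in and the names below are illustrative, not the orderinglist API:

```python
class Entry:
    """Illustrative stand-in for a mapped, position-tracked entry."""
    def __init__(self):
        self.position = None

def bulk_extend(entries, new_entries):
    """Append new_entries and assign contiguous positions in one pass,
    avoiding per-item reordering of the whole collection."""
    updated = list(entries) + list(new_entries)
    base = len(entries)
    for idx, entry in enumerate(new_entries):
        entry.position = base + idx
    return updated            # caller assigns back: self.entries = updated

existing = [Entry(), Entry()]
for i, e in enumerate(existing):
    e.position = i
combined = bulk_extend(existing, [Entry(), Entry()])
assert [e.position for e in combined] == [0, 1, 2, 3]
```

Only the appended items are touched, so the cost is linear in the number of new entries rather than in the size of the collection times the number of inserts.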

 jason kirtland wrote:
  A warning: that depends on a bug in the C version of bisect.  When given
  a list subclass, it mistakenly ignores the subclass method
  implementations.  The below will break, if and when that's fixed to
  match the pure Python implementation in the standard lib.

  Calling list.extend(account_entries, new_entries) is probably a safe
  alternative.

  *http://bugs.python.org/issue3935

  jean-philippe dutreve wrote:
  What I've done is something like this:

  account_entries = self.entries[:]
  for entry in new_entries:
      insort_right(account_entries, entry)
  for i, entry in enumerate(account_entries):
      entry.position = i
  self.entries = account_entries

  Don't know if it's the right way to do it but it's much faster.

  On Sep 22, 18:41, jason kirtland [EMAIL PROTECTED] wrote:
  I'm sure there is potential for improvement on the current orderinglist
  code- please feel free to send a patch with optimizations you've found
  to the SA trac.

  The orderinglist hasn't changed much since 0.3, but with 0.5 there may
  be entirely new implementations possible.  For example, I could imagine
  one that defers calculation and manipulation of the positioning
  information until a before_flush hook. That may perform better, with
  the trade-off that the position attribute can't be trusted to be in sync
  with the list order.

  jean-philippe dutreve wrote:
  Below is the profiling of code that added 1200 items into an
  ordering_list relation. I had to bypass the ordering_list stuff for
  bulk additions in order to have better performance (down to 2
  seconds).
  Hope this post helps to improve this part (using 0.5.0rc1, python 2.5,
  linux i686, 1.5GB RAM)
  SA is rocking!
  jean-philippe
  Time elapsed:  48.4475638866 s
           8875046 function calls (8869157 primitive calls) in 48.443
  CPU seconds

     Ordered by: internal time, call count
     List reduced from 390 to 10 due to restriction 10

     ncalls  tottime  percall  cumtime  percall filename:lineno(function)
  1292937/1292250    7.879    0.000   12.134    0.000 attributes.py:132(__get__)
    1241013    7.662    0.000   39.836    0.000 orderinglist.py:221(_order_entity)
    1241013    5.870    0.000   16.916    0.000 orderinglist.py:202(_get_order_value)
     440809    4.522    0.000    9.527    0.000 attributes.py:394(set)
       1236    4.198    0.003   44.025    0.036 orderinglist.py:208(reorder)
  1299736/1299048    3.752    0.000    4.373    0.000 attributes.py:310(get)
     448225    3.337    0.000    5.157    0.000 identity.py:208(modified_event)
     437061    2.704    0.000   14.331    0.000 orderinglist.py:205(_set_order_value)
     440809    2.225    0.000   11.752    0.000 attributes.py:126(__set__)
     448225    1.775    0.000    1.812    0.000 attributes.py:958(modified_event)

  Function                               was called by...
  attributes.py:132(__get__)             - domain.py:200(addEntry)(1236)   46.741
                                           domain.py:248(__init__)(1236)   47.832
                                           domain.py:272(get)(49452)    0.609
                                           orderinglist.py:202(_get_order_value)(1241013)   16.916
  orderinglist.py:221(_order_entity)     - orderinglist.py:208(reorder)(1240326)   44.025
                                           orderinglist.py:232(append)(687)    0.013
  orderinglist.py:202(_get_order_value)  - orderinglist.py:221(_order_entity)(1241013)   39.836
  attributes.py:394(set)                 - attributes.py:126(__set__)(440809)   11.752
  orderinglist.py:208(reorder)           - orderinglist.py:266(__setslice__)(1236)   44.061
  attributes.py:310(get)                 - attributes.py:132(__get__)(1292937)   12.134
                                           attributes.py:347(get_committed_value)(1)    0.000
                                           attributes.py:500(set)(3708)    0.367
                                           attributes.py:837(value_as_iterable)(3090)    0.108
  identity.py:208(modified_event)        - attributes.py:394(set)(440809)    9.527

[sqlalchemy] Re: ordering_list performance

2008-09-22 Thread jean-philippe dutreve

What I've done is something like this:

from bisect import insort_right

account_entries = self.entries[:]
for entry in new_entries:
    insort_right(account_entries, entry)
for i, entry in enumerate(account_entries):
    entry.position = i
self.entries = account_entries

Don't know if it's the right way to do it but it's much faster.

On Sep 22, 18:41, jason kirtland [EMAIL PROTECTED] wrote:
 I'm sure there is potential for improvement on the current orderinglist
 code- please feel free to send a patch with optimizations you've found
 to the SA trac.

 The orderinglist hasn't changed much since 0.3, but with 0.5 there may
 be entirely new implementations possible.  For example, I could imagine
 one that defers calculation and manipulation of the positioning
 information until a before_flush hook. That may perform better, with
 the trade-off that the position attribute can't be trusted to be in sync
 with the list order.

 jean-philippe dutreve wrote:
  Below is the profiling of code that added 1200 items into an
  ordering_list relation. I had to bypass the ordering_list stuff for
  bulk additions in order to have better performance (down to 2
  seconds).
  Hope this post helps to improve this part (using 0.5.0rc1, python 2.5,
  linux i686, 1.5GB RAM)

  SA is rocking!
  jean-philippe

  Time elapsed:  48.4475638866 s
           8875046 function calls (8869157 primitive calls) in 48.443
  CPU seconds

     Ordered by: internal time, call count
     List reduced from 390 to 10 due to restriction 10

     ncalls  tottime  percall  cumtime  percall
  filename:lineno(function)
  1292937/1292250    7.879    0.000   12.134    0.000 attributes.py:
  132(__get__)
    1241013    7.662    0.000   39.836    0.000 orderinglist.py:
  221(_order_entity)
    1241013    5.870    0.000   16.916    0.000 orderinglist.py:
  202(_get_order_value)
     440809    4.522    0.000    9.527    0.000 attributes.py:394(set)
       1236    4.198    0.003   44.025    0.036 orderinglist.py:
  208(reorder)
  1299736/1299048    3.752    0.000    4.373    0.000 attributes.py:
  310(get)
     448225    3.337    0.000    5.157    0.000 identity.py:
  208(modified_event)
     437061    2.704    0.000   14.331    0.000 orderinglist.py:
  205(_set_order_value)
     440809    2.225    0.000   11.752    0.000 attributes.py:
  126(__set__)
     448225    1.775    0.000    1.812    0.000 attributes.py:
  958(modified_event)

  Function                               was called by...
  attributes.py:132(__get__)             - domain.py:200(addEntry)
  (1236)   46.741
                                            domain.py:248(__init__)
  (1236)   47.832
                                            domain.py:272(get)(49452)
  0.609
                                            orderinglist.py:
  202(_get_order_value)(1241013)   16.916
  orderinglist.py:221(_order_entity)     - orderinglist.py:208(reorder)
  (1240326)   44.025
                                            orderinglist.py:232(append)
  (687)    0.013
  orderinglist.py:202(_get_order_value)  - orderinglist.py:
  221(_order_entity)(1241013)   39.836
  attributes.py:394(set)                 - attributes.py:126(__set__)
  (440809)   11.752
  orderinglist.py:208(reorder)           - orderinglist.py:
  266(__setslice__)(1236)   44.061
  attributes.py:310(get)                 - attributes.py:132(__get__)
  (1292937)   12.134
                                            attributes.py:
  347(get_committed_value)(1)    0.000
                                            attributes.py:500(set)
  (3708)    0.367
                                            attributes.py:
  837(value_as_iterable)(3090)    0.108
  identity.py:208(modified_event)        - attributes.py:394(set)
  (440809)    9.527
                                            attributes.py:
  525(fire_replace_event)(3708)    0.236
                                            attributes.py:
  579(fire_append_event)(3708)    1.960
  orderinglist.py:205(_set_order_value)  - orderinglist.py:
  221(_order_entity)(437061)   39.836
  attributes.py:126(__set__)             - domain.py:
  237(_set_attributes)(1276)    0.079
                                            domain.py:255(update)
  (2472)    0.089
                                            orderinglist.py:
  205(_set_order_value)(437061)   14.331
  attributes.py:958(modified_event)      - identity.py:
  208(modified_event)(448225)    5.157



[sqlalchemy] SQL mass-delete

2008-05-13 Thread jean-philippe dutreve

I'd like to delete all Transactions contained in an account hierarchy
without loading any transaction into memory: just DB work, with the SQL
DELETE statement constructed by SA.

The query that defines the transactions is:
Session.query(Transaction).join(['entries','account','root'],
aliased=True).filter_by(account_id=1).all()

How can I use it to construct and execute the DELETE statement?

Thanks for any help.
jean-philippe
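For illustration, the SQL shape such a mass delete needs: the join logic of the Query becomes a subquery inside the DELETE's WHERE clause, so no Transaction rows are ever loaded. The schema and the root_id column below are illustrative guesses at the thread's model, and sqlite3 is used only to make the sketch runnable:

```python
import sqlite3

# Minimal illustrative schema: accounts 1 and 2 belong to hierarchy 1,
# account 3 to a different hierarchy.
con = sqlite3.connect(':memory:')
con.executescript("""
CREATE TABLE accounts (account_id INTEGER PRIMARY KEY, root_id INTEGER);
CREATE TABLE transactions (transaction_id INTEGER PRIMARY KEY);
CREATE TABLE entries (
    entry_id INTEGER PRIMARY KEY,
    account_id INTEGER REFERENCES accounts,
    transaction_id INTEGER REFERENCES transactions);
INSERT INTO accounts VALUES (1, 1), (2, 1), (3, 99);
INSERT INTO transactions VALUES (10), (11), (12);
INSERT INTO entries VALUES (100, 1, 10), (101, 2, 11), (102, 3, 12);
""")

# One DELETE, nothing loaded into the application: the query's join
# condition is pushed into a WHERE ... IN (subquery).
con.execute("""
DELETE FROM transactions
WHERE transaction_id IN (
    SELECT e.transaction_id
    FROM entries e JOIN accounts a ON e.account_id = a.account_id
    WHERE a.root_id = 1)
""")

remaining = [r[0] for r in
             con.execute("SELECT transaction_id FROM transactions ORDER BY 1")]
assert remaining == [12]   # only the transaction outside the hierarchy is left
```

The entries pointing at the deleted transactions would still need their own DELETE (or an ON DELETE CASCADE) in a real schema; the sketch only shows the transactions step.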



[sqlalchemy] Re: SQL mass-delete

2008-05-13 Thread jean-philippe dutreve

Fine, thank you for your help.
jean-philippe




[sqlalchemy] Re: To select a tree with PG connectby() in a single request

2008-04-18 Thread jean-philippe dutreve

After debugging, I've noticed that the issue is related to eager-loaded
relations. If you try the example script with the _descendants relation
having lazy=None or True, then the extension method is not called
anymore.

Is there a way to fire the extension method even without eager
loading?

 i can't see any problem from what I see here.  The example script
 definitely works and append_result is called, so just try to see
 what's different here vs. that script, and try stepping through with
 pdb for more detail.  If all else fails, send along a full reproducing
 test case.



[sqlalchemy] Re: To select a tree with PG connectby() in a single request

2008-04-18 Thread jean-philippe dutreve

Thank you for your support. You have done awesome work overall.



[sqlalchemy] To select a tree with PG connectby() in a single request

2008-04-17 Thread jean-philippe dutreve

Hi all,

I'm trying to load a whole tree of Account objects (mapped instances)
in a single SELECT, with unlimited depth.
I'm using the PostgreSQL connectby() function from the tablefunc module.
It returns a row for each node, in depth-first order.

sql = """
SELECT acc_accounts.* FROM connectby('acc_accounts', 'account_id',
'parent_id', 'name', '%s', 0)
AS t(keyid int, parent_keyid int, level int, name int),
acc_accounts WHERE keyid = account_id
"""

accounts = Session.query(Account).from_statement(sql %
    root_account_id).all()

After that, I try to iterate over the result list in order to set the
account.children relation myself.
But the problem is that as soon as I initialize the relation, a SQL
SELECT is issued:

account.children = []

How can I avoid these unnecessary selects on all nodes?
Is there a better way to let SA populate the relation with the rows
returned?


Thanks for help.
jean-philippe
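Since connectby() emits rows depth-first with each parent before its children, the children collections can be assembled in a single pass over the result, with no per-node SELECT. A plain-Python sketch (the Account class here is a stand-in for the mapped class; with real mapped instances you would additionally need to tell SA the collection is already loaded, e.g. via attributes.set_committed_value in later releases, so touching it doesn't fire the lazy loader):

```python
# Rows in the shape connectby() returns them: depth-first,
# parents always emitted before their children.
rows = [(1, None, 'root'), (2, 1, 'assets'), (3, 2, 'cash'), (4, 1, 'equity')]

class Account:
    """Illustrative stand-in for the mapped Account class."""
    def __init__(self, account_id, parent_id, name):
        self.account_id = account_id
        self.parent_id = parent_id
        self.name = name
        self.children = {}   # name -> Account, like attribute_mapped_collection

by_id, roots = {}, []
for account_id, parent_id, name in rows:
    node = Account(account_id, parent_id, name)
    by_id[account_id] = node
    if parent_id is None:
        roots.append(node)
    else:
        # The parent was already built, thanks to the depth-first order.
        by_id[parent_id].children[name] = node

assert roots[0].children['assets'].children['cash'].account_id == 3
```

The whole tree costs one SELECT plus O(n) Python work, regardless of depth.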




[sqlalchemy] Re: To select a tree with PG connectby() in a single request

2008-04-17 Thread jean-philippe dutreve

Thank you for the suggestion, but the extension method doesn't fire,
even without raw SQL:

mapper(Account, table_accounts, extension=AccountLoader(), properties=dict(
    children = relation(Account, lazy=None,
        primaryjoin=table_accounts.c.parent_id==table_accounts.c.account_id,
        #cascade='all',
        collection_class=attribute_mapped_collection('name'),
        backref=backref('parent',
            primaryjoin=table_accounts.c.parent_id==table_accounts.c.account_id,
            remote_side=table_accounts.c.account_id)
        ),
    chart = relation(Account, lazy=None, uselist=False, post_update=True,
        primaryjoin=table_accounts.c.chart_id==table_accounts.c.account_id,
        remote_side=table_accounts.c.account_id,
        backref=backref('descendants', lazy=None, join_depth=1,
            primaryjoin=table_accounts.c.chart_id==table_accounts.c.account_id,
            viewonly=True)
        ),
))

class AccountLoader(MapperExtension):
    def append_result(self, mapper, selectcontext, row, instance,
                      result, **flags):
        isnew = flags.get('isnew', False)
        if instance.parent_id is None:
            result.append(instance)
        else:
            if isnew or selectcontext.populate_existing:
                key = mapper.identity_key_from_primary_key(instance.parent_id)
                parentnode = selectcontext.session.identity_map[key]
                parentnode.children.append(instance)
        return False

accounts = Session.query(Account).filter(Account.c.chart_id==1).all()

How can I fire the extension callback without eager loading
descendants?




[sqlalchemy] eagerload_all issue

2007-09-11 Thread Jean-Philippe Dutreve

Here's my issue: 3 tables

CREATE TABLE accounts (
    account_id serial PRIMARY KEY,
    name varchar(16) NOT NULL UNIQUE
);
CREATE TABLE transactions (
    transaction_id serial PRIMARY KEY
);
CREATE TABLE entries (
    entry_id serial PRIMARY KEY,
    account_id integer NOT NULL REFERENCES accounts,
    transaction_id integer NOT NULL REFERENCES transactions
);

A Transaction links one Account to another with Entries:

mapper(Account, table_accounts)
mapper(Transaction, table_transactions)
mapper(Entry, table_entries, properties = dict(
account = relation(Account, uselist=False,
backref=backref('entries', lazy=True)),
transaction = relation(Transaction, uselist=False,
backref=backref('entries', lazy=False, join_depth=3)),
))

I just want to retrieve in one SELECT all the transactions engaged and
the account of each entry:

acc = session.query(Account).options(
    eagerload_all('entries.transaction.entries.account')).get(7)
acc.entries[1].transaction.entries[1].account.name  # executes a new
# SELECT to retrieve all entries of this account (I just want account.name)





[sqlalchemy] eagerload_all issue

2007-09-11 Thread Jean-Philippe Dutreve

Here's my issue: 3 tables

CREATE TABLE accounts (
    account_id serial PRIMARY KEY,
    name varchar(16) NOT NULL UNIQUE
);
CREATE TABLE transactions (
    transaction_id serial PRIMARY KEY
);
CREATE TABLE entries (
    entry_id serial PRIMARY KEY,
    account_id integer NOT NULL REFERENCES accounts,
    transaction_id integer NOT NULL REFERENCES transactions
);

A Transaction links Accounts together with Entries; a Transaction should
eager-load its entries and their accounts.

mapper(Account, table_accounts)
mapper(Transaction, table_transactions)
mapper(Entry, table_entries, properties = dict(
account = relation(Account, uselist=False,
backref=backref('entries', lazy=True)),
transaction = relation(Transaction, uselist=False,
backref=backref('entries', lazy=False, join_depth=3)),
))

I just want to retrieve in one SELECT all the transactions engaged and
the account.name of each entry:

acc = session.query(Account).options(
    eagerload_all('entries.transaction.entries.account')).get(7)
acc.entries[1].transaction.entries[1].account.name  # executes a new
# SELECT to retrieve all entries of this account (different than account_id=7)

SELECT *
FROM (SELECT accounts.account_id AS accounts_account_id,
accounts.account_id AS accounts_oid FROM jdu.accounts WHERE
accounts.account_id = 77 ORDER BY accounts.account_id  LIMIT 1 OFFSET
0) AS tbl_row_count, jdu.accounts
LEFT OUTER JOIN jdu.entries AS entries_3 ON accounts.account_id =
entries_3.account_id
LEFT OUTER JOIN jdu.transactions AS transactions_1 ON
transactions_1.transaction_id = entries_3.transaction_id
LEFT OUTER JOIN jdu.entries AS entries_2 ON
transactions_1.transaction_id = entries_2.transaction_id
WHERE accounts.account_id = tbl_row_count.accounts_account_id ORDER BY
tbl_row_count.accounts_account_id, entries_3.entry_id,
transactions_1.transaction_id, entries_2.entry_id


But I don't need any of the entries of account 77; I just want its
name (to be sent to a remote web client).

Thanks for any help,
jp




[sqlalchemy] Re: eagerload_all issue

2007-09-11 Thread Jean-Philippe Dutreve

I've uploaded the script eagerload_all.py, which reproduces the issue.
Hope it helps.

On 11 sep, 16:43, Michael Bayer [EMAIL PROTECTED] wrote:
 On Sep 11, 2007, at 10:28 AM, Jean-Philippe Dutreve wrote:



  The name is on account, not on entry.
  Transactions and all must be loaded in one shot starting from a single
  account:

  account (e.g. id=7)
   ==> all its entries
   ==> one Transaction for each entry
   ==> all entries of each transaction (some are different than the first ones)
   ==> the account of each entry

  I need all of this data retrieved in a single SQL SELECT, to be sent
  in a web page for listing the content of an account.
  If possible, I don't want to execute a separate SELECT for the account.

 so account -> entries -> transaction -> entries -> account

 and you're saying that the account at the very end is eager loading
 onto entries again?  0.3 definitely should not do that; in 0.4 I
 don't think it should either, but that's newer code, so I can see how
 that *might* be the case (but I'm skeptical, because it really would
 just go into an endless loop when it sets up the query if it didn't
 know to stop).

 it would be helpful if you could package your tables and mappers
 below into a small test script that runs against SQLite.  if the bug
 is there, your script will get adapted into a new unit test.




[sqlalchemy] Re: eagerload_all issue

2007-09-11 Thread Jean-Philippe Dutreve

 it's actually not eager loading the second list of accounts
If there is no eager loading on the second list, I don't understand
why a 'SELECT entries ...' is executed when I just
ask for account.name and not account.entries.

 untested, i.e. join_depth on a mapper that's not self-referential; I'm
 surprised that's actually doing something.
Sign of good design?

 basically the way you're trying to get it to eager load *just* the
 accounts on the entries on the transaction, and *not* on the entries
 off the account itself, is something that has never been attempted
 before.  So while I will add a trac ticket for this (#777), it would
 be more expedient for now to construct the exact query you want and
 apply it using from_statement() in conjunction with the
 contains_eager() option to indicate the eagerly loaded relations.
Thanks for your help.
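
For what it's worth, the shape of such a hand-written statement can be sketched with plain DB-API code against SQLite (the schema follows the CREATE TABLEs above; the data and account names are made up for illustration). One SELECT walks account -> entries -> transaction -> entries -> account and brings back the sibling accounts' names without any second query:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE accounts (
    account_id INTEGER PRIMARY KEY,
    name TEXT NOT NULL UNIQUE
);
CREATE TABLE transactions (
    transaction_id INTEGER PRIMARY KEY
);
CREATE TABLE entries (
    entry_id INTEGER PRIMARY KEY,
    account_id INTEGER NOT NULL REFERENCES accounts,
    transaction_id INTEGER NOT NULL REFERENCES transactions
);
INSERT INTO accounts VALUES (7, 'checking'), (77, 'savings');
INSERT INTO transactions VALUES (1);
INSERT INTO entries VALUES (1, 7, 1), (2, 77, 1);
""")

# One SELECT: account 7 -> its entries -> their transactions ->
# all entries of those transactions -> each sibling entry's account name.
rows = conn.execute("""
    SELECT e1.entry_id, t.transaction_id, e2.entry_id, a2.name
    FROM accounts a1
    JOIN entries e1 ON e1.account_id = a1.account_id
    JOIN transactions t ON t.transaction_id = e1.transaction_id
    JOIN entries e2 ON e2.transaction_id = t.transaction_id
    JOIN accounts a2 ON a2.account_id = e2.account_id
    WHERE a1.account_id = 7
""").fetchall()

# Every row already carries a sibling account's name.
for entry_id, tx_id, sibling_entry_id, sibling_name in rows:
    print(entry_id, tx_id, sibling_entry_id, sibling_name)
```

In 0.4 this is the kind of statement one would hand to Query.from_statement() and then mark the joined aliases with contains_eager(); the alias names e1/e2/a1/a2 here are hypothetical.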




[sqlalchemy] Re: on delete restrict (bis)

2007-09-09 Thread Jean-Philippe Dutreve

Another solution could be to invert the order:
- first delete the parent (so the RESTRICT rule fires immediately)
- then set the FKs to NULL.

On 8 sep, 19:52, Michael Bayer [EMAIL PROTECTED] wrote:
 On Sep 8, 2007, at 12:54 PM, Jean-Philippe Dutreve wrote:



  My need is related to PostgreSQL ON DELETE RESTRICT/NO ACTION: I'd
  want a SQL exception as soon as a parent that still has children is
  deleted. I don't want cascade delete on the children, just on the
  parent, and only if it has no child.

  I've noticed that SA (0.4) first sets all FKs in the child table to
  NULL, and then deletes the parent. In that order the PG rule is never
  triggered, and the parent is deleted even if there are children (now
  orphaned)!

  The only solution I have found is to define the FK as NOT NULL.
  It would be handy to be able to let this PG rule fire.
  Perhaps with an option cascade='delete-donothing' on the child relation.

 the ORM is hardwired to a referential integrity model right now that
 assumes foreign keys are to be maintained as valid.  Therefore, as
 long as there's a relation() present, it's going to want to either
 null out the foreign key or delete the child items.  There are some
 options that can affect this, such as viewonly=True, which gives you
 a relation that is only for loading, and passive_deletes=True, which
 gives you a relation that doesn't load in unloaded objects in order
 to update foreign keys (relying instead upon ON DELETE CASCADE), but
 still acts upon objects already loaded.  You could just use
 viewonly=True, but that means you have to populate foreign key
 attributes manually.

 we can look into adding an option to not act on FKs at all during a
 delete operation, but it might be a little involved.  (adding a trac
 ticket would be the route for this)




[sqlalchemy] on delete restrict (bis)

2007-09-08 Thread Jean-Philippe Dutreve

My need is related to PostgreSQL ON DELETE RESTRICT/NO ACTION: I'd
want a SQL exception as soon as a parent that still has children is
deleted. I don't want cascade delete on the children, just on the
parent, and only if it has no child.

I've noticed that SA (0.4) first sets all FKs in the child table to
NULL, and then deletes the parent. In that order the PG rule is never
triggered, and the parent is deleted even if there are children (now
orphaned)!

The only solution I have found is to define the FK as NOT NULL.
It would be handy to be able to let this PG rule fire.
Perhaps with an option cascade='delete-donothing' on the child relation.

jp
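
The effect can be reproduced with plain SQLite from the standard library, which also enforces ON DELETE RESTRICT once foreign keys are switched on (the parent/child schema here is made up for illustration, and PostgreSQL behaves the same way for this case). Deleting the parent first lets the database veto the delete; nulling the FKs first, as SA does, means the rule never fires:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # FK enforcement is off by default in SQLite
conn.executescript("""
CREATE TABLE parent (id INTEGER PRIMARY KEY);
CREATE TABLE child (
    id INTEGER PRIMARY KEY,
    parent_id INTEGER REFERENCES parent(id) ON DELETE RESTRICT
);
INSERT INTO parent VALUES (1);
INSERT INTO child VALUES (1, 1);
""")

# Delete the parent while a child still points at it: the RESTRICT
# rule fires and the database raises an error, as desired.
try:
    conn.execute("DELETE FROM parent WHERE id = 1")
    restricted = False
except sqlite3.IntegrityError:
    restricted = True

# What SA 0.4 does by default: null the FK first, then delete the
# parent -- in that order the rule never gets a chance to fire.
conn.execute("UPDATE child SET parent_id = NULL")
conn.execute("DELETE FROM parent WHERE id = 1")
parents_left = conn.execute("SELECT count(*) FROM parent").fetchone()[0]

print(restricted, parents_left)
```

The second half is exactly the orphan-producing sequence described above; with NOT NULL on child.parent_id the UPDATE itself would fail, which is why that workaround works.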




[sqlalchemy] Re: bisect.insort

2007-09-07 Thread Jean-Philippe Dutreve

Thanks, Jason, for your clear explanation.
Is there any way to follow your suggestion and call the pure Python
version without copying/pasting it into my module?

On 7 sep, 16:28, jason kirtland [EMAIL PROTECTED] wrote:
 Jean-Philippe Dutreve wrote:
  I was using SA 0.3.9 to insert an item in an ordered list with bisect
  method insort (py 2.5):

  mapper(Entry, table_entries)
  mapper(Account, table_accounts, properties = dict(
  entries = relation(Entry, lazy=True,
  backref=backref('account', lazy=False),
  collection_class=ordering_list('position'),
  order_by=[table_entries.c.position])
  ))
  bisect.insort(account.entries, an_entry)

  This is not working anymore with SA 0.4 beta5: the list owns the item
  but not the other way around.
   assert account.entries[0] is an_entry  # TRUE
   assert an_entry.account is account  # FALSE, currently is None

  Remark: it's working if I copy/paste the bisect method into my module.

 This is a Python bug: the C version of insort ignores overridden
 'insert' methods on classes that derive from list, bypassing
 SQLAlchemy's collection hooks.

 In prior SQLAlchemy versions, collections weren't real lists, and
 insort did handle that case properly.  I'd suggest using the pure
 Python versions of the bisect functions going forward.
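
One way to avoid copying bisect's source (a small sketch, not an official API; insort_via_insert and the Recording class are names invented here) is to re-derive insort from bisect_right, so the insertion always goes through the list's own insert() method and any collection instrumentation is honored:

```python
from bisect import bisect_right

def insort_via_insert(a, x):
    """Right-insort that delegates to a.insert(), so an overridden
    insert() on a list subclass (e.g. an instrumented collection or an
    ordering_list) is used -- unlike the C-accelerated bisect.insort
    of that Python era, which bypassed it."""
    a.insert(bisect_right(a, x), x)

class Recording(list):
    """Stand-in for an instrumented collection: records insert() calls."""
    def __init__(self, items=()):
        list.__init__(self, items)
        self.inserted = []

    def insert(self, index, value):
        self.inserted.append(value)
        list.insert(self, index, value)

entries = Recording([10, 30])
insort_via_insert(entries, 20)
print(list(entries))        # the list stays sorted: [10, 20, 30]
print(entries.inserted)     # the insert() hook saw the new item: [20]
```

The same one-liner works for a left-insort by swapping in bisect_left.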




[sqlalchemy] Changeset 2795

2007-09-07 Thread Jean-Philippe Dutreve

It seems that the bug fixed by changeset 2795 (column_prefix with
synonym) is still present in the 0.4 branch.

