[sqlalchemy] Re: single table inheritance : query a subtype and its subtypes

2009-02-16 Thread GustaV

Oh, my mistake.
It works perfectly!

On Feb 13, 21:29, Michael Bayer mike...@zzzcomputing.com wrote:
 On Feb 13, 2009, at 12:05 PM, GustaV wrote:





  Hello!

  In a configuration looking like this:
  class Parent:
    pass

  class Child(Parent):
    pass

  class ChildChild(Child):
    pass

  I would like to query the Childs and all classes that inherits it
  (here: ChildChild), but not Parent instances.
  session.query(Child).all() returns Child type only ( SQL : WHERE type
  IN (2) ), and I'd like to avoid to manually list the classes that
  inherit Child class.

 query(Child) should be returning Child and ChildChild classes - the IN  
 will include both subtypes.    make sure you're not on an old version  
 of sqlalchemy.
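
For reference, a minimal self-contained sketch of the behavior described above (the discriminator column and identity values are made up, and a reasonably recent 0.5 is assumed):

from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import sessionmaker
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class Parent(Base):
    __tablename__ = 'entity'
    id = Column(Integer, primary_key=True)
    type = Column(String(20))
    __mapper_args__ = {'polymorphic_on': type,
                       'polymorphic_identity': 'parent'}

class Child(Parent):
    __mapper_args__ = {'polymorphic_identity': 'child'}

class ChildChild(Child):
    __mapper_args__ = {'polymorphic_identity': 'childchild'}

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()
session.add_all([Parent(), Child(), ChildChild()])
session.commit()

# emits ... WHERE entity.type IN ('child', 'childchild'), so Child and
# ChildChild rows come back, but not the plain Parent
print session.query(Child).all()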



[sqlalchemy] single table inheritance : query a subtype and its subtypes

2009-02-13 Thread GustaV

Hello!

In a configuration looking like this:
class Parent:
   pass

class Child(Parent):
   pass

class ChildChild(Child):
   pass

I would like to query the Child objects and all the classes that inherit from it
(here: ChildChild), but not Parent instances.
session.query(Child).all() returns the Child type only (SQL: WHERE type
IN (2)), and I'd like to avoid manually listing the classes that
inherit from Child.

Thanks

GustaV



[sqlalchemy] Re: autoflush during instantiation

2009-01-29 Thread GustaV

I'm quite sure I'm not using it...

Look at this code that reproduces the bug:
http://utilitybase.com/paste/11481
The last line (p2 = Parent(p)) leads to a load of the children, and so to
an autoflush. I get a failure with 0.5.2:
sqlalchemy.exc.IntegrityError: (IntegrityError) parent.dumb may not
be NULL u'INSERT INTO parent (dumb) VALUES (?)' [None]
which leads me to think the object was somehow included in the autoflush...

I hope that will help to solve this.

GustaV



[sqlalchemy] autoflush during instantiation

2009-01-28 Thread GustaV

Hi all,

In the __init__ method of a mapped class, loading a relation may trigger
an autoflush. When that happens, the object being instantiated is
already in the session and so gets INSERTed by the flush, even though it
is not completely initialised yet (still in __init__).

It may throw exceptions for NOT NULL columns, etc.

Is this normal behavior? I thought Session.add() had to be called
explicitly to add the object to the session. Shouldn't the object only
be added after the __init__ call anyway?

Thanks a lot

GustaV



[sqlalchemy] Re: autoflush during instantiation

2009-01-28 Thread GustaV

I'm using the Declarative extension actually.

In the Pylons framework, the session is initialised like this:

def init_model(engine):
    """Call me before using any of the tables or classes in the model."""
    sm = orm.sessionmaker(autoflush=True, autocommit=False, bind=engine)

    meta.engine = engine
    meta.Session = orm.scoped_session(sm)

On 28 jan, 16:46, Michael Bayer zzz...@gmail.com wrote:
 On Jan 28, 10:45 am, GustaV buisson.guilla...@gmail.com wrote:

  Hi all,

  In the __init__ method of a mapper, the load of a relation may lead to
  an autoflush operation. When it happens, the object being instanciated
  is already in the session and so INSERTed in the flush, whereas it is
  not initialised completly (still in __init__).

 only if you're using Session.mapper, which I wouldn't.



[sqlalchemy] cascade delete-orphan

2009-01-15 Thread GustaV

Hi all,
I'm trying to set up a many-to-many relation with an association object.

But I want something unusual: I want the child object to be deleted when
it is not owned by any parent anymore.
This is for a messages/recipients relation: the message is useless
once everybody has removed it from their mailbox!

I tried that, but it doesn't work:

class Parent(meta.DeclarativeBase):
    id = Column(types.Integer, primary_key=True)

class Child(meta.DeclarativeBase):
    id = Column(types.Integer, primary_key=True)

class Assoc(meta.DeclarativeBase):
    p_id = Column(types.Integer, ForeignKey(Parent.id))
    c_id = Column(types.Integer, ForeignKey(Child.id))

    parent = relation(Parent,
                      backref=backref('children',
                                      cascade='all, delete-orphan'))
    child = relation(Child,
                     backref='parents',
                     cascade='delete-orphan')

I expect child = relation(Child, backref='parents', cascade='delete-
orphan') to forward deletes to the child when it becomes an orphan. But it
looks like it forwards the delete even if it is not an orphan yet...

In that configuration:

   p1 = Parent()
   p2 = Parent()
   c = Child()
   assoc1 = Assoc(parent=p1, child=c)
   assoc2 = Assoc(parent=p2, child=c)

p1.children = [ ] will lead to:
- delete assoc1 (ok)
- delete c (not ok)
- update assoc2.c_id = null (not ok)

So why is it not really a delete-orphan? :)

Thanks

GustaV



[sqlalchemy] Re: cascade delete-orphan

2009-01-15 Thread GustaV

Is it planned for 0.6, or earlier?

Anyway, thanks: I'll stop trying all the combinations right now! :)

On 15 jan, 18:30, Michael Bayer mike...@zzzcomputing.com wrote:
 There's various error conditions we should have added in 0.5 but have
 not.   We should look into raising warnings in 0.5 and raising errors
 in 0.6.  The conditions are, using delete-orphan without delete,
 and using delete-orphan with secondary.   Neither is supported at
 this time.

 On Jan 15, 2009, at 11:30 AM, GustaV wrote:



  Hi all,
  I try to set up a many-to-many relation with an association object.

  But I want something not usual: I want the child object deleted when
  not owned by any parent anymore.
  This is for a messages/recipients relation: the message is useless
  when everybody removed it from its mailbox!

  I tried that, but it doesn't work:

  class Parent(meta.DeclarativeBase):
 id = Column(types.Integer, primary_key=True)

  class Child(meta.DeclarativeBase):
 id = Column(types.Integer, primary_key=True)

  class Assoc(meta.DeclarativeBase):
 p_id = Column(types.Integer,
   ForeignKey(Parent.id))
 c_id = Column(types.Integer,
   ForeignKey(Parent.id))

 parent = relation(Parent,
   backref=backref('children',
 cascade='all, delete-orphan'))
 child = relation(Child,
  backref='parents',
  cascade='delete-orphan')

  I expect child = relation(Child, backref='parents', cascade='delete-
  orphan') to forward deletes to child when it is an orphan. But it
  looks like it forward the delete even if it is not an orphan yet...

  It that configuration:

p1 = Parent()
p2 = Parent()
c = Child()
assoc1 = Assoc(parent=p1, child=c)
assoc2 = Assoc(parent=p2, child=c)

  p1.children = [ ] will lead to:
  - delete assoc1 (ok)
  - delete c (not ok)
  - update assoc2.c_id = null (not ok)

  So why is it not really a delete-orphan? :)

  Thanks

  GustaV



[sqlalchemy] Re: cascade delete-orphan

2009-01-15 Thread GustaV

What is the best way to achieve the automatic deletion of the child, then
(referring to the example earlier)? I'm thinking about using an attribute
extension...
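
One possible way (a sketch only, reusing the Parent/Child/Assoc names from the example earlier; the exact query is an assumption rather than an established recipe) is to delete, in a SessionExtension.after_flush(), every Child row that no Assoc row references any more:

from sqlalchemy import orm, select

class DeleteUnreferencedChildren(orm.SessionExtension):
    def after_flush(self, session, flush_context):
        child = Child.__table__
        assoc = Assoc.__table__
        # remove every Child row that no Assoc row points at any more
        session.connection().execute(
            child.delete(~child.c.id.in_(select([assoc.c.c_id])))
        )

# hooked up when the session factory is configured:
# Session = orm.sessionmaker(extension=DeleteUnreferencedChildren())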



[sqlalchemy] Circular Dependency on arbitrary foreign key and backref

2008-10-30 Thread GustaV

Hi everybody.

I'm stuck on something bad:
I use a relation with an arbitrary foreign key + backref (the reason is
that I use single table inheritance, and the relation happens only on
one of the subclasses, so I can't set the foreign key on the
table...). But I quickly get a circular dependency!

I don't really get why, so I wrote a piece of code to reproduce it and
posted it here!
I hope someone has an explanation/solution! :)

Cheers!
ps: using the latest revision.
pps : I put the relation on the child side, but the same happens when
on the parent side.


from zope.sqlalchemy import ZopeTransactionExtension
from sqlalchemy.orm import scoped_session, sessionmaker, eagerload
from sqlalchemy import MetaData
from sqlalchemy.orm.interfaces import SessionExtension
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy import Table, Column, types
from sqlalchemy import ForeignKey, UniqueConstraint
from sqlalchemy.orm import relation, backref, synonym
from sqlalchemy import select, func


maker = sessionmaker(autoflush=True, autocommit=False,
 extension=[ZopeTransactionExtension()])
DBSession = scoped_session(maker)
DeclarativeBase = declarative_base()

class Parent(DeclarativeBase):
    __tablename__ = 'parent'
    id = Column('id', types.Integer, primary_key=True, nullable=False)

class Child(DeclarativeBase):
    __tablename__ = 'child'
    id = Column('id', types.Integer, primary_key=True, nullable=False)
    id_parent = Column('id_parent', types.Integer, nullable=True)
    parent = relation(Parent,
                      primaryjoin=id_parent==Parent.id,
                      foreign_keys=[Parent.id],
                      backref=backref('children',
                                      foreign_keys=[id_parent],
                                      cascade='all, delete-orphan',
                                      passive_deletes=False))

p = Parent()
c = Child()
p.children.append(c)
DBSession.add(p)
DBSession.flush()




[sqlalchemy] Re: child counting + delete

2008-10-27 Thread GustaV

OK! My mistake! I've got it working completely now.
Thanks again.

On 27 oct, 22:08, Michael Bayer [EMAIL PROTECTED] wrote:
 Just as a note, if you're using ON DELETE CASCADE with
 passive_deletes=True, SQLA will still explicitly delete whatever
 objects might already be loaded into collections, so that the Session
 stays in sync with what's currently in it.  the CASCADE then occurs
 after this when SQLA deletes the parent row.
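
The combination being described looks roughly like this (a sketch using the country/region example from this thread; the names and details are assumptions):

from sqlalchemy import Column, Integer, ForeignKey
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import relation

Base = declarative_base()

class Country(Base):
    __tablename__ = 'country'
    id = Column(Integer, primary_key=True)
    # passive_deletes=True leaves unloaded children to the database's
    # ON DELETE CASCADE instead of loading and deleting them one by one
    regions = relation('Region', backref='country',
                       cascade='all, delete-orphan',
                       passive_deletes=True)

class Region(Base):
    __tablename__ = 'region'
    id = Column(Integer, primary_key=True)
    country_id = Column(Integer,
                        ForeignKey('country.id', ondelete='CASCADE'),
                        nullable=False)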

 On Oct 27, 2008, at 4:57 PM, GustaV wrote:



  The reason I ask that is that it is not really safe to use both DB
  cascades and SA delete management.

  The time comes quite fast when you don't know if an object is deleted
  by SA or by the DB. So you don't know if you got into after_flush or
  not in any case.
  So : is there a true speed penalty to do it completely on the SA side?

  On 27 oct, 21:23, GustaV [EMAIL PROTECTED] wrote:
  Ok it looks good...

  I assume it also means I can't rely on ON DELETE CASCADE anymore, and
  I then must use passive_deletes=False.
  At least on those relations.

  In a general point of view, using MySQL InnoDB tables, is ON DELETE
  CASCADE more efficient than SA stuff?

  Thanks a lot for your help

  On 23 oct, 04:00, Michael Bayer [EMAIL PROTECTED] wrote:

  I do this kind of thing...theres lots of variants on how to do this
  but the idea of __after_flush__ is that everything, including
  collections, have been committed (but the new, dirty and deleted
  lists
  have not been reset yet):

  class OnFlushExt(orm.SessionExtension):
      def after_flush(self, session, flush_context):
          for obj in list(session.new) + list(session.dirty) + list(session.deleted):
              if hasattr(obj, '__after_flush__'):
                  obj.__after_flush__(session.connection())

  class MyObject(object):
      def __after_flush__(self, connection):
          connection.execute(
              mytable.update().where(mytable.c.id == self.id).values(
                  object_count=select([func.count(child_table.c.id)]).
                      where(child_table.c.parent_id == mytable.c.id)
              )
          )

  you could also batch all the updates into a single UPDATE statement
  within after_flush(), that would be a lot more efficient (i.e. where
  mytable.c.id.in_([all the ids of the changed parent objects]) ).
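
A sketch of that batched variant, reusing mytable, child_table and MyObject from the snippet above (so not self-contained, and the exact filtering of "changed parent objects" is an assumption):

from sqlalchemy import select, func, orm

class BatchedCountExt(orm.SessionExtension):
    def after_flush(self, session, flush_context):
        # collect the ids of every MyObject touched by this flush
        parent_ids = [obj.id
                      for obj in list(session.new) + list(session.dirty) + list(session.deleted)
                      if isinstance(obj, MyObject)]
        if not parent_ids:
            return
        # one correlated UPDATE covering all of them
        session.connection().execute(
            mytable.update().where(mytable.c.id.in_(parent_ids)).values(
                object_count=select([func.count(child_table.c.id)]).
                    where(child_table.c.parent_id == mytable.c.id)
            )
        )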

  On Oct 22, 2008, at 5:41 PM, GustaV wrote:

  Ok... I'm not sure to understand the way you do it...

  Does it mean I should check the add and delete of regions in the
  transaction in after_flush() and issue sql directly (not using the
  ORM) to update the count in Country?

  On 22 oct, 20:46, Michael Bayer [EMAIL PROTECTED] wrote:
  On Oct 22, 2008, at 2:05 PM, GustaV wrote:

  Hi all!

  In a 1-N relation between a country and its regions, I'm using an
  attribute extension to update the current count of regions into
  countries. It works very well when I append or remove regions
  from
  country.
  But if I delete one of the region directly
  (session.delete(region)),
  the country doesn't know it has lost one...

  I tried to use a MapperExtension.before_delete to manually remove
  the
  said region from the country, but it is not marked as dirty and
  then
  not updated...

  Any way to do it properly?

  for a one-to-many relation, its often easy enough to just have a
  cascade rule from country-region such that region is deleted
  automatically when removed from the parent.   This is the
  typical way
  to go about deletions from relations, since session.delete()
  does not
  cascade backwards to all owning collections.

  although when I deal with columns that count something that is
  elsewhere represented in the database, I often issue these using
  SQL
  within a SessionExtension.after_flush().  This removes the need to
  worry about catching attribute events and just directly sets the
  correct value based on the state of the transaction post-flush.



[sqlalchemy] child counting + delete

2008-10-22 Thread GustaV

Hi all!

In a 1-N relation between a country and its regions, I'm using an
attribute extension to keep the current count of regions on the country
up to date. It works very well when I append or remove regions from the
country.
But if I delete one of the regions directly (session.delete(region)),
the country doesn't know it has lost one...

I tried to use a MapperExtension.before_delete to manually remove the
said region from the country, but then the country is not marked as
dirty and so not updated...

Any way to do it properly?

Thanks!



[sqlalchemy] Re: child counting + delete

2008-10-22 Thread GustaV

OK... I'm not sure I understand the way you do it...

Does it mean I should check the adds and deletes of regions in the
transaction in after_flush() and issue SQL directly (not using the
ORM) to update the count in Country?

On 22 oct, 20:46, Michael Bayer [EMAIL PROTECTED] wrote:
 On Oct 22, 2008, at 2:05 PM, GustaV wrote:



  Hi all!

  In a 1-N relation between a country and its regions, I'm using an
  attribute extension to update the current count of regions into
  countries. It works very well when I append or remove regions from
  country.
  But if I delete one of the region directly (session.delete(region)),
  the country doesn't know it has lost one...

  I tried to use a MapperExtension.before_delete to manually remove the
  said region from the country, but it is not marked as dirty and then
  not updated...

  Any way to do it properly?

 for a one-to-many relation, its often easy enough to just have a
 cascade rule from country-region such that region is deleted
 automatically when removed from the parent.   This is the typical way
 to go about deletions from relations, since session.delete() does not
 cascade backwards to all owning collections.

 although when I deal with columns that count something that is
 elsewhere represented in the database, I often issue these using SQL
 within a SessionExtension.after_flush().  This removes the need to
 worry about catching attribute events and just directly sets the
 correct value based on the state of the transaction post-flush.



[sqlalchemy] InnoDB - Foreign Key must be an Index

2008-10-01 Thread GustaV

Hi all,

I'm experiencing an issue on MySQL (5.0.51a) when sqlalchemy creates
tables with foreign keys.

The SQL issued :
CREATE TABLE `referenced` (
`id` INT NOT NULL AUTO_INCREMENT PRIMARY KEY
) TYPE = InnoDB;

CREATE TABLE `referencing` (
`id` INT NOT NULL AUTO_INCREMENT PRIMARY KEY ,
`f` INT NOT NULL,
FOREIGN KEY(f) REFERENCES referenced(id)
) TYPE = InnoDB;

I get an error (#1005 - Can't create table './seed-online/referencing.frm'
(errno: 150))... I solved this by explicitly specifying an index on the
foreign key column (it looks like this is the normal way to do it):

CREATE TABLE `referencing` (
`id` INT NOT NULL AUTO_INCREMENT PRIMARY KEY ,
`f` INT NOT NULL,
INDEX(f),
FOREIGN KEY(f) REFERENCES referenced(id)
) TYPE = InnoDB;

But I don't know how to tell sqlalchemy to explicitly set that index.
On the other hand, at home (using wampserver2.0c, same version of
MySQL) it works: it looks like the index is created automatically if not
already present (I read this in the MySQL docs).

So I'm looking for either:
- an option to tell sqlalchemy to set the index explicitly
- or, the MySQL option to turn 'on' so that this INDEX is created
automatically!
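
For reference, the index can be declared explicitly on the SQLAlchemy side, either per-column or as a standalone Index object; a minimal sketch (it is emitted as a separate CREATE INDEX statement after the CREATE TABLE, so whether that is enough to avoid the errno 150 above is an open question):

from sqlalchemy import MetaData, Table, Column, Integer, ForeignKey, Index

metadata = MetaData()

referenced = Table('referenced', metadata,
    Column('id', Integer, primary_key=True),
    mysql_engine='InnoDB')

referencing = Table('referencing', metadata,
    Column('id', Integer, primary_key=True),
    # index=True asks SQLAlchemy to create an index on the FK column
    Column('f', Integer, ForeignKey('referenced.id'), index=True, nullable=False),
    mysql_engine='InnoDB')

# or, equivalently, as a standalone object:
# Index('ix_referencing_f', referencing.c.f)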

Thanks a lot!

Guillaume




[sqlalchemy] Re: InnoDB - Foreign Key must be an Index

2008-10-01 Thread GustaV

Actually, like I said, I tried it on my local config and it worked OK
as well! That's why I suspect some MySQL configuration issue, but...
The only thing I have noticed is the default table format:
- InnoDB at home: it works
- MyISAM on the server: it fails (it still creates InnoDB tables because
I ask it to in sqlalchemy)
But I don't really see why that would make it fail.

Posting my sqlalchemy code wouldn't be helpful (and it would be a
mess! :) ) because it is really a MySQL error... :(

On Oct 1, 3:27 pm, Alex K [EMAIL PROTECTED] wrote:
 Hi Guillaume,

 The issue that you've faced looks strange - I've just tried to execute
 your first example causing the error on my 5.0.45 mysql server and
 tables were created ok. Can you post the code snippet causing the
 error?

 Regards,
 Alex

 On Oct 1, 16:59, GustaV [EMAIL PROTECTED] wrote:

  Hi all,

  I'm experiencing an issue on MySQL (5.0.51a) when sqlalchemy create
  the tables with foreign keys.

  The SQL issued :
  CREATE TABLE `referenced` (
  `id` INT NOT NULL AUTO_INCREMENT PRIMARY KEY
  ) TYPE = InnoDB;

  CREATE TABLE `referencing` (
  `id` INT NOT NULL AUTO_INCREMENT PRIMARY KEY ,
  `f` INT NOT NULL,
  FOREIGN KEY(f) REFERENCES referenced(id)
  ) TYPE = InnoDB;

  I got an error (#1005 - Can't create table './seed-online/
  referencing.frm' (errno: 150) )... I solve this when I specify
  explicitly the foreign key being an index (looks like it is the
  normal way to do this)

  CREATE TABLE `referencing` (
  `id` INT NOT NULL AUTO_INCREMENT PRIMARY KEY ,
  `f` INT NOT NULL,
  INDEX(f),
  FOREIGN KEY(f) REFERENCES referenced(id)
  ) TYPE = InnoDB;

  But I don't know how to tell sqlalchemy to explicitely set that index.
  On the other hand, at home (using wampserver2.0c, same version of
  MySQL) it works : it looks like the index is set automatically if not
  already set (I red this in the MySQL docs).

  So I'm looking for either :
  - an option to tell sqlalchemy to set the index explicitely
  - or, the option in MySQL to turn 'on' to have this INDEX
  automatically!

  Thanks a lot!

  Guillaume



[sqlalchemy] Re: Adjacency List + Alternate Join Conditions == ???

2008-09-14 Thread GustaV

I was thinking about something like:
session.reload([tile0, tile1, tile2], 'neighbors') for example.
That feature would be great because you don't necessarily know which
relation you will need later when you do the first query (eagerload is
not enough).

Or, in the case of:

class Tile(object):
    @property
    def neighbors(self):
        return object_session(self).query(Tile).filter(Tile.id < self.id).all()

having the possibility to query neighbors for many objects at a time.




On Sep 12, 4:22 pm, Michael Bayer [EMAIL PROTECTED] wrote:
 On Sep 12, 2008, at 4:08 AM, GustaV wrote:



  The main reason why I wan't to use relations is eagerloading, because
  it is the only way, as far as I can see, to retrieve data from DB from
  several objects in one request.

 its not the only way.  You load as many kinds of objects from one
 Query as you want, and you can return them separately or route any
 JOIN of your choosing into the collection.  Its equally possible in
 0.5 as well as 0.4.   0.5 would be:

 TileAlias = aliased(Tile)
 sess.query(Tile).join((TileAlias, Tile.id >
 TileAlias.id)).options(contains_eager(Tile.neighbors, alias=TileAlias))

 and you can of course get them separately as:

 sess.query(Tile, TileAlias).join((TileAlias, Tile.id > TileAlias.id))

 in 0.4, you'd replace the query.join() with
 query.select_from(join(tiles, tiles_alias, tiles.c.id >
 tiles_alias.c.id)).

  Maybe today that demand doesn't make any sense, but last time I used
  DB, it was much more efficient to issue 1 big request rather than 50
  small ones.

 it does make sense, and its a central tenet of SQLAlchemy.  The above
 methods are all in the docs.



[sqlalchemy] declarative_base and UNIQUE Constraint

2008-09-14 Thread GustaV

How do I create a unique constraint with the declarative plugin
(latest 0.5 version)?

both:
__table_args__ = ( UniqueConstraint('region.x', 'region.y'),
{'mysql_engine':'InnoDB'} )
__table_args__ = ( UniqueConstraint(x, y), {'mysql_engine':'InnoDB'} )
don't work.
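
For what it's worth, the form that usually works is to name the local columns of the table as strings; a minimal sketch (assuming Integer columns x and y on a declarative Region class):

from sqlalchemy import Column, Integer, UniqueConstraint
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class Region(Base):
    __tablename__ = 'region'
    id = Column(Integer, primary_key=True)
    x = Column(Integer, nullable=False)
    y = Column(Integer, nullable=False)
    __table_args__ = (UniqueConstraint('x', 'y'),
                      {'mysql_engine': 'InnoDB'})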

Thanks!



[sqlalchemy] Re: Adjacency List + Alternate Join Conditions == ???

2008-09-12 Thread GustaV

The main reason why I want to use relations is eagerloading, because
it is the only way, as far as I can see, to retrieve data from the DB for
several objects in one request.
I don't want each of my 50 objects to query the DB to fill its own
neighbors; I want to fill them all in one request. Eagerloading does
that, and that's why I want relations...

Maybe today that demand doesn't make any sense, but last time I used
a DB, it was much more efficient to issue 1 big request rather than 50
small ones.


On Sep 11, 8:24 pm, Michael Bayer [EMAIL PROTECTED] wrote:
 On Sep 11, 2008, at 2:04 PM, GustaV wrote:





  Ok, another thing on the subject:

  It looks like that does not work before a commit. Even a flush doesn't
  help:

  t1 = Tile(id=1)
  t2 = Tile(id=2)
  t3 = Tile(id=3)
  t4 = Tile(id=4)
  session.add_all([t1, t2, t3, t4])
  session.flush()
  assert t2.neighbors == [t1]

  FAIL

  I'd really like to use it before even a flush! :)

 well the flush is needed since you're making use of the database to  
 calculate what members are part of the collection.

 Assuming you haven't already accessed t2.neighbors, it should  
 lazyload the items the first time you hit it.   Otherwise you could  
 just say Session.expire(t2, [neighbors]).  Other expiry methods  
 apply, i.e. Session.expire_all(), Session.expire(t2), Session.commit()  
 etc.

 Another option here is to do away with relation() altogether.   This  
 would greatly simplify the whole thing:

 class Tile(object):
     @property
     def neighbors(self):
         return object_session(self).query(Tile).filter(Tile.id < self.id).all()



[sqlalchemy] Re: Adjacency List + Alternate Join Conditions == ???

2008-09-11 Thread GustaV

Ok, another thing on the subject:

It looks like that does not work before a commit. Even a flush doesn't
help:

t1 = Tile(id=1)
t2 = Tile(id=2)
t3 = Tile(id=3)
t4 = Tile(id=4)
session.add_all([t1, t2, t3, t4])
session.flush()
assert t2.neighbors == [t1]

FAIL

I'd really like to use it before even a flush! :)


On Sep 10, 3:29 am, Michael Bayer [EMAIL PROTECTED] wrote:
 On Sep 9, 2008, at 7:27 PM, GustaV wrote:



  t4 = session.query(Tile).get(4)
  print t4.neighbors[0]._data

  will print None

 you have to adjust the other mapping appropriately:

 related_tiles = mapper(Tile, tile_alias, non_primary=True, properties={
  '_data':tile_alias.c.data

 })



[sqlalchemy] Adjacency List + Alternate Join Conditions == ???

2008-09-09 Thread GustaV

Hello,

I'm looking into something a bit tricky: having a relation between a
table and itself (an adjacency list, then) with a non-trivial join
condition.
Let's say, for example in the case of a Checkers game board, that I want
to query the neighboring tiles of a given tile. Doing the query itself
is easy, but then it can't be eagerloaded, which is a shame...

I tried something like (with a stupid condition):

table_alias = mytable.alias()
mapper(Tile, mytable, properties={
    'neighbors': relation(Tile, primaryjoin=mytable.c.x > table_alias.c.x,
                          foreign_keys=[mytable.c.x],
                          viewonly=True)
})

but that does not do what I expected: it queries both tables
(normal and aliased) but returns results from the non-aliased table
(without eagerloading), or fails (with eagerload) because it uses
incoherent aliased tables.

The problem comes from the fact that we have no access to the table being
automatically aliased when doing an adjacency relation here; and
building a new aliased table makes no sense.

In one word : HELP !

Gus



[sqlalchemy] Re: Adjacency List + Alternate Join Conditions == ???

2008-09-09 Thread GustaV

That works great!

It revealed a bug though: when a mapped attribute is prefixed with a '_'
but the underlying DB column is not (quite usual), it is not properly
loaded through the 'neighbors' property.

In your example :

from sqlalchemy import *
from sqlalchemy.orm import *
engine = create_engine('sqlite://', echo=True)
Session = scoped_session(sessionmaker(bind=engine))

from sqlalchemy.ext.declarative import declarative_base
Base = declarative_base()

class Tile(Base):
    __tablename__ = "tiles"
    id = Column(Integer, primary_key=True)
    _data = Column('data', Integer)

Base.metadata.create_all(engine)

tile_table = Tile.__table__
tile_alias = tile_table.alias()
related_tiles = mapper(Tile, tile_alias, non_primary=True)

Tile.neighbors = relation(related_tiles,
                          primaryjoin=tile_table.c.id > tile_alias.c.id,
                          foreign_keys=[tile_alias.c.id], viewonly=True)

session = sessionmaker(bind=engine)()

t1 = Tile(id=1, _data=1)
t2 = Tile(id=2, _data=2)
t3 = Tile(id=3, _data=3)
t4 = Tile(id=4, _data=3)

session.add_all([t1, t2, t3, t4])
session.commit()

session.expunge(t1)
session.expunge(t2)
session.expunge(t3)
session.expunge(t4)

t4 = session.query(Tile).get(4)
print t4.neighbors[0]._data


will print None



On Sep 9, 11:12 pm, Michael Bayer [EMAIL PROTECTED] wrote:
 you can manage this by making SQLAlchemy fully aware of table_alias
 by creating a separate mapper.  Since you are viewonly=True you have
 more flexiblity in this regard:

 from sqlalchemy import *
 from sqlalchemy.orm import *
 engine = create_engine('sqlite://', echo=True)
 Session = scoped_session(sessionmaker(bind=engine))

 from sqlalchemy.ext.declarative import declarative_base
 Base = declarative_base()

 class Tile(Base):
     __tablename__ = "tiles"
     id = Column(Integer, primary_key=True)

 Base.metadata.create_all(engine)

 tile_table = Tile.__table__
 tile_alias = tile_table.alias()
 related_tiles = mapper(Tile, tile_alias, non_primary=True)

 Tile.neighbors = relation(related_tiles, primaryjoin=tile_table.c.id >
 tile_alias.c.id, foreign_keys=[tile_alias.c.id], viewonly=True)

 session = sessionmaker(bind=engine)()

 t1 = Tile(id=1)
 t2 = Tile(id=2)
 t3 = Tile(id=3)
 t4 = Tile(id=4)

 session.add_all([t1, t2, t3, t4])
 session.commit()

 assert t2.neighbors == [t1]
 assert t4.neighbors == [t1, t2, t3]
 assert t1.neighbors == []

 On Sep 9, 2008, at 4:50 PM, GustaV wrote:



  Hello,

  I'm looking into something a bit tricky : having a relation between a
  table and itself (an adjacency list then) with a non-trivial join
  condition.
  Let's say for example, in the case of a Checkers game board for
  example, to query the neighboring tiles of a given tile. To do the
  query itself is easy, but then it can't be eagerloaded which is a
  shame...

  I tried something like (with a stupid condition) :
  table_alias = mytable.alias()
  mapper(Tile, mytable, properties={
  'neighbors' = relation(Tile, primaryjoin=mytable.c.x 
  table_alias.c.x,
foreign_keys=[mytable.c.x],
viewonly=True)
  })

  but that does not do what I could expect : it queries both tables
  (normal and aliased) but returns result from the non-alias table
  (without eagerloading) or fails (with eagerload) because it uses
  incoherent aliased tables.

  The problem comes from we have no access to the table being
  automatically aliased when doing an adjacency relation here ; and
  building a new aliased table is a non-sense.

  In one word : HELP !

  Gus



[sqlalchemy] Re: Extension Mapper

2008-08-30 Thread GustaV

It almost works.
A small problem though: when I get and modify instances (ones not
loaded before the flush) in the after_flush method, they are correctly
added to the dirty list of the session, but the second flush does nothing.
This is because the identity_map is still flagged as 'not modified'.

On Aug 29, 18:32, Michael Bayer [EMAIL PROTECTED] wrote:
 in r5069, extension can be a list of SessionExtension objects.  You
 can also append to session.extensions.

 On Aug 29, 2008, at 11:36 AM, GustaV wrote:



  You must be right.
  Of course, Turbogears2 already add an extension to the session, and it
  looks like it is not a list of extension anyway (or maybe in the
  latest trunk?).
  What the best way then? To subclass the tg2 extension with mine and
  continue to call overloaded method from mine? Anything better?

  On Aug 29, 3:57 am, Michael Bayer [EMAIL PROTECTED] wrote:
  On Aug 28, 2008, at 6:57 PM, GustaV wrote:

  Hi all.
  I'm currently working on a map (like in geography :) )

  When a new tile in inserted in the DB, I'm using an extension mapper
  to update some neighbor's properties (like the neighbors count). The
  after_insert method helps a lot... but:
  When I modify another object than the one being inserted in the
  after_insert method, the modification happens in the python object,
  but doesn't occur is the DB. The commit at the end does not seem to
  have an effect.

  What should I do?

  modifications to objects inside of flush() aren't going to propigate
  the same way as when they're outside of the flush().   Within
  MapperExtension you should generally just do things with the
  connection (i.e., issue SQL directly).

  Otherwise, we have SessionExtension which has a before_flush() hook,
  and you can poke around inside the Session and change things freely
  before anything flush()-related occurs.  The catch there is that
  you'd
  probably want to be using the latest 0.5 trunk for that (post beta3)
  since we've fixed it up a bit to work in a more useful way.   I find
  that using before_flush() and after_flush() is generally a better way
  to go for dependent changes/SQL to be issued since you aren't doing
  things inside of the flush() itself, where its hard to predict when
  things will actually happen.



[sqlalchemy] Re: Extension Mapper

2008-08-30 Thread GustaV

I planned to do it on the next flush as you said, but since that is not
very clean, I managed to do it in the before_flush method.

So, last message on that subject, I think:
Since the 'dirty' var is set before the call to 'before_flush' (dirty =
self._dirty_states), it doesn't take modifications that occur in
there into account. Maybe it should be re-set just after the call,
like the new and deleted variables.

Thanks a lot for your help! :)

Gus


On Aug 30, 16:47, Michael Bayer [EMAIL PROTECTED] wrote:
 use before_flush() for changes to the dirty list and suchor if
 you really want things set up for the *next* flush, use
 after_flush_postexec().

 On Aug 30, 2008, at 10:08 AM, GustaV wrote:



  It almost works.
  A small problem though : when I get and modify instances (a priori not
  loaded before the flush) in after_flush method, they are correctly
  added in the dirty list of the session but the 2nd flush does nothing.
  This is because the identity_map is still flagged as 'not modified'

  On Aug 29, 18:32, Michael Bayer [EMAIL PROTECTED] wrote:
  in r5069, extension can be a list of SessionExtension objects.  You
  can also append to session.extensions.

  On Aug 29, 2008, at 11:36 AM, GustaV wrote:

  You must be right.
  Of course, Turbogears2 already add an extension to the session,
  and it
  looks like it is not a list of extension anyway (or maybe in the
  latest trunk?).
  What the best way then? To subclass the tg2 extension with mine and
  continue to call overloaded method from mine? Anything better?

  On Aug 29, 3:57 am, Michael Bayer [EMAIL PROTECTED] wrote:
  On Aug 28, 2008, at 6:57 PM, GustaV wrote:

  Hi all.
  I'm currently working on a map (like in geography :) )

  When a new tile in inserted in the DB, I'm using an extension
  mapper
  to update some neighbor's properties (like the neighbors count).
  The
  after_insert method helps a lot... but:
  When I modify another object than the one being inserted in the
  after_insert method, the modification happens in the python
  object,
  but doesn't occur is the DB. The commit at the end does not seem
  to
  have an effect.

  What should I do?

  modifications to objects inside of flush() aren't going to
  propigate
  the same way as when they're outside of the flush().   Within
  MapperExtension you should generally just do things with the
  connection (i.e., issue SQL directly).

  Otherwise, we have SessionExtension which has a before_flush()
  hook,
  and you can poke around inside the Session and change things freely
  before anything flush()-related occurs.  The catch there is that
  you'd
  probably want to be using the latest 0.5 trunk for that (post
  beta3)
  since we've fixed it up a bit to work in a more useful way.   I
  find
  that using before_flush() and after_flush() is generally a better
  way
  to go for dependent changes/SQL to be issued since you aren't doing
  things inside of the flush() itself, where its hard to predict when
  things will actually happen.



[sqlalchemy] Re: Extension Mapper

2008-08-29 Thread GustaV


You must be right.
Of course, TurboGears2 already adds an extension to the session, and it
looks like it is not a list of extensions anyway (or maybe it is in the
latest trunk?).
What's the best way then? To subclass the tg2 extension with mine and
keep calling the overloaded methods from mine? Anything better?

On Aug 29, 3:57 am, Michael Bayer [EMAIL PROTECTED] wrote:
 On Aug 28, 2008, at 6:57 PM, GustaV wrote:



  Hi all.
  I'm currently working on a map (like in geography :) )

  When a new tile in inserted in the DB, I'm using an extension mapper
  to update some neighbor's properties (like the neighbors count). The
  after_insert method helps a lot... but:
  When I modify another object than the one being inserted in the
  after_insert method, the modification happens in the python object,
  but doesn't occur is the DB. The commit at the end does not seem to
  have an effect.

  What should I do?

 modifications to objects inside of flush() aren't going to propigate  
 the same way as when they're outside of the flush().   Within  
 MapperExtension you should generally just do things with the  
 connection (i.e., issue SQL directly).

 Otherwise, we have SessionExtension which has a before_flush() hook,  
 and you can poke around inside the Session and change things freely  
 before anything flush()-related occurs.  The catch there is that you'd  
 probably want to be using the latest 0.5 trunk for that (post beta3)  
 since we've fixed it up a bit to work in a more useful way.   I find  
 that using before_flush() and after_flush() is generally a better way  
 to go for dependent changes/SQL to be issued since you aren't doing  
 things inside of the flush() itself, where its hard to predict when  
 things will actually happen.



[sqlalchemy] Extension Mapper

2008-08-28 Thread GustaV

Hi all.
I'm currently working on a map (like in geography :) )

When a new tile is inserted in the DB, I'm using a mapper extension
to update some of the neighbors' properties (like the neighbor count). The
after_insert method helps a lot... but:
When I modify an object other than the one being inserted in the
after_insert method, the modification happens on the Python object,
but doesn't occur in the DB. The commit at the end does not seem to
have any effect.

What should I do?

Thanks

Guillaume



[sqlalchemy] Random result

2008-08-23 Thread GustaV

Hi all,
Really, you do an amazing job on this stuff, it's a pleasure to work
with!

A short question though. It's possible to get a random order on
SELECTs in MySQL by using RAND() as the ORDER BY parameter. I know
something similar (but different) is possible on other databases.
What about sqlalchemy?
The goal is to get only one result from the query, chosen randomly.

Thanks again

Gus



[sqlalchemy] Re: Random result

2008-08-23 Thread GustaV

OK, I wanted to know if there was a way to do it with no
database-specific code.

You are right, then:
I'll first query the result count (unknown a priori) and then use a
randomly generated Python offset.

If anyone has a better idea (in one request only), that would be great!
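
A sketch of that two-step approach (query is assumed to be an already-constructed Query object):

import random

total = query.count()
if total:
    result = query.offset(random.randrange(total)).first()
else:
    result = None

# the one-request, MySQL-specific variant mentioned below would be:
# result = query.order_by(func.rand()).first()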

On Aug 23, 18:34, Michael Bayer [EMAIL PROTECTED] wrote:
 On Aug 23, 2008, at 12:10 PM, GustaV wrote:



  Hi all,
  Really, you do an amazing job on that stuff, it's a pleasure to work
  with!

  A short question though. It's possible to get a random order on
  selects in mysql using the RAND() in parameter of ORDER BY. I know
  similar (but different) was possible on others.
  What about sqlalchemy?
  The goal is to have only one result on the query, randomly.

 if the function is RAND(), you'd just say
 select.order_by(func.rand()).If you're looking for just the first
 result then you'd use limit/offset (the select() and Query()
 constructs both support limit() and offset() methods for this).

 I'm not familiar with the random function of other databases but the
 same techniques apply.   SQLA could also could also support with a
 generic version of this function which calls the correct rand()
 function on each database, but only if very similar choices exist on
 most backends.



[sqlalchemy] Listening append() and remove()

2008-08-21 Thread GustaV

Hi all!
I'm looking for an easy way to get notified when someone appends or
removes an object from a specific one-to-many relation. For example, to
keep a count on the parent object with no need to query the children.
There is a lot of material about it, but I'm not sure how to do it,
really:

- 1st:
http://markmail.org/message/oyt57qx3247jdhgi#query:sqlalchemy%20__sa_instrument_class__+page:1+mid:wz4wkumhvnnm74yw+state:results
Does not seem to work on 0.5.

- 2nd: the CollectionAdapter.append_with_event method lets me think there
is a way to register a callback somewhere, but where?

- 3rd: a class MyInstrumentedList(list) with appropriately decorated
methods like @collection.appender. But does it work when a 'clear' is
called?

Did I miss something?
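
In case it helps, a minimal sketch using the 0.5-era AttributeExtension hook (class and attribute names are made up, and the counter attribute is assumed to exist on the parent):

from sqlalchemy.orm import relation
from sqlalchemy.orm.interfaces import AttributeExtension

class ChildCountExt(AttributeExtension):
    """Keep parent.child_count in sync with the 'children' collection."""
    def append(self, state, value, initiator):
        state.obj().child_count += 1
        return value

    def remove(self, state, value, initiator):
        state.obj().child_count -= 1
        return value

# attached to the relation:
# children = relation(Child, backref='parent', extension=ChildCountExt())
# as far as I can tell, a bulk replacement (parent.children = [...]) fires
# remove events for the old members and append events for the new ones.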

Thanks a lot!
Gus
