[sqlalchemy] Re: inserting

2008-11-26 Thread Petr Kobalíček

So I can use that form when inserting one row, but not when inserting
multiple rows? Is that correct?

2008/11/24 Michael Bayer [EMAIL PROTECTED]:

 Oh, right.  Column objects only work when you say insert().values(**dict).


 MikeCo wrote:

 Using 0.5.0rc4, that doesn't seem to work. Or what am I doing wrong?

 The test, http://pastebin.com/fd0653b0 , shows that when using the
 Column object, the values inserted are all None (test 1). When the key
 is the fully qualified 'table.column' string, the value inserted is
 always the default value for the column (test 3). It only works
 correctly when the key is the string for the unqualified column name
 (tests 2 and 4).
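
 In condensed form, the three behaviors look like this (placeholder
 names, not the literal pastebin code):

 conn.execute(t.insert(), [{t.c.col1: 'x'}])        # Column key: inserts None
 conn.execute(t.insert(), [{'MyTable.col1': 'x'}])  # qualified key: inserts the default
 conn.execute(t.insert(), [{'col1': 'x'}])          # plain string key: works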


 On Nov 24, 10:37 am, Michael Bayer [EMAIL PROTECTED] wrote:
 the actual Column object or its key can be placed in the dict.

 MikeCo wrote:

  Oops, not quite right. str(table.c.colname) returns 'table.colname',
  and that doesn't work right as a dictionary key. You need 'colname'
  alone as the dictionary key.

 http://pastebin.com/fd0653b0 has some tests

  The interesting question is: does SA intend for 'table.colname' to
  work in the dictionary definition?

  --
  Mike

  On Nov 23, 8:58 am, MikeCo [EMAIL PROTECTED] wrote:
  Your dictionary key CartItemTable.c.colname is an instance of class
  Column; the dictionary keys need to be strings. Use
  str(CartItemTable.c.colname) to get the string name of the column,
  and it should work.

  >>> CartItemTable.c.userId
  Column('userId', Integer(), ForeignKey('User.userId'),
      table=<CartItem>, primary_key=True, nullable=False)
  >>> str(CartItemTable.c.userId)
  'CartItem.userId'

  --
  Mike

  On Nov 23, 8:12 am, Petr Kobalíček [EMAIL PROTECTED] wrote:

   Hi devs,

   I don't understand one thing:

   I have this table:

   CartItemTable = sql.Table(
       'CartItem', meta.metadata,

       # Relations
       sql.Column('userId', sql.Integer,
           sql.ForeignKey('User.userId'), nullable=False, primary_key=True),
       sql.Column('productId', sql.Integer,
           sql.ForeignKey('Product.productId'), nullable=False,
           primary_key=True),
       sql.Column('variantId', sql.Integer, nullable=True,
           default=None),

       # Count of items in shopping cart
       sql.Column('count', sql.Integer,
           nullable=False, default=1)
   )

   and I want to insert multiple rows into it using SQL:

   Session().execute(
       CartItemTable.insert(),
       [{
           CartItemTable.c.userId: self.user.userId,
           CartItemTable.c.productId: item.product.productId,
           CartItemTable.c.variantId: vid(item.variant),
           CartItemTable.c.count: item.count
       } for item in self.items]
   )

   But this does not work, and I must use this form instead:

   Session().execute(
       CartItemTable.insert(),
       [{
           'userId': self.user.userId,
           'productId': item.product.productId,
           'variantId': vid(item.variant),
           'count': item.count
       } for item in self.items]
   )

   Why is the first syntax not working? What am I missing?

   Cheers
   - Petr



[sqlalchemy] set_shard problems

2008-11-26 Thread Ids

Hello,

I think I have found a bug, but I may be doing something wrong. It
looks like session.query(Person).set_shard(shard_id) does not work
while session.connection(shard_id=shard_id).execute() does: the first
does not return any result, the second one does, even when executing
the same query.
I've tested it with MySQL 3.23.54 and 5.0.45, and SQLAlchemy 0.5.0rc1,
rc2, and rc4.

Here is the test database setup:
CREATE TABLE persons (
id INTEGER NOT NULL AUTO_INCREMENT,
name VARCHAR(20) NOT NULL,
PRIMARY KEY (id),
UNIQUE (name)
);
insert into persons (name) values('bob');
insert into persons (name) values('alice');

Here is the test code:
#!/opt/python-2.4/bin/python
import sys
import logging

import sqlalchemy as sa
from sqlalchemy.orm.shard import ShardedSession
from sqlalchemy.orm import sessionmaker
from sqlalchemy.ext.declarative import declarative_base

logging.basicConfig(stream=sys.stdout,
    format='%(asctime)s %(levelname)s %(name)s: %(message)s')
logging.getLogger('sqlalchemy').setLevel(logging.DEBUG)
logging.getLogger().setLevel(logging.DEBUG)

Session = sessionmaker(class_=ShardedSession)

Base = declarative_base()
class Person(Base):
  __tablename__ = 'persons'
  id = sa.Column(sa.Integer, primary_key=True)
  name = sa.Column(sa.String(20), unique=True, nullable=False)

  def __str__(self):
    return 'Person(%s, %s)' % (self.id, self.name)

def shard_chooser(mapper, instance, clause=None):
  raise NotImplementedError

def id_chooser(query, ident):
  raise NotImplementedError

def query_chooser(query):
  raise NotImplementedError

Session.configure(shard_chooser=shard_chooser,
  id_chooser=id_chooser,
  query_chooser=query_chooser)

session = Session()
shard_id='test'
engine = sa.create_engine('mysql://[EMAIL PROTECTED]/%s' % shard_id)
session.bind_shard(shard_id, engine)

q = session.query(Person).set_shard(shard_id).limit(1)
logging.debug('QUERY 1: %s', q)
rows = list(q.all())
logging.debug('QUERY 1 RESULT: %s' % rows)

#
# now do it manually:
#
q = '''SELECT persons.id AS persons_id, persons.name AS persons_name
FROM persons
 LIMIT 1
'''
logging.debug('QUERY 2: %s', q)
rows = session.connection(shard_id=shard_id).execute(q)
rows = list(rows)
logging.debug('QUERY 2: RESULT: %s' % rows)

And here is the code output:
2008-11-26 10:52:26,043 INFO sqlalchemy.orm.strategies.ColumnLoader:
Person.id register managed attribute
2008-11-26 10:52:26,044 INFO sqlalchemy.orm.strategies.ColumnLoader:
Person.name register managed attribute
2008-11-26 10:52:26,045 DEBUG root: QUERY 1: SELECT persons.id AS
persons_id, persons.name AS persons_name
FROM persons
 LIMIT 1
2008-11-26 10:52:26,061 INFO sqlalchemy.pool.QueuePool.0x...8bf4:
Created new connection _mysql.connection open to 'localhost' at
82b02ec
2008-11-26 10:52:26,062 INFO sqlalchemy.pool.QueuePool.0x...8bf4:
Connection _mysql.connection open to 'localhost' at 82b02ec checked
out from pool
2008-11-26 10:52:26,062 INFO sqlalchemy.engine.base.Engine.0x...8a14:
BEGIN
2008-11-26 10:52:26,060 INFO sqlalchemy.engine.base.Engine.0x...8a14:
SELECT persons.id AS persons_id, persons.name AS persons_name
FROM persons
 LIMIT 1
2008-11-26 10:52:26,064 INFO sqlalchemy.engine.base.Engine.0x...8a14:
[]
2008-11-26 10:52:26,066 DEBUG sqlalchemy.engine.base.Engine.0x...8a14:
Col ('persons_id', 'persons_name')
2008-11-26 10:52:26,070 DEBUG root: QUERY 1 RESULT: []
2008-11-26 10:52:26,070 DEBUG root: QUERY 2: SELECT persons.id AS
persons_id, persons.name AS persons_name
FROM persons
 LIMIT 1

2008-11-26 10:52:26,071 INFO sqlalchemy.engine.base.Engine.0x...8a14:
SELECT persons.id AS persons_id, persons.name AS persons_name
FROM persons
 LIMIT 1

2008-11-26 10:52:26,071 INFO sqlalchemy.engine.base.Engine.0x...8a14:
{}
2008-11-26 10:52:26,073 DEBUG sqlalchemy.engine.base.Engine.0x...8a14:
Col ('persons_id', 'persons_name')
2008-11-26 10:52:26,073 DEBUG sqlalchemy.engine.base.Engine.0x...8a14:
Row (1L, 'bob')
2008-11-26 10:52:26,074 DEBUG root: QUERY 2: RESULT: [(1L, 'bob')]

There are two things I notice in the sqlalchemy.Engine logs: the
second SELECT statement seems to have an additional newline, and the
next log line (which seems to show the parameters for the SELECT
statement) contains a {} instead of a [].

Am I doing something wrong here or is this supposed to work?

Regards,
Ids



[sqlalchemy] Re: Selecting from a self-referential mapper: recursive joins?

2008-11-26 Thread az

 All of that dbcook stuff scares me, though I think I can see 
 why you want it.   
Heh. Your model would look this way:
---
import dbcook.usage.plainwrap as o2r

class Text( o2r.Type): pass

class Itemtype( o2r.Base):
    name = Text()
    inherits = o2r.Association.Hidden( 'Itemtype', backref='progeny')
    own_features = o2r.Collection( 'Feature', backref='root_itemtype')

class Feature( o2r.Base):
    name = Text()
    dependents = o2r.Association.Hidden( 'Feature',
        backref='determinants')

class FeatureDependency( o2r.Base):
    dependent = o2r.Reference( Feature,
        backref='feature_dependencies_as_dependent')
    determinant = o2r.Reference( Feature,
        backref='feature_dependencies_as_determinant')

#eo model...

#sa-setup
import sqlalchemy, sys
meta = sqlalchemy.MetaData( sqlalchemy.create_engine('sqlite:///',
    echo= 'echo' in sys.argv ))

# map attr-types to sa-column-types
fieldtypemap = {
    Text: dict( type= sqlalchemy.String(100), ),
}

mybuild = o2r.Builder( meta,
    locals(),       # just scan anything here that looks like a subclass of Base
    fieldtypemap,
    generator=True  # show how this would look in plain sqlalchemy
)

if mybuild.generator:
    print '= generated SA set-up'
    print mybuild.generator.out
    print '= eo generated SA set-up'

--
That's it. You can run it and see the generated SA tables, SA mappers,
etc., or just go ahead with actual usage. If you use the QueryX
replacement, you can use the references, collections, and associations
in the same way (a.somerel == c), even via filter_by, and use plain
Python functions like lambda self: self.dependent.name.startswith('a')
as .filter()s... and no one will notice if you change a 1:m or m:1
into m2m somewhere in the model.
I still can't see why no one's using it. Maybe it's too easy... and
one cannot see the cogwheels.

 I will eventually need to wade into the waters of stricter 
 type checking and conversion on my tables, and that will get me
 into a lot of similar concerns but hopefully not as deeply!
Type checking/conversion has nothing to do with dbcook; it's a
separate layer, which has/needs its own dbcook-reflector.

Have fun, and sorry for the spam.
svil




[sqlalchemy] Re: inserting

2008-11-26 Thread Michael Bayer

Petr Kobalíček wrote:

 So, I can use that way in inserting one row, but can't when inserting
 multiple rows ? It is correct ?

You can only use string keys as the arguments to the execute() method;
this applies to one row or many. Columns as keys can be used for the
values() argument/generative method on an insert() or update()
construct.

To be quite honest, there was never an intention for columns as keys
not to work when sent as parameters to .execute(); there just seems to
be a lack of testing in this case, so feel free to file a ticket for
it.
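
A minimal sketch of the distinction, reusing the names from Petr's
example (not code from the thread itself):

# string keys work in the dicts sent to execute(), for one row or many:
Session().execute(CartItemTable.insert(), [
    {'userId': 1, 'productId': 10, 'variantId': None, 'count': 1},
    {'userId': 1, 'productId': 11, 'variantId': None, 'count': 2},
])

# Column objects as keys work only via values(), i.e. a single-row insert:
stmt = CartItemTable.insert().values({CartItemTable.c.userId: 1,
                                      CartItemTable.c.productId: 10,
                                      CartItemTable.c.count: 1})
Session().execute(stmt)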






[sqlalchemy] Re: inserting

2008-11-26 Thread MikeCo

You probably don't want to do the inserts one by one, because of the
commit overhead and the need to roll back on failure of one insert.
You can still get multiple inserts in one transaction. Add this to the
example posted at http://pastebin.com/fd0653b0 to see three inserts in
one transaction:

print '', '5. column attributes in .values()'
data = [{MyTable.c.col1: itm.x} for itm in items]
print '', data
trans = conn.begin()    # <-- start a transaction to wrap many inserts
for d in data:
    conn.execute(MyTable.insert().values(d))
trans.commit()          # <-- end transaction

Wrapping the insert loop in a transaction should give you the same, or
very nearly the same, performance as executing the 'insert many'
operation, and it only adds 4 or 5 simple lines of code. The exception
might be a large batch process inserting thousands or tens of
thousands of records; in that case you will need to design a solid
commit strategy anyway, with or without SQLAlchemy.
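
For comparison, a sketch of the 'insert many' form of the same data,
with the same assumed names (one execute() call, string-keyed dicts):

data = [{'col1': itm.x} for itm in items]
conn.execute(MyTable.insert(), data)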

--
Mike

On Nov 26, 10:42 am, Michael Bayer [EMAIL PROTECTED] wrote:
 Petr Kobalíček wrote:

  So, I can use that way in inserting one row, but can't when inserting
  multiple rows ? It is correct ?

 you can only use string keys as the arguments to the execute() method.
 this applies to one row or many.  columns as keys can be used for the
 values argument/generative method on an insert() or update() construct.

 to be quite honest there was never an intention for columns as keys to not
 work properly when sent as keys as parameters to .execute(), there just
 seems to be a lack of testing in this case, so feel free to file a ticket
 for this.



[sqlalchemy] Re: inserting

2008-11-26 Thread MikeCo

Oops, I stand corrected. See http://pastebin.com/fe4a38d6

At least for SQLite, my loop solution is many times slower than the
'insert many' syntax. I would be curious to see results run against
different database engines; I don't have quick access to them right
now.
Still, unless there are very large numbers of inserts, there should be
little, if any, noticeable difference.
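
A rough sketch of how such a comparison could be timed (names assumed
from the earlier example, not taken from the pastebin code itself):

import time

# loop of single-row inserts wrapped in one transaction
start = time.time()
trans = conn.begin()
for d in data:
    conn.execute(MyTable.insert().values(d))
trans.commit()
print 'loop + values(): %.3f sec' % (time.time() - start)

# 'insert many' (executemany): one call with string-keyed dicts
start = time.time()
string_data = [dict((col.key, val) for col, val in d.items()) for d in data]
conn.execute(MyTable.insert(), string_data)
print 'executemany:     %.3f sec' % (time.time() - start)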

--
Mike


On Nov 26, 4:50 pm, MikeCo [EMAIL PROTECTED] wrote:
 You probably don't want to do the inserts one by one because of the
 commit overhead, or needing to roll back on failure of one insert. You
 can still get multiple inserts in one transaction. Add this to the
 example posted at http://pastebin.com/fd0653b0 to see three inserts in
 one transaction:

 print '', '5. column attributes in .values()'
 data = [{MyTable.c.col1: itm.x} for itm in items]
 print '', data
 trans = conn.begin()    # <-- start a transaction to wrap many inserts
 for d in data:
     conn.execute(MyTable.insert().values(d))
 trans.commit()          # <-- end transaction

 Wrapping the insert loop in a transaction should give you the same or
 very nearly the same performance as executing the 'insert many'
 operation, and only adds 4 or 5 simple lines of code. The exception
 might be if you have a large batch process inserting thousands or tens
 of thousands of records; in that case, you will need to design a solid
 commit strategy anyway, with or without SQLAlchemy.

 --
 Mike

 On Nov 26, 10:42 am, Michael Bayer [EMAIL PROTECTED] wrote:

  Petr Kobalíček wrote:

   So, I can use that way in inserting one row, but can't when inserting
   multiple rows ? It is correct ?

  you can only use string keys as the arguments to the execute() method.
  this applies to one row or many.  columns as keys can be used for the
  values argument/generative method on an insert() or update() construct.

  to be quite honest there was never an intention for columns as keys to not
  work properly when sent as keys as parameters to .execute(), there just
  seems to be a lack of testing in this case, so feel free to file a ticket
  for this.



[sqlalchemy] Re: inserting

2008-11-26 Thread Michael Bayer



The executemany() syntax is very efficient, and I don't really
understand how the column/string thing is that much of an issue other
than a small inconvenience and a slight failure of the API to be
consistent... all you have to do is convert the dict keys to
column.key.
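
For instance, a sketch of that conversion (column_keyed_rows is a
hypothetical list of Column-keyed dicts, as in the earlier examples):

# convert Column-keyed dicts to string-keyed dicts via each column's
# .key, then a single executemany-style execute() works as usual
string_rows = [dict((col.key, val) for col, val in row.items())
               for row in column_keyed_rows]
conn.execute(MyTable.insert(), string_rows)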

On Nov 26, 2008, at 5:52 PM, MikeCo wrote:


 Oops, I stand corrected. see http://pastebin.com/fe4a38d6

 At least for SQLite, my loop solution is many times slower than the
 insert many syntax. I would be curious to see results run against
 different database engines. I don't have quick access to them right
 now.
 Still, unless there are very large numbers of inserts there should be
 little, if any, noticeable difference.

 --
 Mike


 On Nov 26, 4:50 pm, MikeCo [EMAIL PROTECTED] wrote:
 You probably don't want to do the inserts one by one because of the
 commit overhead, or needing to roll back on failure of one insert. You
 can still get multiple inserts in one transaction. Add this to the
 example posted at http://pastebin.com/fd0653b0 to see three inserts in
 one transaction:

 print '', '5. column attributes in .values()'
 data = [{MyTable.c.col1:itm.x} for itm in items]
 print '', data
 trans = conn.begin()    # <-- start a transaction to wrap many inserts
 for d in data:
     conn.execute(MyTable.insert().values(d))
 trans.commit()          # <-- end transaction

 Wrapping the insert loop in a transaction should give you the same or
 very nearly the same performance as executing the 'insert many'
 operation, and only adds 4 or 5 simple lines of code. The exception
 might be if you have a large batch process inserting thousands or
 tens of thousands of records; in that case, you will need to design
 a solid commit strategy anyway, with or without SQLAlchemy.

 --
 Mike

 On Nov 26, 10:42 am, Michael Bayer [EMAIL PROTECTED]  
 wrote:

 Petr Kobalíček wrote:

 So, I can use that way in inserting one row, but can't when  
 inserting
 multiple rows ? It is correct ?

 you can only use string keys as the arguments to the execute()  
 method.
 this applies to one row or many.  columns as keys can be used for  
 the
 values argument/generative method on an insert() or update()  
 construct.

 to be quite honest there was never an intention for columns as  
 keys to not
 work properly when sent as keys as parameters to .execute(), there  
 just
 seems to be a lack of testing in this case, so feel free to file a  
 ticket
 for this.



[sqlalchemy] Re: inserting

2008-11-26 Thread MikeCo

And that is what we did in our application before this discussion even
started; I don't know what Petr is doing in his.
I think this is more of an interesting, mostly academic, discussion
about alternative techniques, and probably a very low-priority issue
for the SA code base.


On Nov 26, 5:56 pm, Michael Bayer [EMAIL PROTECTED] wrote:
 executemany() syntax is very efficient and I don't really understand
 how the column/string thing is that much of an issue other than a
 small inconvenience and a slight failure of the API to be
 consistent...all you have to do is convert the dict keys to be  
 column.key.

 On Nov 26, 2008, at 5:52 PM, MikeCo wrote:



  Oops, I stand corrected. See http://pastebin.com/fe4a38d6

  At least for SQLite, my loop solution is many times slower than the
  insert many syntax. I would be curious to see results run against
  different database engines. I don't have quick access to them right
  now.
  Still, unless there are very large numbers of inserts there should be
  little, if any, noticeable difference.

  --
  Mike

  On Nov 26, 4:50 pm, MikeCo [EMAIL PROTECTED] wrote:
  You probably don't want to do the inserts one by one because of the
  commit overhead, or needing to roll back on failure of one insert. You
  can still get multiple inserts in one transaction. Add this to the
  example posted at http://pastebin.com/fd0653b0 to see three inserts in
  one transaction.

  print '', '5. column attributes in .values()'
  data = [{MyTable.c.col1:itm.x} for itm in items]
  print '', data
  trans = conn.begin()    # <-- start a transaction to wrap many inserts
  for d in data:
      conn.execute(MyTable.insert().values(d))
  trans.commit()          # <-- end transaction

  Wrapping the insert loop in a transaction should give you the same or
  very nearly the same performance as executing the 'insert many'
  operation, and only adds 4 or 5 simple lines of code. The exception
  might be if you have a large batch process inserting thousands or
  tens of thousands of records; in that case, you will need to design
  a solid commit strategy anyway, with or without SQLAlchemy.

  --
  Mike

  On Nov 26, 10:42 am, Michael Bayer [EMAIL PROTECTED]  
  wrote:

  Petr Kobalíček wrote:

  So, I can use that way in inserting one row, but can't when  
  inserting
  multiple rows ? It is correct ?

  you can only use string keys as the arguments to the execute()  
  method.
  this applies to one row or many.  columns as keys can be used for  
  the
  values argument/generative method on an insert() or update()  
  construct.

  to be quite honest there was never an intention for columns as  
  keys to not
  work properly when sent as keys as parameters to .execute(), there  
  just
  seems to be a lack of testing in this case, so feel free to file a  
  ticket
  for this.



[sqlalchemy] Re: inserting

2008-11-26 Thread Michael Bayer

insert() has had some inconsistencies reported as of late (like
params()) that I would like to get nailed down. A construct like this
shouldn't have any surprises.


On Nov 26, 2008, at 6:04 PM, MikeCo wrote:


 And that is what we did in our application before this discussion even
 started. Don't know what Petr is doing in his.
 I think it is more of an interesting, mostly academic, discussion
 about alternative techniques; probably a very low priority issue to
 the SA code base.


 On Nov 26, 5:56 pm, Michael Bayer [EMAIL PROTECTED] wrote:
 executemany() syntax is very efficient and I don't really understand
 how the column/string thing is that much of an issue other than a
 small inconvenience and a slight failure of the API to be
 consistent...all you have to do is convert the dict keys to be
 column.key.

 On Nov 26, 2008, at 5:52 PM, MikeCo wrote:



 Oops, I stand corrected. See http://pastebin.com/fe4a38d6

 At least for SQLite, my loop solution is many times slower than the
 insert many syntax. I would be curious to see results run against
 different database engines. I don't have quick access to them right
 now.
 Still, unless there are very large numbers of inserts there should  
 be
 little, if any, noticeable difference.

 --
 Mike

 On Nov 26, 4:50 pm, MikeCo [EMAIL PROTECTED] wrote:
 You probably don't want to do the inserts one by one because of the
 commit overhead, or needing to roll back on failure of one insert.
 You can still get multiple inserts in one transaction. Add this to the
 example posted at http://pastebin.com/fd0653b0 to see three inserts in
 one transaction.

 print '', '5. column attributes in .values()'
 data = [{MyTable.c.col1:itm.x} for itm in items]
 print '', data
 trans = conn.begin()    # <-- start a transaction to wrap many inserts
 for d in data:
     conn.execute(MyTable.insert().values(d))
 trans.commit()          # <-- end transaction

 Wrapping the insert loop in a transaction should give you the same or
 very nearly the same performance as executing the 'insert many'
 operation, and only adds 4 or 5 simple lines of code. The exception
 might be if you have a large batch process inserting thousands or
 tens of thousands of records; in that case, you will need to design
 a solid commit strategy anyway, with or without SQLAlchemy.

 --
 Mike

 On Nov 26, 10:42 am, Michael Bayer [EMAIL PROTECTED]
 wrote:

 Petr Kobalíček wrote:

 So, I can use that way in inserting one row, but can't when
 inserting
 multiple rows ? It is correct ?

 you can only use string keys as the arguments to the execute()
 method.
 this applies to one row or many.  columns as keys can be used for
 the
 values argument/generative method on an insert() or update()
 construct.

 to be quite honest there was never an intention for columns as
 keys to not
 work properly when sent as keys as parameters to .execute(), there
 just
 seems to be a lack of testing in this case, so feel free to file a
 ticket
 for this.



[sqlalchemy] Deleting records in a MapperExtension on after_delete

2008-11-26 Thread David Harrison

Hey all,

I've got a situation where I have two objects, A and B, and a third
object C that has a foreign key reference to both A and B. I can have
many C's that map to the same A.

Now I've implemented a MapperExtension for C that has an after_delete
function, and that function checks whether the A that the deleted C
was mapped to has any other mappings and, if there are no other
mappings left, deletes the A.

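A minimal sketch of that kind of extension, assuming hypothetical
tables a_table and c_table where c_table.c.a_id references
a_table.c.id (0.5-era API):

import sqlalchemy as sa
from sqlalchemy.orm import MapperExtension, EXT_CONTINUE

class DeleteOrphanedA(MapperExtension):
    # after a C row is deleted, delete its A if no other C references
    # it (a_table and c_table are the hypothetical tables named above)
    def after_delete(self, mapper, connection, instance):
        remaining = connection.execute(
            sa.select([sa.func.count(c_table.c.id)])
              .where(c_table.c.a_id == instance.a_id)
        ).scalar()
        if remaining == 0:
            connection.execute(
                a_table.delete().where(a_table.c.id == instance.a_id))
        return EXT_CONTINUE
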
Now this works fine if I'm just deleting C's directly. However, as
soon as this happens during a cascade delete from some other object D
that happens to have a mapping to C, I get the error below; I'm
assuming this is because sqlalchemy has a test condition that doesn't
see my mapper coming and freaks out when extra rows get nuked.

ConcurrentModificationError: Deleted rowcount 0 does not match number
of objects deleted 4

Help?

Cheers
Dave




[sqlalchemy] Re: Deleting records in a MapperExtension on after_delete

2008-11-26 Thread az

I'm no expert on these, but I think you need something like
cascade='all' on your relation, _instead_ of the MapperExtension.
Check the docs for the possible settings. The MapperExtension fires
too late, and the session flush-plan gets surprised.
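
A sketch of the suggested setup, with a hypothetical D-to-C relation
(the names are placeholders, not from the original code):

import sqlalchemy as sa

# deleting a D then deletes its C rows via the cascade, instead of
# relying on an after_delete MapperExtension
sa.orm.mapper(D, d_table, properties={
    'cs': sa.orm.relation(C, cascade='all, delete-orphan'),
})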

On Thursday 27 November 2008 08:15:04 David Harrison wrote:
 Hey all,

 I've got a situation where I have 2 object A and B, and a third
 object C that has a foreign key reference to both A and B.  I can
 have many C's that map to the same A.

 Now I've implemented a MapperExtension for C that has an
 after_delete function, and that function checks to see if the A
 that the deleted C was mapped to has any other mappings, and if
 there are no other mappings left, deletes the A.

 Now this works fine if I'm just deleting C's directly, however as
 soon as this happens during a cascade delete from some other object
 D that happens to have a mapping to C I get the below error - I'm
 assuming this is because sqlalchemy has a test condition that
 doesn't see my mapper coming, and freaks out when extra rows get
 nuked.

 ConcurrentModificationError: Deleted rowcount 0 does not match
 number of objects deleted 4

 Help ?

 Cheers
 Dave




[sqlalchemy] Re: Deleting records in a MapperExtension on after_delete

2008-11-26 Thread David Harrison

Sorry, I should probably have mentioned that C isn't the only object
that maps to A, so a cascade doesn't work.

2008/11/27  [EMAIL PROTECTED]:

 i'm not expert on these, but i think u need something like
 cascade='all' on your relation, _instead_ of the mapperExt. check the
 docs about possible settings. the mapperExt fires too late and the
 session flush-plan gets surprised.

 On Thursday 27 November 2008 08:15:04 David Harrison wrote:
 Hey all,

 I've got a situation where I have 2 object A and B, and a third
 object C that has a foreign key reference to both A and B.  I can
 have many C's that map to the same A.

 Now I've implemented a MapperExtension for C that has an
 after_delete function, and that function checks to see if the A
 that the deleted C was mapped to has any other mappings, and if
 there are no other mappings left, deletes the A.

 Now this works fine if I'm just deleting C's directly, however as
 soon as this happens during a cascade delete from some other object
 D that happens to have a mapping to C I get the below error - I'm
 assuming this is because sqlalchemy has a test condition that
 doesn't see my mapper coming, and freaks out when extra rows get
 nuked.

 ConcurrentModificationError: Deleted rowcount 0 does not match
 number of objects deleted 4

 Help ?

 Cheers
 Dave




[sqlalchemy] Re: Deleting records in a MapperExtension on after_delete

2008-11-26 Thread David Harrison

So this is actually a follow-on from a question I posed quite a while back:

http://groups.google.com/group/sqlalchemy/browse_thread/thread/4530dd3f5585/eb4638599b02577d?lnk=gst&q=Postgres+cascade+error#eb4638599b02577d

So my approach to solving this problem was to use a MapperExtension,
but it's giving me the error that I originally posted in this thread.

I'm re-posting my previous code here for easy reference and testing by
others (with one tiny mod to get rid of the optionparser code I had):

---

#!/usr/bin/env python

import sys
import sqlalchemy as sa
import sqlalchemy.orm


session = sa.orm.scoped_session(
    sa.orm.sessionmaker(autoflush=False, transactional=True)
)
mapper = session.mapper
metadata = sa.MetaData()


houseTable = sa.Table(
    'house',
    metadata,
    sa.Column('id', sa.Integer, primary_key=True),
)

ownerTable = sa.Table(
    'owner',
    metadata,
    sa.Column('id', sa.Integer, primary_key=True),
    sa.Column('house_id', sa.Integer, sa.ForeignKey('house.id')),
)

dogTable = sa.Table(
    'dog',
    metadata,
    sa.Column('id', sa.Integer, primary_key=True),
    sa.Column('house_id', sa.Integer, sa.ForeignKey('house.id')),
)

friendshipTable = sa.Table(
    'friendship',
    metadata,
    sa.Column('id', sa.Integer, primary_key=True),
    sa.Column('owner_id', sa.Integer, sa.ForeignKey('owner.id')),
    sa.Column('dog_id', sa.Integer, sa.ForeignKey('dog.id')),
)


class House(object): pass
class Owner(object): pass
class Dog(object): pass
class Friendship(object): pass


mapper(
    House,
    houseTable,
    properties = {
        'owners': sa.orm.relation(
            Owner, cascade='delete-orphan'
        ),
        'dogs': sa.orm.relation(
            Dog, cascade='delete-orphan'
        ),
    },
)

mapper(
    Owner,
    ownerTable,
    properties = {
        'friendships': sa.orm.relation(
            Friendship, cascade='delete'
        ),
    },
)

mapper(
    Friendship,
    friendshipTable,
    properties = {
        'dog': sa.orm.relation(
            Dog, uselist=False, cascade='all, delete-orphan'
        ),
    },
)

mapper(Dog, dogTable)


if __name__ == '__main__':

    engine = sa.create_engine(
        'postgres://test:[EMAIL PROTECTED]/test',
        strategy='threadlocal',
        echo=True
    )
    metadata.bind = engine
    session.configure(bind=engine)

    print 'Creating tables'
    metadata.create_all()

    print 'Seeding database'
    for i in range(10): House()
    session.flush()

    for house in sa.orm.Query(House).all():
        for i in range(2):
            owner = Owner()
            house.owners.append(owner)
    session.flush()

    for house in sa.orm.Query(House).all():
        for i in range(2):
            dog = Dog()
            house.dogs.append(dog)
    session.flush()

    for owner in sa.orm.Query(Owner).all():
        for dog in sa.orm.Query(Dog).filter_by(house_id=owner.house_id).all():
            friendship = Friendship()
            friendship.dog = dog
            owner.friendships.append(friendship)
    session.commit()

    owner = sa.orm.Query(Owner).first()
    for f in owner.friendships:
        print 'FRIENDSHIP: %s  || DOG: %s' % (f.id, f.dog.id)

    print 'Deleting owner'
    session.delete(owner)
    session.flush()
    session.commit()


2008/11/27 David Harrison [EMAIL PROTECTED]:
 Sorry, I should probably have mentioned that C isn't the only object
 that maps A, so a cascade doesn't work.

 2008/11/27  [EMAIL PROTECTED]:

 i'm not expert on these, but i think u need something like
 cascade='all' on your relation, _instead_ of the mapperExt. check the
 docs about possible settings. the mapperExt fires too late and the
 session flush-plan gets surprised.

 On Thursday 27 November 2008 08:15:04 David Harrison wrote:
 Hey all,

 I've got a situation where I have 2 object A and B, and a third
 object C that has a foreign key reference to both A and B.  I can
 have many C's that map to the same A.

 Now I've implemented a MapperExtension for C that has an
 after_delete function, and that function checks to see if the A
 that the deleted C was mapped to has any other mappings, and if
 there are no other mappings left, deletes the A.

 Now this works fine if I'm just deleting C's directly, however as
 soon as this happens during a cascade delete from some other object
 D that happens to have a mapping to C I get the below error - I'm
 assuming this is because sqlalchemy has a test condition that
 doesn't see my mapper coming, and freaks out when extra rows get
 nuked.

 ConcurrentModificationError: Deleted rowcount 0 does not match
 number of objects deleted 4

 Help ?

 Cheers
 Dave
