Re: [sqlalchemy] Problems with 0.6beta1

2010-02-10 Thread Gaetan de Menten
On Wed, Feb 10, 2010 at 08:48, Chris chris.g@gmail.com wrote:
 Hi All

 Have just upgraded to this version and am having the following issue.

 I use the execute method on a ScopedSession to run generic SQL
 statements in a TextClause. The resulting BufferedRowResultProxy
 object used to have a keys attribute listing all the names of the
 fields in the result set. This appears to have been removed.
 How do I get a list of the field names for my query?

IIRC, resultproxy.metadata.keys

-- 
Gaëtan de Menten

-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalch...@googlegroups.com.
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en.



Re: [sqlalchemy] Problems with 0.6beta1

2010-02-10 Thread Gaetan de Menten
On Wed, Feb 10, 2010 at 09:04, Gaetan de Menten gdemen...@gmail.com wrote:
 On Wed, Feb 10, 2010 at 08:48, Chris chris.g@gmail.com wrote:
 Hi All

 Have just upgraded to this version and am having the following issue.

 I use the execute method on a ScopedSession to run generic SQL
 statements in a TextClause. The resulting BufferedRowResultProxy
 object used to have a keys attribute listing all the names of the
 fields in the result set. This appears to have been removed.
 How do I get a list of the field names for my query?

 IIRC, resultproxy.metadata.keys

Sorry, that should read: resultproxy._metadata.keys
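For background on where those names come from: a ResultProxy ultimately wraps the DB-API cursor, whose .description attribute carries the result-column names. A minimal stdlib sketch, using sqlite3 purely for illustration:

```python
import sqlite3

# Illustration only: any DB-API cursor exposes result-column names via
# cursor.description (a sequence of 7-tuples, name first). SQLAlchemy's
# result-proxy keys are derived from this information.
conn = sqlite3.connect(":memory:")
cur = conn.execute("SELECT 1 AS id, 'chris' AS name")
field_names = [col[0] for col in cur.description]
print(field_names)  # -> ['id', 'name']
```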

-- 
Gaëtan de Menten




[sqlalchemy] integer fields return long python type on query

2010-02-10 Thread Simone Orsi
Hi all,

I have a lot of integer fields in a MySQL db, and when I query them I
always get a Python long type instead of a Python int type.

It's quite annoying to convert long to int every time... how can I get
rid of this?

I remember that with mysql-python you can pass an instance of
MySQLdb.converters to the cursor to automatically do that. Is there a
wrapper for this?

TIA,
Simone




[sqlalchemy] Re: SQLAlchemy goes back to database when I wouldn't expect it to...

2010-02-10 Thread Kent
Thanks for the ticket 1681 consideration.  Though my understanding of
the software isn't strong enough to recommend (or understand) what you
are suggesting in 1681, I can observe the behavior enough to wonder:
why do we need to go back to the database again?



  (Also, wondering if some databases allow a primary key to be null...)

 I've researched this in the past and they don't.   I will look into
 re-introducing allow_null_pks as a new flag allow_partial_pks,
 defaults to True, will be honored by merge(), you set yours to False.
 this is 0.6 only.


Thanks for your consideration, it seems that would be beneficial for
us.  As a side note, though, if no databases allow this, why would we
default to True instead of False?

Does allow_partial_pks have additional meaning, such as complaining if
the object only has part of the primary key set?

You mentioned the main thing was how this affects outer joins.  Can
you expand on how this might cause outer joins to return no rows?  Is
it because users still expected a row returned from the *other*
tables, even though part of this key is null?
(I don't want to make you go back through the effort of re-adding this
flag if it might cause me unanticipated side-effects that force me to
abandon it, so maybe point me to an example of the main complaint
about setting it to False?  I'd like to attempt to rule out that it
might affect me.)

Thanks





Re: [sqlalchemy] newbie to sqlalchemy :not null constraint

2010-02-10 Thread anusha kadambala
Thanks, Michael, for your help


On Wed, Feb 10, 2010 at 9:53 AM, Michael Trier mtr...@gmail.com wrote:


  hello all,
 
  I am a newbie to SQLAlchemy. I am thrilled by the SQLAlchemy features, but I
 got stuck on how to write the NOT NULL for the following:
 
  create table organisation(orgcode varchar(30) not null, orgname text not
 null, primary key(orgcode));
 
  I have written it using declarative base as follows but don't know how to
 impose the NOT NULL constraint

 You want to use the nullable=False argument.


 http://www.sqlalchemy.org/docs/reference/sqlalchemy/schema.html?highlight=nullable

 Thanks,

 Michael

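For reference, a minimal declarative sketch of the table from the question (written against a recent SQLAlchemy import path; on the 0.5/0.6 series of this thread, declarative_base is imported from sqlalchemy.ext.declarative instead). nullable=False is what emits NOT NULL, and primary key columns are NOT NULL implicitly:

```python
from sqlalchemy import Column, String, Text
from sqlalchemy.orm import declarative_base
from sqlalchemy.schema import CreateTable

Base = declarative_base()

class Organisation(Base):
    __tablename__ = 'organisation'

    orgcode = Column(String(30), primary_key=True)  # primary key => NOT NULL implicitly
    orgname = Column(Text, nullable=False)          # explicit NOT NULL

# Emitting the DDL shows both NOT NULL constraints.
print(CreateTable(Organisation.__table__))
```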




-- 


Njoy the share of Freedom :)
Anusha Kadambala




[sqlalchemy] creating views declarative base is confusing

2010-02-10 Thread anusha kadambala
hello all,

I want to create a view based on the following tables:

from sqlalchemy import create_engine
from sqlalchemy import Table, Column, Integer, String, MetaData, ForeignKey
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker,scoped_session,relation,backref

engine = create_engine('postgresql:///try', echo=False)
Base = declarative_base()

class User(Base):
    __tablename__ = 'users'
    id = Column(Integer, primary_key=True)
    name = Column(String)
    fullname = Column(String)
    password = Column(String)

    def __init__(self, name, fullname, password):
        self.name = name
        self.fullname = fullname
        self.password = password

    def __repr__(self):
        return "User('%s','%s','%s')" % (self.name, self.fullname,
                                         self.password)

users_table = User.__table__


class Office(Base):
    __tablename__ = 'office'
    cid = Column(Integer, primary_key=True)
    name = Column(String)
    org = Column(String)

    def __init__(self, name, org):
        self.name = name
        self.org = org

    def __repr__(self):
        return "Office('%s','%s')" % (self.name, self.org)

office_table = Office.__table__
metadata = Base.metadata
metadata.create_all(engine)
Session = scoped_session(sessionmaker(bind=engine))
Session.add_all([User('wendy', 'Wendy Williams', 'foobar'),
                 User('mary', 'Mary Contrary', 'xxg527'),
                 User('fred', 'Fred Flinstone', 'blah'),
                 Office('wendy', 'Comet'),
                 Office('kk', 'Comet')])
Session.commit()

I want to write a view class in which I can select user.name and
office.name where user.id = office.id, and which reflects in the database.

I found something like this when I googled, but I didn't understand
how it actually works:

http://groups.google.com/group/sqlalchemy/browse_thread/thread/cd4455178d886e73



-- 


Njoy the share of Freedom :)
Anusha Kadambala




Re: [sqlalchemy] Problems with 0.6beta1

2010-02-10 Thread Michael Bayer
restored in r6742.

On Feb 10, 2010, at 2:48 AM, Chris wrote:

 Hi All
 
 Have just upgraded to this version and am having the following issue.
 
 I use the execute method on a ScopedSession to run generic SQL
 statements in a TextClause. The resulting BufferedRowResultProxy
 object used to have a keys attribute listing all the names of the
 fields in the result set. This appears to have been removed.
 How do I get a list of the field names for my query?
 
 Chris
 
 




Re: [sqlalchemy] integer fields return long python type on query

2010-02-10 Thread Michael Bayer

On Feb 10, 2010, at 5:12 AM, Simone Orsi wrote:

 Hi all,
 
 I have a lot of integer fields in a MySQL db, and when I query them I
 always get a Python long type instead of a Python int type.
 
 It's quite annoying to convert long to int every time... how can I get
 rid of this?
 
 I remember that with mysql-python you can pass an instance of
 MySQLdb.converters to the cursor to automatically do that. Is there a
 wrapper for this?

you should use a TypeDecorator around Integer to process that.  I didn't know 
there was a noticeable incompatibility between int and long - it's a MySQLdb 
behavior and nobody has ever complained about it before.
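A sketch of that suggestion. The class name CoerceToInt is made up, and the cache_ok attribute is only needed on modern SQLAlchemy versions (1.4+), but the TypeDecorator hook is the same one being recommended here:

```python
from sqlalchemy.types import TypeDecorator, Integer

class CoerceToInt(TypeDecorator):  # hypothetical name, not a SQLAlchemy built-in
    """Wraps Integer; int() normalizes whatever the DBAPI hands back
    (a no-op for int, a downcast for Python 2's long)."""
    impl = Integer
    cache_ok = True  # required on SQLAlchemy 1.4+ only

    def process_result_value(self, value, dialect):
        return int(value) if value is not None else None
```

Columns declared with CoerceToInt instead of Integer would then always yield int on query, without touching the MySQLdb converter machinery.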


 
 TIA,
 Simone
 
 




Re: [sqlalchemy] Re: SQLAlchemy goes back to database when I wouldn't expect it to...

2010-02-10 Thread Michael Bayer

On Feb 10, 2010, at 8:36 AM, Kent wrote:

 I've researched this in the past and they don't.   I will look into
 re-introducing allow_null_pks as a new flag allow_partial_pks,
 defaults to True, will be honored by merge(), you set yours to False.
 this is 0.6 only.
 
 
 Thanks for your consideration, it seems that would be beneficial for
 us.  As a side note, though, if no databases allow this, why would we
 default to True instead of False?

because people map to outerjoins (often).   then you get a partial PK.

 
 Does allow_partial_pks have additional meaning, such as complain if
 the object only has part of the primary key set?

not allowing partial pks means don't consider (2, None) to be a primary key - 
it's treated like None.
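In other words, as a pure-Python sketch of the rule as described (not SQLAlchemy's actual implementation):

```python
def identity_is_usable(pk_tuple, allow_partial_pks=True):
    """Mirror the described rule: with allow_partial_pks=False, a primary
    key tuple containing any None is treated the same as all-None."""
    if all(v is None for v in pk_tuple):
        return False                       # never a usable identity
    if not allow_partial_pks and any(v is None for v in pk_tuple):
        return False                       # partial keys voided entirely
    return True

print(identity_is_usable((2, None)))                           # True: partial PKs honored
print(identity_is_usable((2, None), allow_partial_pks=False))  # False: treated like None
```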

 
 You mentioned the main thing was how this affects outer joins.  Can
 you expand on how this might cause outer joins to return no rows?  Is
 it because users still expected a row returned from the *other*
 tables, even though part of this key is null?
 (I don't want to make you go back through the effort of re-adding this
 flag if it might cause me unanticipated side-effects that force me to
 abandon it, so maybe pointing me to an example of the main complaint
 when setting it to False?  I'd like attempt to rule out that it might
 affect me.)

yes, an outerjoin can return a row for table A but NULL for table B.


It's totally fine; we flipped the defaults in 0.6 and we'd just be making 
the other behavior available again.


 
 Thanks
 
 
 




[sqlalchemy] How to order_by relation by another join?

2010-02-10 Thread Andrija Zarić
Let's say I've got a simple structure Order--Item--Detail.

class Detail(Base):
   ...

class Order(Base):
  ...

class Item(Base):
  ...
  detail = relation(Detail, uselist=False, lazy=False)
  order = relation(Order, uselist=False, backref='items')

Of course I can specify order_by for Order.items by any columns from
Item, but is there a way I can order_by a column defined in Detail?

I've naively tried something as backref('items', order_by=Detail.id),
but because Detail is anonymously joined to Item as e.g. 'details_1',
I've received ProgrammingError:  invalid reference to FROM-clause
entry for table details.




Re: [sqlalchemy] How to order_by relation by another join?

2010-02-10 Thread Michael Bayer

On Feb 10, 2010, at 10:17 AM, Andrija Zarić wrote:

 Let's say I've got simple structure Order--Item--Detail.
 
 class Detail(Base):
   ...
 
 class Order(Base):
  ...
 
 class Item(Base):
  ...
  detail = relation(Detail, uselist=False, lazy=False)
  order = relation(Order, uselist=False, backref='items')
 
 Of course I can specify order_by for Order.items by any columns from
 Item, but is there a way I can order_by a column defined in Detail?
 
 I've naively tried something as backref('items', order_by=Detail.id),
 but because Detail is anonymously joined to Item as e.g. 'details_1',
 I've received ProgrammingError:  invalid reference to FROM-clause
 entry for table details.

The Order.items collection doesn't have the ability to order by a remote column 
in the collection, unless you set the order_by to a subquery that joined out to 
the ultimate target you care about.   

Maybe, I haven't tried this, you could make an alternate (non primary) mapping 
to Item that was a join of Item and Detail, i.e. like:

itemdetail = mapper(Item.__table__.join(Detail.__table__), non_primary=True)

Order.items = relation(itemdetail, order_by=itemdetail.c.detail_id)

I'm not 100% sure the non-primary mapper allows additional attributes, and it 
might need to be a mapping of a select().select_from(join) rather than the join 
directly, but if you have some time to experiment you might get something out 
of that.

 
 




Re: [sqlalchemy] creating views declarative base is confusing

2010-02-10 Thread Michael Bayer

On Feb 10, 2010, at 5:35 AM, anusha kadambala wrote:

 hello all,
 
 I want to create view based on the following tables 
 
 from sqlalchemy import create_engine
 from sqlalchemy import Table, Column, Integer, String, MetaData, ForeignKey
 from sqlalchemy.ext.declarative import declarative_base
 from sqlalchemy.orm import sessionmaker,scoped_session,relation,backref
 
 engine = create_engine('postgresql:///try', echo=False)
 Base = declarative_base()
 
 class User(Base):
     __tablename__ = 'users'
     id = Column(Integer, primary_key=True)
     name = Column(String)
     fullname = Column(String)
     password = Column(String)
 
     def __init__(self, name, fullname, password):
         self.name = name
         self.fullname = fullname
         self.password = password
 
     def __repr__(self):
         return "User('%s','%s','%s')" % (self.name, self.fullname,
                                          self.password)
 
 users_table = User.__table__
 
 
 class Office(Base):
     __tablename__ = 'office'
     cid = Column(Integer, primary_key=True)
     name = Column(String)
     org = Column(String)
 
     def __init__(self, name, org):
         self.name = name
         self.org = org
 
     def __repr__(self):
         return "Office('%s','%s')" % (self.name, self.org)
 
 office_table = Office.__table__
 metadata = Base.metadata
 metadata.create_all(engine)
 Session = scoped_session(sessionmaker(bind=engine))
 Session.add_all([User('wendy', 'Wendy Williams', 'foobar'),
                  User('mary', 'Mary Contrary', 'xxg527'),
                  User('fred', 'Fred Flinstone', 'blah'),
                  Office('wendy', 'Comet'),
                  Office('kk', 'Comet')])
 Session.commit()
 Session.commit()
 
 I want to write a view class in which I can select user.name and office.name 
 where user.id = office.id, and which reflects in the database.
 
 I found something like this when I googled, but I didn't understand 
 how it actually works:

the quickest way is to just map to a join:

j = users_table.join(office_table, users_table.c.id==office_table.c.cid)

class UserOffice(Base):
    __table__ = j

    # disambiguate office.name from users.name
    office_name = j.c.office_name

print Session.query(UserOffice).filter(UserOffice.name=='wendy').all()
print Session.query(UserOffice).filter(UserOffice.office_name=='mary').all()

if you want to do a real CREATE VIEW, we have that new recipe at 
http://www.sqlalchemy.org/trac/wiki/UsageRecipes/Views , but the mapper part of 
it would still look like:

class UserOffice(Base):
    __table__ = myview

    # ...


 
 http://groups.google.com/group/sqlalchemy/browse_thread/thread/cd4455178d886e73
 
 
 
 -- 
 
 
 Njoy the share of Freedom :)
 Anusha Kadambala
 




Re: [sqlalchemy] How to order_by relation by another join?

2010-02-10 Thread Michael Bayer

On Feb 10, 2010, at 10:34 AM, Michael Bayer wrote:

 Maybe, I haven't tried this, you could make an alternate (non primary) 
 mapping to Item that was a join of Item and Detail, i.e. like:
 
 itemdetail = mapper(Item.__table__.join(Detail.__table__), non_primary=True)
 
 Order.items = relation(itemdetail, order_by=itemdetail.c.detail_id)
 
 I'm not 100% sure the non-primary mapper allowing additional attributes, and 
 it might need to be a mapping of a select().select_from(join) and not the 
 join directly, but if you have some time to experiment you might get 
 something out of that.

Here, I think it works:

from sqlalchemy import *
from sqlalchemy.orm import *
from sqlalchemy.ext.declarative import declarative_base

engine = create_engine('sqlite://', echo=True)
Base = declarative_base()

class Detail(Base):
    __tablename__ = 'detail'

    id = Column(Integer, primary_key=True)
    item_id = Column(Integer, ForeignKey('item.id'))

class Order(Base):
    __tablename__ = 'order'
    id = Column(Integer, primary_key=True)

class Item(Base):
    __tablename__ = 'item'
    id = Column(Integer, primary_key=True)
    order_id = Column(Integer, ForeignKey('order.id'))

    detail = relation(Detail, uselist=False, lazy=False)
    order = relation(Order, uselist=False)

j = Item.__table__.join(Detail.__table__)
itemdetail = mapper(Item, j, non_primary=True)
Order.items = relation(itemdetail, order_by=j.c.detail_id, viewonly=True)

metadata = Base.metadata
metadata.create_all(engine)
Session = scoped_session(sessionmaker(bind=engine))
Session.query(Order).options(eagerload(Order.items)).all()

the query is:

 SELECT order.id AS order_id, anon_1.item_id AS anon_1_item_id, 
anon_1.detail_id AS anon_1_detail_id, anon_1.item_order_id AS 
anon_1_item_order_id, anon_1.detail_item_id AS anon_1_detail_item_id 
FROM order LEFT OUTER JOIN (SELECT item.id AS item_id, item.order_id AS 
item_order_id, detail.id AS detail_id, detail.item_id AS detail_item_id 
FROM item JOIN detail ON item.id = detail.item_id) AS anon_1 ON order.id = 
anon_1.item_order_id ORDER BY anon_1.detail_id

However I can't currently get back_populates or backref to relate the two 
sides together, which isn't a problem if you don't need it.





[sqlalchemy] is sqlalchemy designed to work with gc turned on only ?

2010-02-10 Thread redfox
I'm interested in one question (I didn't find the answer in the docs): is
SQLAlchemy designed to work only with gc turned on? In other words, does
SQLAlchemy free unused objects itself if gc is turned off?




Re: [sqlalchemy] is sqlalchemy designed to work with gc turned on only ?

2010-02-10 Thread Michael Bayer

On Feb 10, 2010, at 11:12 AM, redfox wrote:

 I'm interested in one question (I didn't find the answer in the docs): is
 SQLAlchemy designed to work only with gc turned on? In other words, does
 SQLAlchemy free unused objects itself if gc is turned off?


All Python libraries I am familiar with are intended to work with GC on, 
including SQLAlchemy.  Some simpler libraries may be fine without GC turned on, 
but that is not coverage we currently support in SQLAlchemy.  The most we'd 
ever do is eliminate all cycles within frequently created/dropped objects.   
This is currently largely the case, although I can't speak for how complete 
that coverage is.

Note this does not include structural long-lasting objects like Table, 
MetaData, mapper(), etc., these very necessarily have cycles in them.
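The distinction matters because reference cycles are exactly what plain reference counting cannot reclaim; a small stdlib demonstration:

```python
import gc

class Node(object):
    pass

gc.disable()
a, b = Node(), Node()
a.peer, b.peer = b, a   # a reference cycle: refcounts never drop to zero
del a, b
# With the cyclic collector disabled, the pair stays in memory until an
# explicit gc.collect() (or re-enabling gc) frees it.
freed = gc.collect()    # collect() works even while gc is disabled
gc.enable()
print(freed)            # a positive count: the cycle was unreachable but uncollected
```

Objects with cycles (like Table, MetaData, and mapper structures) therefore depend on the cyclic collector; cycle-free objects can be reclaimed by refcounting alone.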



 
 




[sqlalchemy] bug in sqlite dialect?

2010-02-10 Thread Chris Withers

Hi All,

With SA 0.5.8 on Python 2.5, the attached test_with_default blows up with:

Traceback (most recent call last):
  File "test_default_arg_sqlite.py", line 46, in test_with_default
    peterb2 = session.query(PersonWITH).filter_by(name='PeterB').first()
  File "/usr/lib/python2.5/site-packages/SQLAlchemy-0.5.8-py2.5.egg/sqlalchemy/orm/query.py", line 1300, in first
    ret = list(self[0:1])
  File "/usr/lib/python2.5/site-packages/SQLAlchemy-0.5.8-py2.5.egg/sqlalchemy/orm/query.py", line 1221, in __getitem__
    return list(res)
  File "/usr/lib/python2.5/site-packages/SQLAlchemy-0.5.8-py2.5.egg/sqlalchemy/orm/query.py", line 1422, in instances
    rows = [process[0](context, row) for row in fetch]
  File "/usr/lib/python2.5/site-packages/SQLAlchemy-0.5.8-py2.5.egg/sqlalchemy/orm/query.py", line 2032, in main
    return _instance(row, None)
  File "/usr/lib/python2.5/site-packages/SQLAlchemy-0.5.8-py2.5.egg/sqlalchemy/orm/mapper.py", line 1748, in _instance
    populate_state(state, dict_, row, isnew, attrs, instancekey=identitykey)
  File "/usr/lib/python2.5/site-packages/SQLAlchemy-0.5.8-py2.5.egg/sqlalchemy/orm/mapper.py", line 1618, in populate_state
    populator(state, dict_, row, isnew=isnew, **flags)
  File "/usr/lib/python2.5/site-packages/SQLAlchemy-0.5.8-py2.5.egg/sqlalchemy/orm/strategies.py", line 120, in new_execute
    dict_[key] = row[col]
  File "/usr/lib/python2.5/site-packages/SQLAlchemy-0.5.8-py2.5.egg/sqlalchemy/engine/base.py", line 1348, in __getitem__
    return self.__parent._get_col(self.__row, key)
  File "/usr/lib/python2.5/site-packages/SQLAlchemy-0.5.8-py2.5.egg/sqlalchemy/engine/base.py", line 1620, in _get_col
    return processor(row[index])
  File "/usr/lib/python2.5/site-packages/SQLAlchemy-0.5.8-py2.5.egg/sqlalchemy/databases/sqlite.py", line 183, in process
    return fn(*[int(x or 0) for x in regexp.match(value).groups()])
TypeError: expected string or buffer

The problem is the default on the birthday column.
Is this a bug in the sqlite dialect or does the problem lie elsewhere?

cheers,

Chris
from decimal import Decimal
from sqlalchemy import create_engine
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker
from sqlalchemy.schema import Column
from sqlalchemy.types import Integer, String, Numeric, Date
import datetime
import unittest
import sqlite3


sqlite3.register_adapter(Decimal, str)
sqlite3.register_converter('NUMERIC', Decimal)

class Test(unittest.TestCase):

    def create_session(self, Base):
        engine = create_engine('sqlite://',
                               echo=False,
                               encoding='utf-8',
                               pool_recycle=3600,
                               connect_args={'detect_types':
                                   sqlite3.PARSE_DECLTYPES | sqlite3.PARSE_COLNAMES})
        Base.metadata.create_all(engine)
        return sessionmaker(bind=engine, autoflush=True, autocommit=False)()

    def test_with_default(self):
        Base = declarative_base()

        class PersonWITH(Base):
            __tablename__ = 'person_wi'

            id = Column(Integer, primary_key=True)
            name = Column(String(80), unique=True)
            age = Column(Numeric(precision=36, scale=12))
            money = Column(Numeric(precision=36, scale=12), index=True)
            birthday = Column(Date, default=datetime.date.today)

        session = self.create_session(Base)

        try:
            peterb = PersonWITH(name='PeterB', age='42.1')
            session.add(peterb)
            session.commit()

            peterb2 = session.query(PersonWITH).filter_by(name='PeterB').first()
            self.assertEquals(peterb, peterb2)
            self.assertEquals(peterb.age, Decimal('42.1'))
            self.assertEquals(peterb2.age, Decimal('42.1'))
            self.assertTrue(isinstance(peterb2.age, Decimal))
        finally:
            session.close()

    def test_without_default(self):
        Base = declarative_base()

        class PersonWO(Base):
            __tablename__ = 'person_wo'

            id = Column(Integer, primary_key=True)
            name = Column(String(80), unique=True)
            age = Column(Numeric(precision=36, scale=12))
            money = Column(Numeric(precision=36, scale=12), index=True)
            birthday = Column(Date)

        session = self.create_session(Base)
        try:
            peterb = PersonWO(name='PeterB', age='42.1')
            session.add(peterb)
            session.commit()

            peterb2 = session.query(PersonWO).filter_by(name='PeterB').first()
            self.assertEquals(peterb, peterb2)
            self.assertEquals(peterb.age, Decimal('42.1'))
            self.assertEquals(peterb2.age, Decimal('42.1'))
            self.assertTrue(isinstance(peterb2.age, Decimal))
        finally:
            session.close()

if __name__ == '__main__':
    unittest.main()

Re: [sqlalchemy] bug in sqlite dialect?

2010-02-10 Thread Michael Bayer

Not sure if this is obvious, but it's the sqlite3.PARSE_DECLTYPES.   The SQLite 
date types don't expect this to be turned on.   That is a handy feature which 
I'm not sure was available in such a simple form when I first wrote against the 
pysqlite dialect in Python 2.3.

A workaround is to use a dummy Date type that returns None for 
bind_processor() and result_processor().

I don't see any accessor on the SQLite connection that could tell us if this 
flag is enabled.  We don't want to do an isinstance() because those are quite 
expensive.

So what we can do here is utilize 0.6's "test the connection" trick: issue a 
"select current_timestamp()" on the SQLite connection, and if it comes back 
as datetime we'd assume PARSE_DECLTYPES is on, or at least that some kind of 
date-based converter has been added; the SQLite date types would then consult 
this flag.  I added #1685 for this, which is tentatively targeted at 0.6.0 just 
so I don't lose track of it.

We might want to look into having 0.6 set a default handler for date types in 
any case, would need to ensure its completely compatible with what we're doing 
now.

Also, not sure if you're aware: pool_recycle is not advisable with a :memory: 
database - it would zap out your DB.   sqlite also doesn't require any 
encoding, since it only accepts unicode strings - the param is unused by 
SQLAlchemy with sqlite. 
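To see why the string-parsing result processor blows up: with detect_types enabled, pysqlite itself already converts a column declared DATE into a date object before any downstream processor (which expects a string to regexp-match) ever sees it. A stdlib-only demonstration:

```python
import datetime
import sqlite3

conn = sqlite3.connect(":memory:", detect_types=sqlite3.PARSE_DECLTYPES)
conn.execute("CREATE TABLE person (birthday DATE)")
conn.execute("INSERT INTO person VALUES (?)", ("2010-02-10",))

# pysqlite's built-in "date" converter fires on the declared DATE type,
# so the fetched value is already a datetime.date, not a string - and a
# processor doing regexp.match(value) then fails with the TypeError above.
value = conn.execute("SELECT birthday FROM person").fetchone()[0]
print(type(value))
```

(The built-in date converters are deprecated in recent Python releases, but the behavior shown here is what the 0.5-era dialect was tripping over.)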






On Feb 10, 2010, at 12:12 PM, Chris Withers wrote:

 Hi All,
 
 With SA 0.5.8 on Python 2.5, the attached test_with_default blows up with:
 
 Traceback (most recent call last):
   File "test_default_arg_sqlite.py", line 46, in test_with_default
     peterb2 = session.query(PersonWITH).filter_by(name='PeterB').first()
   File "/usr/lib/python2.5/site-packages/SQLAlchemy-0.5.8-py2.5.egg/sqlalchemy/orm/query.py", line 1300, in first
     ret = list(self[0:1])
   File "/usr/lib/python2.5/site-packages/SQLAlchemy-0.5.8-py2.5.egg/sqlalchemy/orm/query.py", line 1221, in __getitem__
     return list(res)
   File "/usr/lib/python2.5/site-packages/SQLAlchemy-0.5.8-py2.5.egg/sqlalchemy/orm/query.py", line 1422, in instances
     rows = [process[0](context, row) for row in fetch]
   File "/usr/lib/python2.5/site-packages/SQLAlchemy-0.5.8-py2.5.egg/sqlalchemy/orm/query.py", line 2032, in main
     return _instance(row, None)
   File "/usr/lib/python2.5/site-packages/SQLAlchemy-0.5.8-py2.5.egg/sqlalchemy/orm/mapper.py", line 1748, in _instance
     populate_state(state, dict_, row, isnew, attrs, instancekey=identitykey)
   File "/usr/lib/python2.5/site-packages/SQLAlchemy-0.5.8-py2.5.egg/sqlalchemy/orm/mapper.py", line 1618, in populate_state
     populator(state, dict_, row, isnew=isnew, **flags)
   File "/usr/lib/python2.5/site-packages/SQLAlchemy-0.5.8-py2.5.egg/sqlalchemy/orm/strategies.py", line 120, in new_execute
     dict_[key] = row[col]
   File "/usr/lib/python2.5/site-packages/SQLAlchemy-0.5.8-py2.5.egg/sqlalchemy/engine/base.py", line 1348, in __getitem__
     return self.__parent._get_col(self.__row, key)
   File "/usr/lib/python2.5/site-packages/SQLAlchemy-0.5.8-py2.5.egg/sqlalchemy/engine/base.py", line 1620, in _get_col
     return processor(row[index])
   File "/usr/lib/python2.5/site-packages/SQLAlchemy-0.5.8-py2.5.egg/sqlalchemy/databases/sqlite.py", line 183, in process
     return fn(*[int(x or 0) for x in regexp.match(value).groups()])
 TypeError: expected string or buffer
 
 The problem is the default on the birthday column.
 Is this a bug in the sqlite dialect or does the problem lie elsewhere?
 
 cheers,
 
 Chris
 from decimal import Decimal
 from sqlalchemy import create_engine
 from sqlalchemy.ext.declarative import declarative_base
 from sqlalchemy.orm import sessionmaker
 from sqlalchemy.schema import Column
 from sqlalchemy.types import Integer, String, Numeric, Date
 import datetime
 import unittest
 import sqlite3
 
 
 sqlite3.register_adapter(Decimal, str)
 sqlite3.register_converter('NUMERIC', Decimal)
 
 class Test(unittest.TestCase):
 
     def create_session(self, Base):
         engine = create_engine('sqlite://',
                                echo=False,
                                encoding='utf-8',
                                pool_recycle=3600,
                                connect_args={'detect_types':
                                    sqlite3.PARSE_DECLTYPES | sqlite3.PARSE_COLNAMES})
         Base.metadata.create_all(engine)
         return sessionmaker(bind=engine, autoflush=True, autocommit=False)()
 
     def test_with_default(self):
         Base = declarative_base()
 
         class PersonWITH(Base):
             __tablename__ = 'person_wi'
 
             id = Column(Integer, primary_key=True)
             name = Column(String(80), unique=True)
             age = Column(Numeric(precision=36, scale=12))
             money = Column(Numeric(precision=36, scale=12), index=True)
             birthday = Column(Date, default=datetime.date.today)
 
         session = self.create_session(Base)
 
         try:
             peterb = PersonWITH(name='PeterB', 

[sqlalchemy] pairing merged object with the original

2010-02-10 Thread Kent
After merge() returns, is there a way for me to pair each object in
the returned merged_obj with the object it was created from?

For example:
merged_obj = session.merge(object)

At the top level, it is trivial: merged_obj was created because of the
instance object.

For single RelationProperties under the top level, it is fairly
simple, too.

That is:

merged.childattr was merged from object.childattr

Where it falls apart, I think, is if RelationProperty.use_list ==
True:

merged.list came from object.list, but is there a way for me to
reference the original objects inside the list?

Did merged.list[0] come from object.list[0] or object.list[1] or
object.list[2]?

In particular, I can't use the pk because it won't always be set (often
this will be a new record).

Any suggestions?

-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalch...@googlegroups.com.
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en.



[sqlalchemy] Warnings take a really long time / NotImplementedError

2010-02-10 Thread Jeff Peterson
First spin through, I get these errors/warnings:

/home/zope/.buildout/eggs/SQLAlchemy-0.6-py2.5.egg/sqlalchemy/engine/reflection.py:40:
 SAWarning: Did not recognize type 'ROWID' of column 'objid'
  ret = fn(self, con, *args, **kw)
/home/zope/.buildout/eggs/SQLAlchemy-0.6-py2.5.egg/sqlalchemy/engine/reflection.py:40:
 SAWarning: Did not recognize type 'LONG RAW' of column 'data'
  ret = fn(self, con, *args, **kw)
/home/zope/.buildout/eggs/SQLAlchemy-0.6-py2.5.egg/sqlalchemy/engine/reflection.py:40:
 SAWarning: Did not recognize type 'ROWID' of column 'source_rowid'
  ret = fn(self, con, *args, **kw)
/home/zope/.buildout/eggs/SQLAlchemy-0.6-py2.5.egg/sqlalchemy/engine/reflection.py:40:
 SAWarning: Did not recognize type 'ROWID' of column 'arowid'
  ret = fn(self, con, *args, **kw)
/home/zope/.buildout/eggs/SQLAlchemy-0.6-py2.5.egg/sqlalchemy/engine/reflection.py:40:
 SAWarning: Did not recognize type 'ROWID' of column 'browid'
  ret = fn(self, con, *args, **kw)
/home/zope/.buildout/eggs/SQLAlchemy-0.6-py2.5.egg/sqlalchemy/engine/reflection.py:40:
 SAWarning: Did not recognize type 'ROWID' of column 'crowid'
  ret = fn(self, con, *args, **kw)
/home/zope/.buildout/eggs/SQLAlchemy-0.6-py2.5.egg/sqlalchemy/engine/reflection.py:40:
 SAWarning: Did not recognize type 'ROWID' of column 'at_rowid'
  ret = fn(self, con, *args, **kw)
/home/zope/.buildout/eggs/SQLAlchemy-0.6-py2.5.egg/sqlalchemy/engine/reflection.py:40:
 SAWarning: Did not recognize type 'ROWID' of column 'drowid'
  ret = fn(self, con, *args, **kw)
/home/zope/.buildout/eggs/SQLAlchemy-0.6-py2.5.egg/sqlalchemy/engine/reflection.py:40:
 SAWarning: Did not recognize type 'ROWID' of column 'row_a'
  ret = fn(self, con, *args, **kw)
/home/zope/.buildout/eggs/SQLAlchemy-0.6-py2.5.egg/sqlalchemy/engine/reflection.py:40:
 SAWarning: Did not recognize type 'ROWID' of column 'row_b'
  ret = fn(self, con, *args, **kw)
/home/zope/.buildout/eggs/SQLAlchemy-0.6-py2.5.egg/sqlalchemy/engine/reflection.py:40:
 SAWarning: Did not recognize type 'ROWID' of column 'row_c'
  ret = fn(self, con, *args, **kw)
/home/zope/.buildout/eggs/SQLAlchemy-0.6-py2.5.egg/sqlalchemy/engine/reflection.py:40:
 SAWarning: Did not recognize type 'ROWID' of column 'textkey'
  ret = fn(self, con, *args, **kw)
/home/zope/.buildout/eggs/SQLAlchemy-0.6-py2.5.egg/sqlalchemy/engine/reflection.py:40:
 SAWarning: Did not recognize type 'ROWID' of column 'object_id'
  ret = fn(self, con, *args, **kw)
2010-02-10 14:00:33,891 ERROR [SiteError] http://portal-dev.craryindustries.com/testsa.html
Traceback (most recent call last):
  File "/home/zope/.buildout/eggs/zope.publisher-3.4.9-py2.5.egg/zope/publisher/publish.py", line 133, in publish
    result = publication.callObject(request, obj)
  File "/home/zope/.buildout/eggs/grok-1.0-py2.5.egg/grok/publication.py", line 89, in callObject
    return super(ZopePublicationSansProxy, self).callObject(request, ob)
  File "/home/zope/.buildout/eggs/zope.app.publication-3.4.3-py2.5.egg/zope/app/publication/zopepublication.py", line 167, in callObject
    return mapply(ob, request.getPositionalArguments(), request)
  File "/home/zope/.buildout/eggs/zope.publisher-3.4.9-py2.5.egg/zope/publisher/publish.py", line 108, in mapply
    return debug_call(obj, args)
  File "/home/zope/.buildout/eggs/zope.publisher-3.4.9-py2.5.egg/zope/publisher/publish.py", line 114, in debug_call
    return obj(*args)
  File "/home/zope/.buildout/eggs/grokcore.view-1.12.2-py2.5.egg/grokcore/view/components.py", line 64, in __call__
    return mapply(self.render, (), self.request)
  File "/home/zope/.buildout/eggs/zope.publisher-3.4.9-py2.5.egg/zope/publisher/publish.py", line 108, in mapply
    return debug_call(obj, args)
  File "/home/zope/.buildout/eggs/zope.publisher-3.4.9-py2.5.egg/zope/publisher/publish.py", line 114, in debug_call
    return obj(*args)
  File "/home/zope/development/grokenv/projects/portal/src/portal/lib/db/sql_alchemy_test.py", line 30, in render
    session = rdb.Session()
  File "/home/zope/.buildout/eggs/SQLAlchemy-0.6-py2.5.egg/sqlalchemy/orm/scoping.py", line 52, in __call__
    return self.registry()
  File "build/bdist.linux-i686/egg/sqlalchemy/util.py", line 1225, in __call__
  File "/home/zope/.buildout/eggs/z3c.saconfig-0.9.1-py2.5.egg/z3c/saconfig/scopedsession.py", line 12, in session_factory
    return utility.sessionFactory()
  File "/home/zope/.buildout/eggs/z3c.saconfig-0.9.1-py2.5.egg/z3c/saconfig/utility.py", line 70, in sessionFactory
    kw['bind'] = engine_factory()
  File "/home/zope/.buildout/eggs/z3c.saconfig-0.9.1-py2.5.egg/z3c/saconfig/utility.py", line 165, in __call__
    notify(EngineCreatedEvent(engine))
  File "/home/zope/.buildout/eggs/zope.event-3.4.0-py2.5.egg/zope/event/__init__.py", line 23, in notify
    subscriber(event)
  File "/home/zope/.buildout/eggs/zope.component-3.4.0-py2.5.egg/zope/component/event.py", line 26, in dispatch
    for ignored in zope.component.subscribers(event, None):
  File 

Re: [sqlalchemy] pairing merged object with the original

2010-02-10 Thread Michael Bayer

On Feb 10, 2010, at 2:49 PM, Kent wrote:

 After merge() returns, is there a way for me to pair each object in
 the returned merge_obj with the object it was created from?
 
 For example:
 merged_obj = session.merge(object)
 
 At the top level, it is trivial, merged_obj was created because of the
 instance object
 
 For single RelationProperties under the top level, it is fairly
 simple, too.
 
 That is:
 
 merged.childattr was merged from object.childattr
 
 Where it falls apart I think is if the RelationProperty.use_list ==
 True
 
 merged.list came from object.list, but is there a way for me to
 reference the original objects inside the list.
 
 Did merged.list[0] come from object.list[0] or object.list[1] or
 object_list[2]?
 
 I particularly can't use the pk because it won't always be set (often
 this will be a new record)
 
 Any suggestions?

the ordering of those lists (assuming they are lists and not sets) is 
deterministic, especially with regards to the pending objects that have been 
added as a result of your merge (i.e. the ones that won't have complete primary 
keys).   I would match them up based on comparison of the list of instances 
that are transient/pending.
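A sketch of that matching, under the assumption stated above that merge() preserves the positional order of the original list (pair_by_position is a hypothetical helper, not SQLAlchemy API):

```python
# Hypothetical helper: pair each original child with its merged counterpart
# by position, assuming merge() appends results in the same order it
# visited the originals. Any extra items in the merged collection have no
# original and are returned separately.
def pair_by_position(originals, merged):
    pairs = list(zip(originals, merged))
    unmatched = merged[len(originals):]
    return pairs, unmatched

pairs, extra = pair_by_position(["a", "b"], ["A", "B", "C"])
print(pairs)  # [('a', 'A'), ('b', 'B')]
print(extra)  # ['C']
```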





 
 




[sqlalchemy] RE: Warnings take a really long time / NotImplementedError

2010-02-10 Thread Jeff Peterson
BTW, this is using 0.6 beta1 build 6743 on Grok, reflecting a view from an 
Oracle (10.2) 10g DB.

--
Jeffrey D Peterson
Webmaster
Crary Industries, Inc.

From: sqlalchemy@googlegroups.com [mailto:sqlalch...@googlegroups.com] On 
Behalf Of Jeff Peterson
Sent: Wednesday, February 10, 2010 2:29 PM
To: sqlalchemy@googlegroups.com
Subject: [sqlalchemy] Warnings take a really long time / NotImplementedError

First spin through, I get these errors/warnings:

[... full SAWarning output and traceback as quoted in the original message above ...]

Re: [sqlalchemy] Warnings take a really long time / NotImplementedError

2010-02-10 Thread Michael Bayer

On Feb 10, 2010, at 3:28 PM, Jeff Peterson wrote:

 First spin though, I get these errors/warnings:
  
 /home/zope/.buildout/eggs/SQLAlchemy-0.6-py2.5.egg/sqlalchemy/engine/reflection.py:40:
  SAWarning: Did not recognize type 'ROWID' of column 'objid'
   ret = fn(self, con, *args, **kw)
 /home/zope/.buildout/eggs/SQLAlchemy-0.6-py2.5.egg/sqlalchemy/engine/reflection.py:40:
  SAWarning: Did not recognize type 'LONG RAW' of column 'data'
   ret = fn(self, con, *args, **kw)
 /home/zope/.buildout/eggs/SQLAlchemy-0.6-py2.5.egg/sqlalchemy/engine/reflection.py:40:
  SAWarning: Did not recognize type 'ROWID' of column 'source_rowid'
   ret = fn(self, con, *args, **kw)


these are Oracle column types that aren't present in the reflected types list.  
this warning is harmless (assuming you don't issue CREATE TABLE - which, as it 
happens, you do later).

   File 
 /home/zope/.buildout/eggs/megrok.rdb-0.10-py2.5.egg/megrok/rdb/setup.py, 
 line 93, in createTables
 metadata.create_all(engine)
 NotImplementedError: Can't generate DDL for the null type

this is more of a problem.  You're reflecting views (and, I assume, table 
objects) from your database, and then emitting metadata.create_all().  The 
views you've reflected are assumed to be tables, which don't exist, so it 
attempts to issue CREATE TABLE for them and fails due to the missing types 
above (luckily - otherwise it would generate a new table for every view).

You shouldn't be calling create_all().  Especially not in a web application 
when it starts up - that's kind of crazy - and also not when your application 
receives its table metadata through reflection.

 Is there any way to skip/speed up the unrecognized column warnings? 

what evidence do you have that the warnings themselves are slow?   Just because 
that's what you see dumped in your output has no relevance to the work that is 
actually going on - in this case, the vast majority of columns being reflected 
do *not* generate any warning, since you would appear to be reflecting at least 
12 views.  Reflecting a whole database is not a quick operation.
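As an aside, the warnings themselves can be silenced with the stdlib warnings machinery if the output is unwanted. A sketch using a stand-in warning class (the real class in SQLAlchemy is sqlalchemy.exc.SAWarning; the stand-in just lets the sketch run without SQLAlchemy installed):

```python
import warnings

class SAWarning(Warning):
    """Stand-in for sqlalchemy.exc.SAWarning."""

def noisy_reflection():
    # Emits the kind of warning reflection produces for unknown types.
    warnings.warn("Did not recognize type 'ROWID' of column 'objid'", SAWarning)
    return "reflected"

# Filter the warning category inside a scoped context, so the rest of
# the application still sees warnings normally.
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("ignore", SAWarning)
    result = noisy_reflection()

print(result, len(caught))  # reflected 0 - the warning was filtered out
```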





[sqlalchemy] Re: pairing merged object with the original

2010-02-10 Thread Kent
If I understand you correctly, you are saying
object.list[0] will always cause creation (or fetch) of merged.list[0]
object.list[1] will always cause creation (or fetch) of merged.list[1]
etc.

There may be also more merged.list[2], [3], etc...

Correct?

This is the merge code in 0.5.8:

if self.uselist:
    dest_list = []
    for current in instances:
        _recursive[(current, self)] = True
        obj = session._merge(current, dont_load=dont_load,
                             _recursive=_recursive)
        if obj is not None:
            dest_list.append(obj)
    if dont_load:
        coll = attributes.init_collection(dest_state, self.key)
        for c in dest_list:
            coll.append_without_event(c)
    else:
        getattr(dest.__class__, self.key).impl._set_iterable(
            dest_state, dest_dict, dest_list)


Can I rely on this implementation remaining ordered (deterministic), even
if it is rewritten for optimization purposes or something?

Also, I see that if obj is None, then dest_list.append() won't be
called, which would throw off my indexes.  I am wondering: is there a
more reliable mechanism?  Under what circumstances will obj be None?




On Feb 10, 3:30 pm, Michael Bayer mike...@zzzcomputing.com wrote:
 On Feb 10, 2010, at 2:49 PM, Kent wrote:



  After merge() returns, is there a way for me to pair each object in
  the returned merge_obj with the object it was created from?

  For example:
  merged_obj = session.merge(object)

  At the top level, it is trivial, merged_obj was created because of the
  instance object

  For single RelationProperties under the top level, it is fairly
  simple, too.

  That is:

  merged.childattr was merged from object.childattr

  Where it falls apart I think is if the RelationProperty.use_list ==
  True

  merged.list came from object.list, but is there a way for me to
  reference the original objects inside the list.

  Did merged.list[0] come from object.list[0] or object.list[1] or
  object_list[2]?

  I particularly can't use the pk because it won't always be set (often
  this will be a new record)

  Any suggestions?

 the ordering of those lists (assuming they are lists and not sets) are 
 deterministic, especially with regards to the pending objects that have been 
 added as a result of your merge (i.e. the ones that wont have complete 
 primary keys).   I would match them up based on comparison of the list of 
 instances that are transient/pending.









[sqlalchemy] Re: Problems with 0.6beta1

2010-02-10 Thread Chris
Thanks


On 10 Feb, 14:55, Michael Bayer mike...@zzzcomputing.com wrote:
 restored in r6742.

 On Feb 10, 2010, at 2:48 AM, Chris wrote:



  Hi All

  Have just upgraded to this version and am having the following issue.

  I use the execute method on a ScopedSession to run generic SQL
  statements in a TextClause.  The resulting BufferedRowResultProxy
  object used to have a keys attribute listing all the names of the
  fields in the result set. This appears to have been removed.
  How do I get a list of the field names for my query?

  Chris


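For readers landing on this thread: the underlying idea, getting the column names of an ad-hoc SQL result, looks like this at the DB-API level with the stdlib sqlite3 module (SQLAlchemy's result metadata is built from the same cursor.description data):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.execute("SELECT 1 AS id, 'spam' AS name")
# cursor.description is a sequence of 7-tuples; the first element of
# each tuple is the column name of the result set.
keys = [col[0] for col in cur.description]
print(keys)  # ['id', 'name']
```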



RE: [sqlalchemy] Warnings take a really long time / NotImplementedError

2010-02-10 Thread Jeff Peterson


--
Jeffrey D Peterson
Webmaster
Crary Industries, Inc.

From: sqlalchemy@googlegroups.com [mailto:sqlalch...@googlegroups.com] On 
Behalf Of Michael Bayer
Sent: Wednesday, February 10, 2010 2:38 PM
To: sqlalchemy@googlegroups.com
Subject: Re: [sqlalchemy] Warnings take a really long time / NotImplementedError


On Feb 10, 2010, at 3:28 PM, Jeff Peterson wrote:


First spin though, I get these errors/warnings:

/home/zope/.buildout/eggs/SQLAlchemy-0.6-py2.5.egg/sqlalchemy/engine/reflection.py:40:
 SAWarning: Did not recognize type 'ROWID' of column 'objid'
  ret = fn(self, con, *args, **kw)
/home/zope/.buildout/eggs/SQLAlchemy-0.6-py2.5.egg/sqlalchemy/engine/reflection.py:40:
 SAWarning: Did not recognize type 'LONG RAW' of column 'data'
  ret = fn(self, con, *args, **kw)
/home/zope/.buildout/eggs/SQLAlchemy-0.6-py2.5.egg/sqlalchemy/engine/reflection.py:40:
 SAWarning: Did not recognize type 'ROWID' of column 'source_rowid'
  ret = fn(self, con, *args, **kw)


these are oracle column types that aren't present in the reflected types list.  
 this error is harmless (assuming you don't issue CREATE TABLE like you're 
doing later).

  File 
/home/zope/.buildout/eggs/megrok.rdb-0.10-py2.5.egg/megrok/rdb/setup.py, line 
93, in createTables
metadata.create_all(engine)
NotImplementedError: Can't generate DDL for the null type

this is more of a problem.  you're reflecting views (and I assume table 
objects) from your database, and then emitting metadata.create_all() - the 
views you've reflected are assumed to be tables, which don't exist, and it 
attempts to issue CREATE TABLE for them, and fails due to the types missing 
above (but luckily, else it would generate a new table for every view).

You shouldn't be calling create_all().  Especially not in a web application 
when it starts up, thats kind of crazy, and also not when your application 
receives its table metadata through reflection.

The module calling create_all() is a third party lib for Grok, which clearly 
doesn't handle reflection very well as it makes that create call regardless.  
It may be I need to step back and try and handle the SQLA stuff on my own.

Is there any way to skip/speed up the unrecognized column warnings?

what evidence do you have that the warnings themselves are slow ?   just 
because thats what you see actually dumped in your output has no relevance to 
the work that is actually going on, in this case, the vast majority of columns 
being reflected that do *not* generate any warning, since you would appear to 
be reflecting at least 12 views.  Reflecting a whole database is not a quick 
operation.

I am, in fact, only reflecting one view, but you got me thinking: that view 
has 50+ columns and joins in two other views. Does it attempt to reflect all 
the tables/views that make up that view?






RE: [sqlalchemy] Warnings take a really long time / NotImplementedError

2010-02-10 Thread Jeff Peterson
From: sqlalchemy@googlegroups.com [mailto:sqlalch...@googlegroups.com] On 
Behalf Of Michael Bayer
Sent: Wednesday, February 10, 2010 2:38 PM
To: sqlalchemy@googlegroups.com
Subject: Re: [sqlalchemy] Warnings take a really long time / NotImplementedError


On Feb 10, 2010, at 3:28 PM, Jeff Peterson wrote:

First spin though, I get these errors/warnings:

/home/zope/.buildout/eggs/SQLAlchemy-0.6-py2.5.egg/sqlalchemy/engine/reflection.py:40:
 SAWarning: Did not recognize type 'ROWID' of column 'objid'
  ret = fn(self, con, *args, **kw)
/home/zope/.buildout/eggs/SQLAlchemy-0.6-py2.5.egg/sqlalchemy/engine/reflection.py:40:
 SAWarning: Did not recognize type 'LONG RAW' of column 'data'
  ret = fn(self, con, *args, **kw)
/home/zope/.buildout/eggs/SQLAlchemy-0.6-py2.5.egg/sqlalchemy/engine/reflection.py:40:
 SAWarning: Did not recognize type 'ROWID' of column 'source_rowid'
  ret = fn(self, con, *args, **kw)


these are oracle column types that aren't present in the reflected types list.  
 this error is harmless (assuming you don't issue CREATE TABLE like you're 
doing later).

  File 
/home/zope/.buildout/eggs/megrok.rdb-0.10-py2.5.egg/megrok/rdb/setup.py, line 
93, in createTables
metadata.create_all(engine)
NotImplementedError: Can't generate DDL for the null type

this is more of a problem.  you're reflecting views (and I assume table 
objects) from your database, and then emitting metadata.create_all() - the 
views you've reflected are assumed to be tables, which don't exist, and it 
attempts to issue CREATE TABLE for them, and fails due to the types missing 
above (but luckily, else it would generate a new table for every view).

You shouldn't be calling create_all().  Especially not in a web application 
when it starts up, thats kind of crazy, and also not when your application 
receives its table metadata through reflection.

The module calling create_all() is a third party lib for Grok, which clearly 
doesn't handle reflection very well as it makes that create call regardless.  
It may be I need to step back and try and handle the SQLA stuff on my own.

Is there any way to skip/speed up the unrecognized column warnings?

what evidence do you have that the warnings themselves are slow ?   just 
because thats what you see actually dumped in your output has no relevance to 
the work that is actually going on, in this case, the vast majority of columns 
being reflected that do *not* generate any warning, since you would appear to 
be reflecting at least 12 views.  Reflecting a whole database is not a quick 
operation.

I am, in fact, only reflecting one view, but you got me thinking: that view 
has 50+ columns and joins in two other views. Does it attempt to reflect all 
the tables/views that make up that view?

However, I had made some changes to the lib to allow the reflected views to be 
skipped, basically removing them from the metadata, which worked: the views 
were reflected, but it still took 30-40 seconds to reflect this one view.

The code for this:

class CrarySalesPart(rdb.Model):
    rdb.reflected()
    rdb.tablename('crary_sales_part')
    rdb.tableargs(schema='crar1app', useexisting=True)

    contract = Column('contract', String, nullable=False, primary_key=True)
    catalog_no = Column('catalog_no', String, nullable=False, primary_key=True)


class Index(grok.View):
    grok.context(Portal)
    grok.name('testsa.html')

    def render(self):
        session = rdb.Session()
        sp = session.query(CrarySalesPart).filter(
            CrarySalesPart.contract == '20').limit(10)
        msg = ''.join(['<p style="padding:0;margin:0">%s: %s, %s</p>' %
                       (o.catalog_no, o.catalog_desc, o.part_product_code)
                       for o in sp])

        return '<html><head></head><body>%s</body></html>' % msg

The first time I render that view, the reflection takes place and it takes the 
30-40 seconds to load the page (during which time the warnings are being 
generated), once it's mapped it is very fast.





Re: [sqlalchemy] Re: pairing merged object with the original

2010-02-10 Thread Michael Bayer

On Feb 10, 2010, at 3:52 PM, Kent wrote:

 If I understand you correctly, you are saying
 object.list[0] will always cause creation (or fetch) of merged.list[0]
 object.list[1] will always cause creation (or fetch) of merged.list[1]
 etc.
 
 There may be also more merged.list[2], [3], etc...
 
 Correct?
 
 This is the merge code 0.5.8:
 
if self.uselist:
dest_list = []
for current in instances:
_recursive[(current, self)] = True
obj = session._merge(current, dont_load=dont_load,
 _recursive=_recursive)
if obj is not None:
dest_list.append(obj)
if dont_load:
coll = attributes.init_collection(dest_state,
 self.key)
for c in dest_list:
coll.append_without_event(c)
else:
getattr(dest.__class__,
 self.key).impl._set_iterable(dest_state, dest_dict, dest_list)
 
 
 Can I rely this implementation remaining ordered (deterministic), even
 if it is re-written for optimization purposes or something?

as long as you're using lists for your relations' collection implementations, 
there's no reason the order of pending/transients would change.  The objects 
coming back from the DB are not deterministic unless you add order_by to your 
relation, but that's why I said to process those separately.

 
 Also, I see that if obj is None, then dest_list.append() won't be
 called, which would mess up my indexes.  I am wondering is there a
 more sure mechanism?  Under what circumstances will obj be None?

There's no codepath I can see where that can be None, and there's no test that 
generates a None at that point; I'm not really sure why that check is there.   
I'd want to dig back to find its origins before removing it, but _merge() pretty 
explicitly doesn't return None these days.




 
 
 
 
 On Feb 10, 3:30 pm, Michael Bayer mike...@zzzcomputing.com wrote:
 On Feb 10, 2010, at 2:49 PM, Kent wrote:
 
 
 
 After merge() returns, is there a way for me to pair each object in
 the returned merge_obj with the object it was created from?
 
 For example:
 merged_obj = session.merge(object)
 
 At the top level, it is trivial, merged_obj was created because of the
 instance object
 
 For single RelationProperties under the top level, it is fairly
 simple, too.
 
 That is:
 
 merged.childattr was merged from object.childattr
 
 Where it falls apart I think is if the RelationProperty.use_list ==
 True
 
 merged.list came from object.list, but is there a way for me to
 reference the original objects inside the list.
 
 Did merged.list[0] come from object.list[0] or object.list[1] or
 object_list[2]?
 
 I particularly can't use the pk because it won't always be set (often
 this will be a new record)
 
 Any suggestions?
 
 the ordering of those lists (assuming they are lists and not sets) are 
 deterministic, especially with regards to the pending objects that have been 
 added as a result of your merge (i.e. the ones that wont have complete 
 primary keys).   I would match them up based on comparison of the list of 
 instances that are transient/pending.
 
 
 



Re: [sqlalchemy] Warnings take a really long time / NotImplementedError

2010-02-10 Thread Michael Bayer

On Feb 10, 2010, at 4:13 PM, Jeff Peterson wrote:
  
 The first time I render that view, the reflection takes place and it takes 
 the 30-40 seconds to load the page (during which time the warnings are being 
 generated), once it’s mapped it is very fast.

You should probably have reflection occur when your app starts, not when a page 
is hit, but anyway, I don't know why that view takes 40 seconds to reflect. I 
would advise enabling SQL echoing to see what queries are taking place and 
roughly how long they take to complete.
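For instance, echoing can be switched on at engine creation, and reflection can be limited to just the objects needed via `MetaData.reflect(only=...)`. A sketch against an in-memory SQLite database using the modern API (the 0.5/0.6-era signature differed slightly); the table names are hypothetical:

```python
from sqlalchemy import MetaData, create_engine, text

# echo=True would log every SQL statement, useful for timing reflection
engine = create_engine("sqlite://", echo=False)
with engine.begin() as conn:
    conn.execute(text("CREATE TABLE orders (id INTEGER PRIMARY KEY)"))
    conn.execute(text("CREATE TABLE audit_log (id INTEGER PRIMARY KEY)"))

# Reflect only the tables actually needed instead of the whole schema
metadata = MetaData()
metadata.reflect(bind=engine, only=["orders"])
```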



  



[sqlalchemy] Re: pairing merged object with the original

2010-02-10 Thread Kent
Very good, thanks.

Although, I'm pretty sure I understand what you are saying, what
exactly do you mean by pending/transients?


On Feb 10, 4:13 pm, Michael Bayer mike...@zzzcomputing.com wrote:
 On Feb 10, 2010, at 3:52 PM, Kent wrote:



  If I understand you correctly, you are saying
  object.list[0] will always cause creation (or fetch) of merged.list[0]
  object.list[1] will always cause creation (or fetch) of merged.list[1]
  etc.

  There may be also more merged.list[2], [3], etc...

  Correct?

  This is the merge code 0.5.8:

         if self.uselist:
             dest_list = []
             for current in instances:
                 _recursive[(current, self)] = True
                 obj = session._merge(current, dont_load=dont_load,
  _recursive=_recursive)
                 if obj is not None:
                     dest_list.append(obj)
             if dont_load:
                 coll = attributes.init_collection(dest_state,
  self.key)
                 for c in dest_list:
                     coll.append_without_event(c)
             else:
                 getattr(dest.__class__,
  self.key).impl._set_iterable(dest_state, dest_dict, dest_list)

  Can I rely this implementation remaining ordered (deterministic), even
  if it is re-written for optimization purposes or something?

 as long as you're using lists for your relations' collection implementations 
 there's no reason the order of pending/transients would change.  The objects 
 coming back from the DB are not deterministic unless you add order_by to your 
 relation, but thats why i said process those separately.



  Also, I see that if obj is None, then dest_list.append() won't be
  called, which would mess up my indexes.  I am wondering is there a
  more sure mechanism?  Under what circumstances will obj be None?

 There's no codepath I can see where that can be None and there's no test that 
 generates a None at that point, I'm not really sure why that check is there.  
  I'd want to dig back to find its origins before removing it but _merge() 
 pretty explicitly doesn't return None these days.



  On Feb 10, 3:30 pm, Michael Bayer mike...@zzzcomputing.com wrote:
  On Feb 10, 2010, at 2:49 PM, Kent wrote:

  After merge() returns, is there a way for me to pair each object in
  the returned merge_obj with the object it was created from?

  For example:
  merged_obj = session.merge(object)

  At the top level, it is trivial, merged_obj was created because of the
  instance object

  For single RelationProperties under the top level, it is fairly
  simple, too.

  That is:

  merged.childattr was merged from object.childattr

  Where it falls apart I think is if the RelationProperty.use_list ==
  True

  merged.list came from object.list, but is there a way for me to
  reference the original objects inside the list.

  Did merged.list[0] come from object.list[0] or object.list[1] or
  object_list[2]?

  I particularly can't use the pk because it won't always be set (often
  this will be a new record)

  Any suggestions?

  the ordering of those lists (assuming they are lists and not sets) are 
  deterministic, especially with regards to the pending objects that have 
  been added as a result of your merge (i.e. the ones that wont have 
  complete primary keys).   I would match them up based on comparison of the 
  list of instances that are transient/pending.







[sqlalchemy] Re: pairing merged object with the original

2010-02-10 Thread Kent
Further, if I inspect the returned object *directly* after the call to
merge(), then aren't I guaranteed that any relations with use_list=True
will have the same length, since that is the point of merge in
the first place?

That being the case, I can always simply correspond the merged index
with the original instances, correct (regardless of whether it is a
newly created object or was get()'ed from the database)?

Correct?


On Feb 10, 4:28 pm, Kent k...@retailarchitects.com wrote:
 Very good, thanks.

 Although, I'm pretty sure I understand what you are saying, what
 exactly do you mean by pending/transients?

 On Feb 10, 4:13 pm, Michael Bayer mike...@zzzcomputing.com wrote:

  On Feb 10, 2010, at 3:52 PM, Kent wrote:

   If I understand you correctly, you are saying
   object.list[0] will always cause creation (or fetch) of merged.list[0]
   object.list[1] will always cause creation (or fetch) of merged.list[1]
   etc.

   There may be also more merged.list[2], [3], etc...

   Correct?

   This is the merge code 0.5.8:

          if self.uselist:
              dest_list = []
              for current in instances:
                  _recursive[(current, self)] = True
                  obj = session._merge(current, dont_load=dont_load,
   _recursive=_recursive)
                  if obj is not None:
                      dest_list.append(obj)
              if dont_load:
                  coll = attributes.init_collection(dest_state,
   self.key)
                  for c in dest_list:
                      coll.append_without_event(c)
              else:
                  getattr(dest.__class__,
   self.key).impl._set_iterable(dest_state, dest_dict, dest_list)

   Can I rely this implementation remaining ordered (deterministic), even
   if it is re-written for optimization purposes or something?

  as long as you're using lists for your relations' collection 
  implementations there's no reason the order of pending/transients would 
  change.  The objects coming back from the DB are not deterministic unless 
  you add order_by to your relation, but thats why i said process those 
  separately.

   Also, I see that if obj is None, then dest_list.append() won't be
   called, which would mess up my indexes.  I am wondering is there a
   more sure mechanism?  Under what circumstances will obj be None?

  There's no codepath I can see where that can be None and there's no test 
  that generates a None at that point, I'm not really sure why that check is 
  there.   I'd want to dig back to find its origins before removing it but 
  _merge() pretty explicitly doesn't return None these days.

   On Feb 10, 3:30 pm, Michael Bayer mike...@zzzcomputing.com wrote:
   On Feb 10, 2010, at 2:49 PM, Kent wrote:

   After merge() returns, is there a way for me to pair each object in
   the returned merge_obj with the object it was created from?

   For example:
   merged_obj = session.merge(object)

   At the top level, it is trivial, merged_obj was created because of the
   instance object

   For single RelationProperties under the top level, it is fairly
   simple, too.

   That is:

   merged.childattr was merged from object.childattr

   Where it falls apart I think is if the RelationProperty.use_list ==
   True

   merged.list came from object.list, but is there a way for me to
   reference the original objects inside the list.

   Did merged.list[0] come from object.list[0] or object.list[1] or
   object_list[2]?

   I particularly can't use the pk because it won't always be set (often
   this will be a new record)

   Any suggestions?

   the ordering of those lists (assuming they are lists and not sets) are 
   deterministic, especially with regards to the pending objects that have 
   been added as a result of your merge (i.e. the ones that wont have 
   complete primary keys).   I would match them up based on comparison of 
   the list of instances that are transient/pending.





[sqlalchemy] Collation and Column()

2010-02-10 Thread Christoph Burgmer
Having collations on a per-column basis in MySQL and SQLite, I'd like
to specify the collation when creating tables/views. I have been
searching Google and the source for some hours now, and it seems there's just
no way to handle it easily with SA.
MySQL has support via its dialect-specific VARCHAR, but SQLite has nothing similar.
Generic support is absent.

Could a simple CP of this functionality give SQLite similar support?
I'd file a patch then.

Is there any other way to select a collation via SA, either globally,
or via text() or similar?

I'd hate to completely rewrite the CREATE TABLE statements.

-Christoph
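For reference, later SQLAlchemy versions grew a generic `collation` argument on the string types (not available in the 0.5/0.6 era this thread targets). A sketch of the DDL it produces; the table and collation names are illustrative:

```python
from sqlalchemy import Column, Integer, MetaData, String, Table
from sqlalchemy.schema import CreateTable

metadata = MetaData()
words = Table(
    "words", metadata,
    Column("id", Integer, primary_key=True),
    # 'NOCASE' is a built-in SQLite collation; on MySQL you would use
    # something like 'utf8_bin' instead
    Column("word", String(50, collation="NOCASE")),
)

# The compiler renders: word VARCHAR(50) COLLATE "NOCASE"
ddl = str(CreateTable(words))
```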




[sqlalchemy] simplest pattern to query a database

2010-02-10 Thread Tarek Ziadé
Hi,

I am using this pattern to execute a simple query on a database:

def execute(sqluri, query):
    engine = create_engine(sqluri)
    connection = engine.connect()
    try:
        connection.execute(query)
    finally:
        connection.close()

I was wondering if this was the best pattern, or if something simpler exists..


Regards
Tarek
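One common refinement: engines are intended to be long-lived and pooled, so create the engine once and reuse it, closing only the connection per call. A sketch using modern SQLAlchemy, where plain-string queries go through `text()`; the function name mirrors the pattern above:

```python
from contextlib import closing
from sqlalchemy import create_engine, text

engine = create_engine("sqlite://")  # create once, reuse across calls

def execute(query):
    # closing() guarantees connection.close() even if execute() raises,
    # mirroring the try/finally in the original pattern
    with closing(engine.connect()) as connection:
        return connection.execute(text(query)).fetchall()

rows = execute("SELECT 1 AS x")
```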

-- 
Tarek Ziadé | http://ziade.org




RE: [sqlalchemy] Warnings take a really long time / NotImplementedError

2010-02-10 Thread Jeff Peterson


--
Jeffrey D Peterson
Webmaster
Crary Industries, Inc.

From: sqlalchemy@googlegroups.com [mailto:sqlalch...@googlegroups.com] On 
Behalf Of Michael Bayer
Sent: Wednesday, February 10, 2010 3:18 PM
To: sqlalchemy@googlegroups.com
Subject: Re: [sqlalchemy] Warnings take a really long time / NotImplementedError


On Feb 10, 2010, at 4:13 PM, Jeff Peterson wrote:

The first time I render that view, the reflection takes place and it takes the 
30-40 seconds to load the page (during which time the warnings are being 
generated), once it's mapped it is very fast.

You should probably have reflection occur when your app starts, not when a page 
is hit, but anyway, I dont know why that view takes 40 seconds to reflect and I 
would advise enabling SQL echoing to see what queries are taking place and 
roughly how long they take to complete.

It's touching a ton of tables, hundreds...if I had to guess I'd say every table 
in the schema.  The reasons for this are unknown to me, certainly all those 
tables are not related specifically to the single view I am attempting to 
reflect.

The other thing I am noticing is the code in the lib does this:

reflectTables(metadata)
createTables(metadata)
notify(DatabaseSetupEvent(metadata))

reflectTables does, among other things, a metadata.reflect(bind=engine)

createTables looks like:

def createTables(metadata):
    """Create class-specified tables."""
    engine = Engine()
    metadata.create_all(engine)


So, the metadata has (or could have) both reflected tables/views and tables 
needing to be created, which would be fine assuming the reflected class is 
actually a table in the DB. We know in this case it isn't: it's a view, and 
so create_all() tried to create it.  So the issue is: what's the best solution 
for stopping this?  Should the reflected views simply be removed from the 
metadata after reflection? Or is there some simpler answer?

TIA,




[sqlalchemy] Warning: Incorrect datetime value

2010-02-10 Thread Bobby Impollonia
I am constructing queries involving MSTimeStamp fields in MySQL and am
receiving "Incorrect datetime value" warnings even in
situations where my queries are valid in MySQL.

How do I modify the following query so that sqlalchemy will accept it
without warning:
session.query(MyClass.id).filter(MyClass.timestamp > (func.now() -
2)).first()
?

The generated sql is
SELECT my_table.id AS my_table_id
FROM my_table
WHERE my_table.timestamp > now() - %s
LIMIT 0, 1

The warning I get is
/usr/lib/pymodules/python2.5/sqlalchemy/engine/default.py:123:
Warning: Incorrect datetime value: '20100209953011.00' for column
'timestamp' at row 1
  cursor.execute(statement, parameters)

Why is sqlalchemy even seeing the value '20100209953011.00', which
is the outcome of the now() - 2 operation inside the query?

I tried changing
(func.now() - 2)
to
func.timestamp(func.now() - 2)
but I still get the same warning.

If I change 2 to 200, I don't get warned. Why would this be?

It works with no warning if I change it to
session.query(MyClass.id).filter(func.now() - MyClass.timestamp < 2).first()
, but that is not an acceptable solution because now MySQL can't use
my index on MyClass.timestamp.

I am using SQLA 0.5.5
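One workaround that keeps the index usable: do the date arithmetic in Python so the bound parameter sent to MySQL is a real datetime rather than the result of MySQL's numeric treatment of `now() - 2` (which produces odd values like the one in the warning). A sketch; the query line is commented because `MyClass` and the session are assumed from the post:

```python
from datetime import datetime, timedelta

# Compute the cutoff client-side; the bound parameter sent to MySQL is
# then a proper datetime, so the comparison stays index-friendly.
cutoff = datetime.utcnow() - timedelta(seconds=2)

# Hypothetical usage, assuming MyClass and a session from the post:
# session.query(MyClass.id).filter(MyClass.timestamp > cutoff).first()
```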




Re: [sqlalchemy] Re: pairing merged object with the original

2010-02-10 Thread Michael Bayer

On Feb 10, 2010, at 4:28 PM, Kent wrote:

 Very good, thanks.
 
 Although, I'm pretty sure I understand what you are saying, what
 exactly do you mean by pending/transients?

see the description here:

http://www.sqlalchemy.org/docs/session.html#quickie-intro-to-object-states
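In short: a transient object has never been added to a session, while a pending one has been added but not yet flushed and shows up in `session.new`. A minimal sketch with a hypothetical `Item` model against in-memory SQLite, using the modern declarative API:

```python
from sqlalchemy import Column, Integer, create_engine
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class Item(Base):
    __tablename__ = "item"
    id = Column(Integer, primary_key=True)

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
session = Session(engine)

item = Item()                  # transient: not in any session yet
session.add(item)              # pending: now a member of session.new
was_pending = item in session.new
session.flush()                # persistent: flushed, leaves session.new
still_pending = item in session.new
```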


 
 
 On Feb 10, 4:13 pm, Michael Bayer mike...@zzzcomputing.com wrote:
 On Feb 10, 2010, at 3:52 PM, Kent wrote:
 
 
 
 If I understand you correctly, you are saying
 object.list[0] will always cause creation (or fetch) of merged.list[0]
 object.list[1] will always cause creation (or fetch) of merged.list[1]
 etc.
 
 There may be also more merged.list[2], [3], etc...
 
 Correct?
 
 This is the merge code 0.5.8:
 
if self.uselist:
dest_list = []
for current in instances:
_recursive[(current, self)] = True
obj = session._merge(current, dont_load=dont_load,
 _recursive=_recursive)
if obj is not None:
dest_list.append(obj)
if dont_load:
coll = attributes.init_collection(dest_state,
 self.key)
for c in dest_list:
coll.append_without_event(c)
else:
getattr(dest.__class__,
 self.key).impl._set_iterable(dest_state, dest_dict, dest_list)
 
 Can I rely this implementation remaining ordered (deterministic), even
 if it is re-written for optimization purposes or something?
 
 as long as you're using lists for your relations' collection implementations 
 there's no reason the order of pending/transients would change.  The objects 
 coming back from the DB are not deterministic unless you add order_by to 
 your relation, but thats why i said process those separately.
 
 
 
 Also, I see that if obj is None, then dest_list.append() won't be
 called, which would mess up my indexes.  I am wondering is there a
 more sure mechanism?  Under what circumstances will obj be None?
 
 There's no codepath I can see where that can be None and there's no test 
 that generates a None at that point, I'm not really sure why that check is 
 there.   I'd want to dig back to find its origins before removing it but 
 _merge() pretty explicitly doesn't return None these days.
 
 
 
 On Feb 10, 3:30 pm, Michael Bayer mike...@zzzcomputing.com wrote:
 On Feb 10, 2010, at 2:49 PM, Kent wrote:
 
 After merge() returns, is there a way for me to pair each object in
 the returned merge_obj with the object it was created from?
 
 For example:
 merged_obj = session.merge(object)
 
 At the top level, it is trivial, merged_obj was created because of the
 instance object
 
 For single RelationProperties under the top level, it is fairly
 simple, too.
 
 That is:
 
 merged.childattr was merged from object.childattr
 
 Where it falls apart I think is if the RelationProperty.use_list ==
 True
 
 merged.list came from object.list, but is there a way for me to
 reference the original objects inside the list.
 
 Did merged.list[0] come from object.list[0] or object.list[1] or
 object_list[2]?
 
 I particularly can't use the pk because it won't always be set (often
 this will be a new record)
 
 Any suggestions?
 
 the ordering of those lists (assuming they are lists and not sets) are 
 deterministic, especially with regards to the pending objects that have 
 been added as a result of your merge (i.e. the ones that wont have 
 complete primary keys).   I would match them up based on comparison of the 
 list of instances that are transient/pending.
 






Re: [sqlalchemy] Re: pairing merged object with the original

2010-02-10 Thread Michael Bayer

On Feb 10, 2010, at 4:36 PM, Kent wrote:

 Further, if I inspect the returned object *directly* after the call to
 merge(), then aren't I guaranteed that any relations with use_list=True
 will have the same length, since that is the point of merge in
 the first place?

you can assume the lengths are the same.   I'm not sure, though, why you aren't 
using the attributes.get_history() function I showed you, which would allow you 
to see anything that changed directly.   It seems a lot simpler than what you're 
trying to do.
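The `attributes.get_history()` call referred to here returns a `History` tuple of `(added, unchanged, deleted)` for one attribute. A minimal sketch with a hypothetical model against in-memory SQLite, using the modern declarative API:

```python
from sqlalchemy import Column, Integer, create_engine
from sqlalchemy.orm import Session, attributes, declarative_base

Base = declarative_base()

class Item(Base):
    __tablename__ = "item"
    id = Column(Integer, primary_key=True)
    qty = Column(Integer)

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
session = Session(engine)
session.add(Item(id=1, qty=5))
session.commit()

item = session.get(Item, 1)
item.qty = 7
# hist.added holds the new value, hist.deleted the value it replaced
hist = attributes.get_history(item, "qty")
```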

 
 That being the case, I can always simply correspond the merged index
 with the original instances, correct (regardless of whether it is a
 newly created object or was get()'ed from the database)?
 
 Correct?

Yeah, looking at the source, it's actually wholesale replacing the list on the 
target object, so it's a direct copy.  I had the notion that it was appending to 
the list but I was incorrect.




 
 
 On Feb 10, 4:28 pm, Kent k...@retailarchitects.com wrote:
 Very good, thanks.
 
 Although, I'm pretty sure I understand what you are saying, what
 exactly do you mean by pending/transients?
 
 On Feb 10, 4:13 pm, Michael Bayer mike...@zzzcomputing.com wrote:
 
 On Feb 10, 2010, at 3:52 PM, Kent wrote:
 
 If I understand you correctly, you are saying
 object.list[0] will always cause creation (or fetch) of merged.list[0]
 object.list[1] will always cause creation (or fetch) of merged.list[1]
 etc.
 
 There may be also more merged.list[2], [3], etc...
 
 Correct?
 
 This is the merge code 0.5.8:
 
if self.uselist:
dest_list = []
for current in instances:
_recursive[(current, self)] = True
obj = session._merge(current, dont_load=dont_load,
 _recursive=_recursive)
if obj is not None:
dest_list.append(obj)
if dont_load:
coll = attributes.init_collection(dest_state,
 self.key)
for c in dest_list:
coll.append_without_event(c)
else:
getattr(dest.__class__,
 self.key).impl._set_iterable(dest_state, dest_dict, dest_list)
 
 Can I rely this implementation remaining ordered (deterministic), even
 if it is re-written for optimization purposes or something?
 
 as long as you're using lists for your relations' collection 
 implementations there's no reason the order of pending/transients would 
 change.  The objects coming back from the DB are not deterministic unless 
 you add order_by to your relation, but thats why i said process those 
 separately.
 
 Also, I see that if obj is None, then dest_list.append() won't be
 called, which would mess up my indexes.  I am wondering is there a
 more sure mechanism?  Under what circumstances will obj be None?
 
 There's no codepath I can see where that can be None and there's no test 
 that generates a None at that point, I'm not really sure why that check is 
 there.   I'd want to dig back to find its origins before removing it but 
 _merge() pretty explicitly doesn't return None these days.
 
 On Feb 10, 3:30 pm, Michael Bayer mike...@zzzcomputing.com wrote:
 On Feb 10, 2010, at 2:49 PM, Kent wrote:
 
 After merge() returns, is there a way for me to pair each object in
 the returned merge_obj with the object it was created from?
 
 For example:
 merged_obj = session.merge(object)
 
 At the top level, it is trivial, merged_obj was created because of the
 instance object
 
 For single RelationProperties under the top level, it is fairly
 simple, too.
 
 That is:
 
 merged.childattr was merged from object.childattr
 
 Where it falls apart I think is if the RelationProperty.use_list ==
 True
 
 merged.list came from object.list, but is there a way for me to
 reference the original objects inside the list.
 
 Did merged.list[0] come from object.list[0] or object.list[1] or
 object_list[2]?
 
 I particularly can't use the pk because it won't always be set (often
 this will be a new record)
 
 Any suggestions?
 
 the ordering of those lists (assuming they are lists and not sets) are 
 deterministic, especially with regards to the pending objects that have 
 been added as a result of your merge (i.e. the ones that wont have 
 complete primary keys).   I would match them up based on comparison of 
 the list of instances that are transient/pending.
 
 --
 You received this message because you are subscribed to the Google 
 Groups sqlalchemy group.
 To post to this group, send email to sqlalch...@googlegroups.com.
 To unsubscribe from this group, send email to 
 sqlalchemy+unsubscr...@googlegroups.com.
 For more options, visit this group at
 http://groups.google.com/group/sqlalchemy?hl=en.
 

[sqlalchemy] Re: SQLAlchemy goes back to database when I wouldn't expect it to...

2010-02-10 Thread Kent

When I do something simple like this script:

o=Order()
o.orderid = 'KBORDE'
ol=OrderDetail()
ol.lineid=1 # exists in database
o.orderdetails=[ol]
mo=DBSession.merge(o)

mo.orderdetails[0] in DBSession.new

mo.orderdetails[0].saleprice = 65

DBSession.flush()

(output pasted below)=

I get output that is not what I hoped for: because merge() is not
aware of allow_null_pks with composite keys,
mo.orderdetails[0] in DBSession.new == True.

This makes validation, etc., troublesome for me, since I was
inspecting DBSession.new to indicate whether the record exists in the
database.  The flush() works it out correctly in the end and sqla does
an update instead of insert, but inspecting DBSession.new is incorrect
semantically.

If you make merge() aware of allow_partial_pks in 0.6, will
mo.orderdetails[0] in DBSession.new == False then?

(In a previous post using merge() with composite key, you mentioned
this:

Your assessment of the issue is correct, in that the reconciliation of
l1/l2 orderid does not occur within merge so it remains None.   This
behavior is not intentional, except to the degree that merge() was not
intended to run through the dependency rules which occur during a flush,
instead expecting to receive objects with fully composed primary keys.
It's not immediately apparent to me what degree of rearchitecture of the
unit of work would be required for this behavior to be added, or if it
is even a good idea.  I understand the argument in favor.  That doesn't
mean there aren't arguments in opposition, just that they aren't
immediately obvious.

see 
http://groups.google.com/group/sqlalchemy/browse_thread/thread/20b199b4f78e7cad)

So I am wondering now if this is the same issue and will it be changed
(fixed) in 0.6?

If so and in the meantime, is there a workaround I could apply to
merge() or is it not very straightforward?

(I can also hack up my userland code to workaround this in the
meantime, but ultimately wanted to know whether this will all be
solved and if there is an easy patch I could apply until then...)


Pasted output to the above script:

 o=Order()
 o.orderid = 'KBORDE'
 ol=OrderDetail()
 ol.lineid=1
 o.orderdetails=[ol]
 mo=DBSession.merge(o)
10:09:22,607 INFO  [sqlalchemy.engine.base.Engine.0x...2190] BEGIN
/home/rarch/tg2env/lib/python2.6/site-packages/SQLAlchemy-0.5.8.01-
py2.6.egg/sqlalchemy/engine/default.py:242: SAWarning: Unicode type
received non-unicode bind param value 'KBORDE'
  param[key.encode(encoding)] = processors[key](compiled_params[key])
10:09:22,617 INFO  [sqlalchemy.engine.base.Engine.0x...2190] SELECT
orders.orderid AS orders_orderid, orders.type AS orders_type,
orders.orderdate AS orders_orderdate, orders.status AS orders_status,
orders.customerid AS orders_customerid, orders.ordersite AS
orders_ordersite, orders.salesperson1 AS orders_salesperson1,
orders.commisspercent1 AS orders_commisspercent1, orders.deliverytype
AS orders_deliverytype, orders.deliverydate AS orders_deliverydate,
orders.mainorder AS orders_mainorder, orders.sequence AS
orders_sequence, orders.massfinalize AS orders_massfinalize, (SELECT
sum(od__a.qtyordered * od__a.saleprice) AS sum_1
FROM orderdetails AS od__a
WHERE orders.orderid = od__a.orderid) AS totalsale,
products_1.productid AS products_1_productid, products_1.brand AS
products_1_brand, products_1.description AS products_1_description,
products_1.regular AS products_1_regular, products_1.sale AS
products_1_sale, products_1.onhand AS products_1_onhand,
products_1.onorder AS products_1_onorder, products_1.imageurl AS
products_1_imageurl, products_1.special AS products_1_special,
products_1.featured AS products_1_featured, products_1.newproduct AS
products_1_newproduct, orderdetails_1.orderid AS
orderdetails_1_orderid, orderdetails_1.lineid AS
orderdetails_1_lineid, orderdetails_1.productid AS
orderdetails_1_productid, orderdetails_1.qtyordered AS
orderdetails_1_qtyordered, orderdetails_1.saleprice AS
orderdetails_1_saleprice, orderdetails_1.voided AS
orderdetails_1_voided, orderdetails_1.commissiontype AS
orderdetails_1_commissiontype, orderdetails_1.mainorder AS
orderdetails_1_mainorder, orderdetails_1.picked AS
orderdetails_1_picked, customers_1.customerid AS
customers_1_customerid, customers_1.phonenumber AS
customers_1_phonenumber, customers_1.firstname AS
customers_1_firstname, customers_1.lastname AS customers_1_lastname,
customers_1.address1 AS customers_1_address1, customers_1.address2 AS
customers_1_address2, customers_1.city AS customers_1_city,
customers_1.state AS customers_1_state, customers_1.zip AS
customers_1_zip, customers_1.email AS customers_1_email,
customers_1.type AS customers_1_type, customers_1.accountopendate AS
customers_1_accountopendate
FROM orders LEFT OUTER JOIN orderdetails AS orderdetails_1 ON
orders.orderid = orderdetails_1.orderid JOIN products AS products_1 ON
products_1.productid = orderdetails_1.productid LEFT OUTER JOIN
customers AS customers_1 ON 

[sqlalchemy] Re: pairing merged object with the original

2010-02-10 Thread Kent


On Feb 10, 6:59 pm, Michael Bayer mike...@zzzcomputing.com wrote:
 On Feb 10, 2010, at 4:36 PM, Kent wrote:

  Further, if I inspect the returned object *directly* after the call to
  merge(), then aren't I guaranteed that any Relations with use_list=True
  will have the same length, since that is the point of merge in
  the first place?

 you can assume the lengths are the same.   I'm not sure though why you aren't 
 using the attributes.get_history() function I showed you, which would allow 
 you to see anything that changed directly.   It seems a lot simpler than what 
 you're trying to do.



I am using this handy function, actually, and I could explain why, in
this case, I need more, but that is really a whole other discussion (in
short, the original object has some plain python attributes I need to
read which merge() does not copy for me because they are not mapper
properties, so I need a reference to the original object)



  That being the case, I can always simply match each merged instance by
  index with its original instance, correct (regardless of whether it is a
  newly created object or was get()'ed from the database)?

  Correct?

 Yeah, looking at the source it's actually wholesale replacing the list on the 
 target object, so it's a direct copy.  I had the notion that it was appending 
 to the list but I was incorrect.



Thanks.
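The index pairing discussed above can be sketched without SQLAlchemy at all: since the merged list preserves the order of the original list, zip() matches each merged copy with its source. Here SimpleNamespace objects and the `note` attribute are illustrative stand-ins, not SQLAlchemy API:

```python
from types import SimpleNamespace

# merge() copies only mapped attributes, so a plain Python attribute
# such as `note` must be carried across by hand.  Because the merged
# list mirrors the original list's order, zip() pairs each merged copy
# with the instance it came from.
originals = [SimpleNamespace(lineid=i, note="plain attr %d" % i) for i in (1, 2)]
merged = [SimpleNamespace(lineid=i) for i in (1, 2)]   # what merge() hands back

for source, copy in zip(originals, merged):
    copy.note = source.note   # carry the unmapped attribute across by index
```

The same loop works for real mapped instances as long as the relation collections are lists, per the discussion above.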


  On Feb 10, 4:28 pm, Kent k...@retailarchitects.com wrote:
  Very good, thanks.

  Although, I'm pretty sure I understand what you are saying, what
  exactly do you mean by pending/transients?

  On Feb 10, 4:13 pm, Michael Bayer mike...@zzzcomputing.com wrote:

  On Feb 10, 2010, at 3:52 PM, Kent wrote:

  If I understand you correctly, you are saying
  object.list[0] will always cause creation (or fetch) of merged.list[0]
  object.list[1] will always cause creation (or fetch) of merged.list[1]
  etc.

  There may be also more merged.list[2], [3], etc...

  Correct?

  This is the merge code 0.5.8:

        if self.uselist:
            dest_list = []
            for current in instances:
                _recursive[(current, self)] = True
                obj = session._merge(current, dont_load=dont_load,
                                     _recursive=_recursive)
                if obj is not None:
                    dest_list.append(obj)
            if dont_load:
                coll = attributes.init_collection(dest_state, self.key)
                for c in dest_list:
                    coll.append_without_event(c)
            else:
                getattr(dest.__class__, self.key).impl._set_iterable(
                    dest_state, dest_dict, dest_list)

  Can I rely on this implementation remaining ordered (deterministic), even
  if it is re-written for optimization purposes or something?

  as long as you're using lists for your relations' collection 
  implementations there's no reason the order of pending/transients would 
  change.  The objects coming back from the DB are not deterministic unless 
  you add order_by to your relation, but thats why i said process those 
  separately.

  Also, I see that if obj is None, then dest_list.append() won't be
  called, which would mess up my indexes.  I am wondering: is there a
  more reliable mechanism?  Under what circumstances will obj be None?

  There's no codepath I can see where that can be None and there's no test 
  that generates a None at that point, I'm not really sure why that check 
  is there.   I'd want to dig back to find its origins before removing it 
  but _merge() pretty explicitly doesn't return None these days.

  On Feb 10, 3:30 pm, Michael Bayer mike...@zzzcomputing.com wrote:
  On Feb 10, 2010, at 2:49 PM, Kent wrote:

  After merge() returns, is there a way for me to pair each object in
  the returned merge_obj with the object it was created from?

  For example:
  merged_obj = session.merge(object)

  At the top level, it is trivial, merged_obj was created because of the
  instance object

  For single RelationProperties under the top level, it is fairly
  simple, too.

  That is:

  merged.childattr was merged from object.childattr

  Where it falls apart I think is if the RelationProperty.use_list ==
  True

  merged.list came from object.list, but is there a way for me to
  reference the original objects inside the list.

  Did merged.list[0] come from object.list[0] or object.list[1] or
  object_list[2]?

  I particularly can't use the pk because it won't always be set (often
  this will be a new record)

  Any suggestions?

  the ordering of those lists (assuming they are lists and not sets) are 
  deterministic, especially with regards to the pending objects that have 
  been added as a result of your merge (i.e. the ones that wont have 
  complete primary keys).   I would match them up based on comparison of 
  the list of instances that are transient/pending.


Re: [sqlalchemy] Warnings take a really long time / NotImplementedError

2010-02-10 Thread Michael Bayer

On Feb 10, 2010, at 6:05 PM, Jeff Peterson wrote:

  
 It’s touching a ton of tables, hundreds… if I had to guess I’d say every table 
 in the schema.  The reasons for this are unknown to me; certainly all those 
 tables are not related specifically to the single view I am attempting to 
 reflect. 

that shouldn't be possible on a single-table reflect, if it's a view.  Views 
have no foreign key metadata, so it would have no reason to go anywhere else.   
You'd have to provide more specifics in order for us to see how that might be 
reproduced.


  
 The other thing I am noticing is the code in the lib does this:
  
 reflectTables(metadata)
 createTables(metadata)
 notify(DatabaseSetupEvent(metadata))
 
 reflectTables does, among other things, a metadata.reflect(bind=engine)
  
 createTables looks like:
  
 def createTables(metadata):
     """Create class-specified tables."""

     engine = Engine()
     metadata.create_all(engine)
  
  
 So, the metadata has (or could have) both reflected tables/views and tables 
 needing to be created, which would be fine assuming everything reflected is 
 actually a table in the DB, which we know in this case it isn't: it's a view, 
 and so create_all() tried to create it.  So the issue is: what's the best 
 solution for stopping this?  Should the reflected views simply be removed 
 from the metadata after reflection?  Or is there some simpler answer?

as I said, it makes no sense at all for your web app to be calling create_all() 
when it starts up.   create_all() should only be used when you are initially 
configuring your database or migrating its schema, things that should not 
happen implicitly.

If you must call create_all(), use a separate MetaData object for your 
reflection step.   It's not a good idea to attempt to issue CREATE TABLE for 
all the tables that you've just reflected.






  
 TIA,
  




Re: [sqlalchemy] Collation and Column()

2010-02-10 Thread Michael Bayer

On Feb 10, 2010, at 5:04 PM, Christoph Burgmer wrote:

 Having collations on a per-column basis in MySQL and SQLite, I'd like
 to specify the collation when creating tables/views. I have been
 trying Google and the source for some hours now, and it seems there's just
 no way to handle it easily with SA.
 MySQL has support via its private VARCHAR type, but SQLite has nothing similar.
 Generic support is absent.
 
 Could a simple CP of this functionality give SQLite similar support?
 I'd file a patch then.
 
 Is there any other way to select a collation via SA, either globally,
 or via text() or similar?
 
 I'd hate to completely rewrite the CREATE TABLE statements.

if SQLite supports a COLLATE syntax, then sure we can accept patches for 0.6 
/trunk.

If you want instant gratification on sqlite just build yourself a 
UserDefinedType for now (again 0.6/trunk).
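For reference, SQLite does accept a COLLATE clause directly in a column definition. A quick standard-library check, independent of SQLAlchemy and using a made-up table, shows the DDL that a CREATE TABLE patch (or a UserDefinedType's get_col_spec()) would need to emit:

```python
import sqlite3

# SQLite accepts COLLATE inline in the column definition.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (name VARCHAR(20) COLLATE NOCASE)")
conn.execute("INSERT INTO t VALUES ('Abc')")

# NOCASE makes equality comparisons on the column case-insensitive,
# so searching for 'ABC' finds the row stored as 'Abc'.
row = conn.execute("SELECT name FROM t WHERE name = 'ABC'").fetchone()
```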




 
 -Christoph
 
 




Re: [sqlalchemy] simplest pattern to query a database

2010-02-10 Thread Michael Bayer

On Feb 10, 2010, at 5:24 PM, Tarek Ziadé wrote:

 Hi,
 
 I am using this pattern to execute a simple query on a database:
 
 def execute(sqluri, query):
engine = create_engine(sqluri)
connection = engine.connect()
try:
connection.execute(query)
finally:
connection.close()
 
 I was wondering if this was the best pattern, or if something simpler exists..

engine.execute(query) is certainly simpler.    But don't call create_engine() 
many times in one application for the same URL; that's like configuring a log 
handler every time you need to log something.
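The create-once, reuse-everywhere pattern can be sketched with the stdlib sqlite3 module standing in for an Engine (this is not SQLAlchemy API; the per-URL caching is the point):

```python
import functools
import sqlite3

@functools.lru_cache(maxsize=None)
def get_connection(db_uri):
    # Created once per URI and reused afterwards -- the analogue of
    # keeping a single module-level Engine per database URL instead of
    # calling create_engine() on every query.
    return sqlite3.connect(db_uri)

def execute(db_uri, query):
    return get_connection(db_uri).execute(query).fetchall()
```

Repeated calls with the same URI hit the cache, so the expensive setup happens exactly once.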




 
 
 Regards
 Tarek
 
 -- 
 Tarek Ziadé | http://ziade.org
 
 




Re: [sqlalchemy] Warning: Incorrect datetime value

2010-02-10 Thread Michael Bayer

On Feb 10, 2010, at 6:52 PM, Bobby Impollonia wrote:

 I am constructing queries involving MSTimeStamp fields in MySQL and am
 receiving "Incorrect datetime value" warnings even in
 situations where my queries are valid in MySQL.
 
 How do I modify the following query so that sqlalchemy will accept it
 without warning:
 session.query(MyClass.id).filter(MyClass.timestamp > (func.now() -
 2)).first()
 ?
 
 The generated sql is
 SELECT my_table.id AS my_table_id
 FROM my_table
 WHERE my_table.timestamp > now() - %s
 LIMIT 0, 1
 
 The warning I get is
 /usr/lib/pymodules/python2.5/sqlalchemy/engine/default.py:123:
 Warning: Incorrect datetime value: '20100209953011.00' for column
 'timestamp' at row 1
  cursor.execute(statement, parameters)
 
 Why is sqlalchemy even seeing the value '20100209953011.00', which
 is the outcome of the now() - 2 operation inside the query?
 
 I tried changing
 (func.now() -  2)
 to
 func.timestamp (func.now() -  2)
 but I still get the same warning.
 
 If I change 2 to 200, I don't get warned. Why would this be?
 
 It works with no warning if I change it to
 session.query(MyClass.id).filter(func.now() - MyClass.timestamp  
 2).first()
 , but that is not an acceptable solution because now MySQL can't use
 my index on MyClass.timestamp.

That warning is not generated by SQLAlchemy, so it is not seeing anything.  
MySQL-python is generating that warning.   You'd have to consult the DBAPI's 
mailing list for help on this one, since SQLA just passes your bound values 
straight through - I'm not familiar with what that 2 might mean in MySQL 
parlance; 2 seconds?  No clue.
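One way to sidestep the coercion, assuming the intent is "rows newer than two seconds ago" (the thread never confirms what the literal 2 means), is to compute the cutoff as a real datetime on the Python side and bind that, which also keeps the indexed column alone on the left of the comparison:

```python
from datetime import datetime, timedelta

# MySQL evaluates now() - 2 numerically (the datetime flattened to a
# number, minus 2), and the result need not be a valid datetime -- which
# is how a value like 20100209953011.000000 ends up in the warning.
# Binding a real timestamp avoids the coercion entirely:
cutoff = datetime.now() - timedelta(seconds=2)

# hypothetical usage mirroring the query in the post:
# session.query(MyClass.id).filter(MyClass.timestamp > cutoff).first()
```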





 
 I am using SQLA 0.5.5
 
 




[sqlalchemy] backref questions

2010-02-10 Thread David Ressman
Hello,

First of all, I'll apologize if this is a really basic question. I've not been 
using SQLAlchemy for long, and I've only very recently picked up Python. Even 
though I've looked everywhere I can think of for an answer to this question, 
I'm almost certain that it is not a difficult one. I'm happy to read through 
any documentation you can point me to, but I've not been able to see what might 
be relevant to this particular question.

That said, here goes. Here's the relevant bit of code:

-- BEGIN CODE --

Base = declarative_base()

class User(Base):
    __tablename__ = 'user'

    id = Column(Integer, primary_key=True)
    uid = Column(Integer, unique=True)

    def __init__(self, uid):
        self.uid = uid

    def __repr__(self):
        return "User('uid: %d')" % self.uid


class Filesystem(Base):
    __tablename__ = 'filesystem'

    id = Column(Integer, primary_key=True)
    filesystem = Column(String(255))
    mountpoint = Column(String(255))

    def __init__(self, filesystem, mountpoint):
        self.filesystem = filesystem
        self.mountpoint = mountpoint

    def __repr__(self):
        return "Filesystem('%s', '%s')" % (self.filesystem, self.mountpoint)


class Usage(Base):
    __tablename__ = 'usage_data'

    id       = Column(Integer, primary_key=True)
    fs_id    = Column(None, ForeignKey('filesystem.id'))
    user_id  = Column(None, ForeignKey('user.id'))
    datetime = Column(DateTime)
    inodes   = Column(MSBigInteger, nullable=False)
    kbytes   = Column(MSBigInteger, nullable=False)

    user = relation(User, backref=backref('usage_data', order_by=datetime))
    fs = relation(Filesystem, backref=backref('usage_data', order_by=datetime))

    def __init__(self, user, fs, datetime, inodes, kbytes):
        self.user = user
        self.fs = fs
        self.datetime = datetime
        self.inodes = inodes
        self.kbytes = kbytes

    def __repr__(self):
        return "Usage('%s', '%s', '%s', %d:%d KB)" % (self.user, self.fs,
                self.datetime, self.inodes, self.kbytes)

-- END CODE --

So you can kind of see what it does. This bit is a rudimentary filesystem space 
tracking application. You have a list of users in the 'user' table, a list of 
filesystems in the 'filesystem' table, and then you have an application that is 
periodically inserting records into the 'usage_data' table. Each record 
includes a user reference, a filesystem reference, and some data.

The backrefs from Usage objects work perfectly. When I have a User object, say 
user_obj, I can access user_obj.usage_data and get a datetime-ordered list of 
all Usage objects associated with this particular user.

What I want is an easy way to access a user's n most recent Usage objects for 
each filesystem. (For the purpose of this e-mail, we can take n=1.) It would be 
easy enough for me to just take the last m records in the user_obj.usage_data 
list (where m is the number of filesystems for which this user has records), 
but that's not really what I want. One filesystem might be storing records 
every hour, and another might be storing them every day. In that case, it would 
be hard to know how many records I would need to take from user_obj.usage_data 
to have the most recent record from each filesystem.

It wouldn't be hard to actually use a session object to build a query for this, 
but I'd really like to have this all taken care of in the 
objects/maps/relations/whatever themselves. I'd like to have some attribute in 
the User object that's like user_obj.usage_data, but instead gives me a list of 
only the most recent Usage object from each filesystem, so I'd see something 
like:

 user_obj.most_recent_usage
  [Usage('user', 'fs1', 'some-date', foo:bar),
   Usage('user', 'fs2', 'some-other-date', f0o:bAr),
   Usage('user', 'fs3', 'some-third-date', fo0:b4r)]

I hope I was clear in my description. If I've left anything out, I'll be happy 
to clarify.

Thanks!!

David







Re: [sqlalchemy] backref questions

2010-02-10 Thread Michael Trier
Hi,

On Feb 10, 2010, at 6:54 PM, David Ressman wrote:

 Hello,
 
 First of all, I'll apologize if this is a really basic question. I've not 
 been using SQLAlchemy for long, and I've only very recently picked up Python. 
 Even though I've looked everywhere I can think of for an answer to this 
 question, I'm almost certain that it is not a difficult one. I'm happy to 
 read through any documentation you can point me to, but I've not been able to 
 see what might be relevant to this particular question.
 ...

 What I want is an easy way to access a user's n most recent Usage objects for 
 each filesystem. (For the purpose of this e-mail, we can take n=1.) It would 
 be easy enough for me to just take the last m records in the 
 user_obj.usage_data list (where m is the number of filesystems for which this 
 user has records), but that's not really what I want. One filesystem might be 
 storing records every hour, and another might be storing them every day. In 
 that case, it would be hard to know how many records I would need to take 
 from user_obj.usage_data to have the most recent record from each filesystem.
 
 It wouldn't be hard to actually use a session object to build a query for 
 this, but I'd really like to have this all taken care of in the 
 objects/maps/relations/whatever themselves. I'd like to have some attribute 
 in the User object that's like user_obj.usage_data, but instead gives me a 
 list of only the most recent Usage object from each filesystem, so I'd see 
 something like:
 
 user_obj.most_recent_usage
  [Usage('user', 'fs1', 'some-date', foo:bar),
   Usage('user', 'fs2', 'some-other-date', f0o:bAr),
   Usage('user', 'fs3', 'some-third-date', fo0:b4r)]
 
 I hope I was clear in my description. If I've left anything out, I'll be 
 happy to clarify.

What you likely want to dig into is Query enabled properties 
(http://www.sqlalchemy.org/docs/mappers.html?highlight=property%20association#building-query-enabled-properties).
  I used this a lot of times to tie what appears to be a relationship but that 
is driven by an underlying query.  I don't have time right now to whip 
up the action property, but if you play with it a bit I'm sure you'll be able 
to get there.

Michael




Re: [sqlalchemy] backref questions

2010-02-10 Thread David Ressman
On Feb 10, 2010, at 7:38 PM, Michael Trier wrote:
 What you likely want to dig into is Query enabled properties 
 (http://www.sqlalchemy.org/docs/mappers.html?highlight=property%20association#building-query-enabled-properties).
   I used this a lot of times to tie what appears to be a relationship but 
 that is driven by an underlying   query.  I don't have time right now to 
 whip up the action property, but if you play with it a bit I'm sure you'll be 
 able to get there.


Perfect! I added these to my User object, and it works great:

def _recent_usage(self, last_num_items=1):
    session = object_session(self)

    return_list = []

    fs_list = session.query(Filesystem).all()
    for filesystem in fs_list:
        usage_objs = session.query(Usage).\
                filter_by(user=self).\
                filter_by(fs=filesystem).\
                order_by(desc(Usage.datetime)).\
                limit(last_num_items).\
                all()

        for usage in usage_objs:
            return_list.append(usage)

    return return_list

last_usage_objects = property(_recent_usage)

Thanks very much for your help!

David
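The property above issues one query per filesystem. A single-statement alternative can be sketched with the stdlib, relying on SQLite's documented rule that, with a lone MAX() aggregate, bare columns are taken from the row holding the maximum; other databases need a correlated subquery or window function, and the schema here is a simplified stand-in:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE usage_data (fs TEXT, dt TEXT, kbytes INTEGER);
    INSERT INTO usage_data VALUES
        ('fs1', '2010-02-01', 10),
        ('fs1', '2010-02-10', 20),
        ('fs2', '2010-02-05', 5);
""")

# With a single MAX() aggregate, SQLite returns the bare columns from
# the row holding the maximum dt -- i.e. the most recent row per group.
rows = conn.execute("""
    SELECT fs, MAX(dt) AS dt, kbytes
    FROM usage_data
    GROUP BY fs
    ORDER BY fs
""").fetchall()
```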




Re: [sqlalchemy] Re: SQLAlchemy goes back to database when I wouldn't expect it to...

2010-02-10 Thread Michael Bayer

On Feb 10, 2010, at 7:10 PM, Kent wrote:

 
 When I do something simple like this script:
 
 o=Order()
 o.orderid = 'KBORDE'
 ol=OrderDetail()
 ol.lineid=1 # exists in database
 o.orderdetails=[ol]
 mo=DBSession.merge(o)
 
 mo.orderdetails[0] in DBSession.new
 
 mo.orderdetails[0].saleprice = 65
 
 DBSession.flush()
 
 (output pasted below)=
 
 I get output that is not what I hoped for: because merge() is not
 aware of allow_null_pks with composite keys,
 mo.orderdetails[0] in DBSession.new == True.
 
 This makes validation, etc., troublesome for me, since I was
 inspecting DBSession.new to indicate whether the record exists in the
 database.  The flush() works it out correctly in the end and sqla does
 an update instead of insert, but inspecting DBSession.new is incorrect
 semantically.

um, if I understand correctly, OrderDetail's pk is the combination of orderid 
and lineid; you may have to set orderid on your OrderDetail object before 
merging it.   The merge process currently does not populate foreign key 
columns before testing for the primary key.



 
 If you make merge() aware of allow_partial_pks in 0.6, will
 mo.orderdetails[0] in DBSession.new == False then?
 
 (In a previous post using merge() with composite key, you mentioned
 this:
 
 Your assessment of the issue is correct, in that the reconciliation of
 l1/l2 orderid does not occur within merge so it remains None.   This
 behavior is not intentional, except to the degree that merge() was not
 intended to run through the dependency rules which occur during a flush,
 instead expecting to receive objects with fully composed primary keys.
 It's not immediately apparent to me what degree of rearchitecture of the
 unit of work would be required for this behavior to be added, or if it
 is even a good idea.  I understand the argument in favor.  That doesn't
 mean there aren't arguments in opposition, just that they aren't
 immediately obvious.
 
 see 
 http://groups.google.com/group/sqlalchemy/browse_thread/thread/20b199b4f78e7cad)
 
 So I am wondering now if this is the same issue and will it be changed
 (fixed) in 0.6?
 
 If so and in the meantime, is there a workaround I could apply to
 merge() or is it not very straightforward?
 
 (I can also hack up my userland code to workaround this in the
 meantime, but ultimately wanted to know whether this will all be
 solved and if there is an easy patch I could apply until then...)
 
 
 Pasted output to the above script:
 
 o=Order()
 o.orderid = 'KBORDE'
 ol=OrderDetail()
 ol.lineid=1
 o.orderdetails=[ol]
 mo=DBSession.merge(o)
 10:09:22,607 INFO  [sqlalchemy.engine.base.Engine.0x...2190] BEGIN
 /home/rarch/tg2env/lib/python2.6/site-packages/SQLAlchemy-0.5.8.01-
 py2.6.egg/sqlalchemy/engine/default.py:242: SAWarning: Unicode type
 received non-unicode bind param value 'KBORDE'
  param[key.encode(encoding)] = processors[key](compiled_params[key])
 10:09:22,617 INFO  [sqlalchemy.engine.base.Engine.0x...2190] SELECT
 orders.orderid AS orders_orderid, orders.type AS orders_type,
 orders.orderdate AS orders_orderdate, orders.status AS orders_status,
 orders.customerid AS orders_customerid, orders.ordersite AS
 orders_ordersite, orders.salesperson1 AS orders_salesperson1,
 orders.commisspercent1 AS orders_commisspercent1, orders.deliverytype
 AS orders_deliverytype, orders.deliverydate AS orders_deliverydate,
 orders.mainorder AS orders_mainorder, orders.sequence AS
 orders_sequence, orders.massfinalize AS orders_massfinalize, (SELECT
 sum(od__a.qtyordered * od__a.saleprice) AS sum_1
 FROM orderdetails AS od__a
 WHERE orders.orderid = od__a.orderid) AS totalsale,
 products_1.productid AS products_1_productid, products_1.brand AS
 products_1_brand, products_1.description AS products_1_description,
 products_1.regular AS products_1_regular, products_1.sale AS
 products_1_sale, products_1.onhand AS products_1_onhand,
 products_1.onorder AS products_1_onorder, products_1.imageurl AS
 products_1_imageurl, products_1.special AS products_1_special,
 products_1.featured AS products_1_featured, products_1.newproduct AS
 products_1_newproduct, orderdetails_1.orderid AS
 orderdetails_1_orderid, orderdetails_1.lineid AS
 orderdetails_1_lineid, orderdetails_1.productid AS
 orderdetails_1_productid, orderdetails_1.qtyordered AS
 orderdetails_1_qtyordered, orderdetails_1.saleprice AS
 orderdetails_1_saleprice, orderdetails_1.voided AS
 orderdetails_1_voided, orderdetails_1.commissiontype AS
 orderdetails_1_commissiontype, orderdetails_1.mainorder AS
 orderdetails_1_mainorder, orderdetails_1.picked AS
 orderdetails_1_picked, customers_1.customerid AS
 customers_1_customerid, customers_1.phonenumber AS
 customers_1_phonenumber, customers_1.firstname AS
 customers_1_firstname, customers_1.lastname AS customers_1_lastname,
 customers_1.address1 AS customers_1_address1, customers_1.address2 AS
 customers_1_address2, customers_1.city AS customers_1_city,
 customers_1.state AS 

[sqlalchemy] correlate on Query in 0.4

2010-02-10 Thread Kalium
Hi, 0.4 doesn't seem to support correlate() on a Query object. I have
a subquery (which is actually just another Query object), so at no
point can I actually call correlate(). Are there any good options other
than upgrading to 0.5 or dropping down to a select()?

Cheers
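
For reference, the select()-based route the poster mentions looks roughly like this. The sketch uses the modern positional select(...) spelling and scalar_subquery() — in 0.4 the equivalent is written as select([...]) with .correlate() taking the same role — and the table and column names are invented for illustration:

```python
from sqlalchemy import (create_engine, MetaData, Table, Column, Integer,
                        String, ForeignKey, select, func)

metadata = MetaData()
orders = Table("orders", metadata,
               Column("orderid", String(10), primary_key=True))
details = Table("orderdetails", metadata,
                Column("orderid", String(10),
                       ForeignKey("orders.orderid"), primary_key=True),
                Column("lineid", Integer, primary_key=True))

engine = create_engine("sqlite://")
metadata.create_all(engine)
with engine.begin() as conn:
    conn.execute(orders.insert(), [{"orderid": "A"}, {"orderid": "B"}])
    conn.execute(details.insert(), [
        {"orderid": "A", "lineid": 1},
        {"orderid": "A", "lineid": 2},
        {"orderid": "B", "lineid": 1},
    ])

# Correlated scalar subquery counting details per order.  correlate()
# tells the subquery to omit `orders` from its own FROM clause and
# refer to the enclosing query's `orders` instead.
subq = (
    select(func.count(details.c.lineid))
    .where(details.c.orderid == orders.c.orderid)
    .correlate(orders)
    .scalar_subquery()
)

with engine.connect() as conn:
    rows = conn.execute(
        select(orders.c.orderid, subq.label("n_details"))
        .order_by(orders.c.orderid)
    ).all()
```

Without the correlate() call, SQLAlchemy would auto-correlate here anyway, but being explicit matters once the subquery mentions several tables.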

-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalch...@googlegroups.com.
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en.