[sqlalchemy] sqlalchemy limit by clause

2007-04-17 Thread Disrupt07

I am defining my result list the following way:

result = queryobject.select(mytable.c.columnname == columnname,
                            limit=maxResults, offset=0)

maxResults is the number of records I want returned.  When I want all
the records, I set maxResults to -1.  However, this is not working:
nothing is returned.  How can I set up the query so that a limit of -1
means return all records?

Thanks





[sqlalchemy] Re: sqlalchemy limit by clause

2007-04-17 Thread svilen

The default for limit is None.  All of those query.select() arguments
go into an orm.query.QueryContext(); see its __init__ for each
argument's default.
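
So rather than -1, pass limit=None when you want everything.  A minimal
sketch of that translation, assuming the 0.3-era query.select()
signature from the original post:

    # translate the app-level "-1 means all records" convention into
    # the None that query.select() treats as "no limit"
    limit = maxResults
    if limit == -1:
        limit = None
    result = queryobject.select(mytable.c.columnname == columnname,
                                limit=limit, offset=0)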

On Tuesday 17 April 2007 09:31:04 Disrupt07 wrote:
> I am defining my result list the following way:
>
> result = queryobject.select(mytable.c.columnname == columnname,
>                             limit=maxResults, offset=0)
>
> maxResults is the number of records I want returned.  When I want
> all the records, I set maxResults to -1.  However, this is not
> working: nothing is returned.  How can I set up the query so that a
> limit of -1 means return all records?
>
> Thanks




[sqlalchemy] Re: splitting a relation into multiple properties

2007-04-17 Thread Michael Bayer
attached is an approach that uses just the tiniest amount of
awareness of how a mapper works, to do the whole optimized-loading
scenario ahead of time, and you get an object with your two distinct
"pro" and "con" relationships, no list decoration or anything
needed.  the appropriateness of this approach comes from the fact
that your optimization case is a load-time optimization, therefore
put the complexity at load time (which is easier to deal with, since
it doesn't have to be concerned with collection mutability).



from sqlalchemy import *

class SomeClass(object): pass

class Position(object):
def __init__(self, data, type):
self.data = data
self.type = type

metadata = BoundMetaData('sqlite://', echo=True)

t1 = Table('t1', metadata, 
Column('id', Integer, primary_key=True),
Column('data', String(30)))

t2 = Table('t2', metadata, 
Column('id', Integer, primary_key=True),
Column('parent_id', Integer, ForeignKey('t1.id')),
Column('data', String(30)),
Column('type', String(20))
)
metadata.create_all()

class MyExt(MapperExtension):
def populate_instance(self, mapper, context, row, instance, identitykey, isnew):
# tell the mapper to populate the instance like it always does
mapper.populate_instance(context, instance, row, identitykey, isnew)

# get the row decorator that will translate the eager loaded row to something
# based off the columns in t2
decorator = context.attributes[id(mapper.props['_positions'].strategy)]
row = decorator(row)

# determine pro or con - create instance and append to either collection
if row[t2.c.type] == 'pro':
instance.pro.append(class_mapper(Position)._instance(context, row, None))
elif row[t2.c.type] == 'con':
instance.con.append(class_mapper(Position)._instance(context, row, None))

# alternatively!  get the instance that was just appended to _positions
# if _positions[-1].type == 'pro':
#  instance.pro.append(_positions[-1])
#  ...etc

# tell the calling mapper populate_instance is taken care of
return None

mapper(SomeClass, t1, extension=MyExt(), properties={
'_positions':relation(Position, lazy=False, viewonly=True),
'pro':relation(Position, lazy=None, primaryjoin=and_(t1.c.id==t2.c.parent_id,t2.c.type=='pro')),
'con':relation(Position, lazy=None, primaryjoin=and_(t1.c.id==t2.c.parent_id,t2.c.type=='con'))
})
mapper(Position, t2)

sess = create_session()
sc = SomeClass()
sc.pro.append(Position('tastes great', 'pro'))
sc.pro.append(Position('less filling', 'pro'))
sc.con.append(Position('beer sucks', 'con'))
sess.save(sc)
sess.flush()
sess.clear()

sc = sess.query(SomeClass).get(sc.id)
assert [x.data for x in sc.pro] == ['tastes great', 'less filling']
assert [x.data for x in sc.con] == ['beer sucks']

On Apr 16, 2007, at 8:00 PM, jason kirtland wrote:

> Michael wrote:
>> On Apr 12, 2007, at 2:03 PM, jason kirtland wrote:
>>> [...]
>>> This does work, but because relation updates are happening
>>> outside of the InstrumentedList (i.e. not on 'analysis'
>>> directly), I'm losing the events that would normally be
>>> triggered.  I don't think I can manually manage them either, as
>>> they're private __methods on InstrumentedList.
>>
>> some ideas which I'm not sure if they'd work: one is to not use
>> collection_class and to go with an approach that is more like the
>> AssociationProxy - i.e. pro and con have special collections on
>> them which proxy to the ultimate associations collection, but on
>> top of the InstrumentedList instead of underneath it the way
>> collection_class does.
>
> I've got a new implementation that uses this approach and it does
> work - but it is even more complex.  If the InstrumentedCollection
> refactor (#213) allows implementors to choose when to fire add/del
> events, I think this sort of pattern can be made pretty simple.
>
> On that front, I think it would be super useful if future
> InstrumentedCollection classes have access to their relation()
> arguments - e.g. a DRY ordered collection that keys off the
> relation's order_by.  A chance to error out at definition time
> would be useful too.
>
> -jek


[sqlalchemy] Re: [PATCH] Filtered one_to_many relationships (Experimental)

2007-04-17 Thread Gaetan de Menten

On 4/17/07, Michael Bayer [EMAIL PROTECTED] wrote:

> On Apr 10, 2007, at 10:22 AM, Gaetan de Menten wrote:
>
>> By the way, should I create a ticket for this?
>
> I've created ticket #541 for this,

I had already created ticket #531 for this. Sorry for not mentioning
it here (I thought you'd see it). Anyway, it can be closed too now :).

> and implemented a slightly refined
> version of the patch you provided, including the classmethod as well
> as a generative method + unit tests for both.  I added docs and
> examples for the generative version, so you can see the use cases I
> was getting at.  It will throw an error if you give it input that
> makes no sense (i.e. two classes that are unrelated).
>
> the pattern looks like (assuming the user-address paradigm):
>
> # without specifying the property
> session.query(Address).with_parent(someuser).list()
>
> # specifying the property
> session.query(Address).with_parent(someuser,
>     property='addresses').filter_by(email_address='[EMAIL PROTECTED]').list()

That's just great! I was going to do it eventually but it seems like
you beat me to it ;-).
Thanks a lot.
-- 
Gaëtan de Menten
http://openhex.org




[sqlalchemy] Re: [PATCH] Filtered one_to_many relationships (Experimental)

2007-04-17 Thread Michael Bayer


On Apr 17, 2007, at 2:49 PM, Gaetan de Menten wrote:

> I had already created ticket #531 for this. Sorry for not mentioning
> it here (I thought you'd see it). Anyway, it can be closed too now :).

oh crap, sorry - I did a quick search through trac for "filtered" and
nothing came up.  yeah, I'm starting to lose track of tickets now...




[sqlalchemy] Re: splitting a relation into multiple properties

2007-04-17 Thread jason kirtland

Michael wrote:
> attached is an approach that uses just the tiniest amount of
> awareness of how a mapper works, to do the whole optimized-loading
> scenario ahead of time, and you get an object with your two distinct
> "pro" and "con" relationships, no list decoration or anything
> needed.  the appropriateness of this approach comes from the fact
> that your optimization case is a load-time optimization, therefore
> put the complexity at load time (which is easier to deal with, since
> it doesn't have to be concerned with collection mutability).

This is great!  So much simpler!

I'm wondering, when storing partitioned instances:

    instance.pro.append(class_mapper(Position)._instance(context,
        row, None))

...does this need to go through the instrumented append() method, or
can I add them through my own, non-list method?  (When these
mini-collection lists are backed by a collection class that manages
ordering attributes, it's super useful to be able to differentiate
between appends coming from database load vs. user code.)

-jek





[sqlalchemy] Re: splitting a relation into multiple properties

2007-04-17 Thread Michael Bayer


On Apr 17, 2007, at 3:19 PM, jason kirtland wrote:

> ...does this need to go through the instrumented append() method, or
> can I add them through my own, non-list method?  (When these
> mini-collection lists are backed by a collection class that manages
> ordering attributes, it's super useful to be able to differentiate
> between appends coming from database load vs. user code.)

yes - actually the way the eager loaders append is something like this:

    if isnew:
        appender = util.UniqueAppender(l.data)
        # store appender in the context
        selectcontext.attributes[(instance, self.key)] = appender

then they append() to the appender, which operates on the underlying
data.  it's better that way, so that events aren't firing off (the two
kinds of events currently are backrefs and session cascade operations
for save()/update()).

you would want to store your UniqueAppender under an attribute key of
your own and use it if one is available (or you can create it based
on the isnew flag, which says that this parent row is the first row
with the particular entity identity).
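
in the MyExt.populate_instance() from the attachment earlier in this
thread, that could look roughly like the sketch below - an
illustration only, assuming the 0.3-era util.UniqueAppender and
context.attributes internals; the '_pro_appender' key name is made
up, and only the 'pro' side is shown:

    from sqlalchemy import util  # 0.3-era internal helper

    def populate_instance(self, mapper, context, row, instance, identitykey, isnew):
        mapper.populate_instance(context, instance, row, identitykey, isnew)
        if isnew:
            # appending via the raw list data bypasses instrumentation,
            # so no backref or cascade events fire during the load
            appender = util.UniqueAppender(instance.pro.data)
            context.attributes[(instance, '_pro_appender')] = appender
        appender = context.attributes[(instance, '_pro_appender')]
        # translate the eagerly loaded row to t2's columns, as in MyExt
        decorator = context.attributes[id(mapper.props['_positions'].strategy)]
        row = decorator(row)
        if row[t2.c.type] == 'pro':
            appender.append(class_mapper(Position)._instance(context, row, None))
        return None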






[sqlalchemy] Unit Of work seems to be calling save/delete twice

2007-04-17 Thread chris e

I'm not sure why, but when I do a delete, SQLAlchemy seems to be
running the save/delete operation twice.  Could this be related to an
undetected circular dependency in the UOW?  When deleting, this causes
the following error because the database DELETE is done twice:

sqlalchemy.exceptions.ConcurrentModificationError: Updated rowcount 0
does not match number of objects updated

To fix this I added the following, but it is a serious hack and
probably needs to be addressed in the unit of work code - I'm not sure
where to look.  Code below.  I am on 0.3.5, but I tested against 0.3.6
as well and the bug appears to be present there too.


in orm.unitofwork.UnitOfWork:

    def _remove_deleted(self, obj):
        if hasattr(obj, '_instance_key'):
            # ADDED
            # ignore key errors if the item has already been deleted
            try:
                del self.identity_map[obj._instance_key]
            except KeyError:
                pass
        try:
            self.deleted.remove(obj)
        except KeyError:
            pass
        try:
            self.new.remove(obj)
        except KeyError:
            pass



in orm.mapper.Mapper:

    def delete_obj(self, objects, uowtransaction):
        """issue DELETE statements for a list of objects.

        this is called within the context of a UOWTransaction during a
        flush operation."""
        if self.__should_log_debug:
            self.__log_debug("delete_obj() start")

        connection = uowtransaction.transaction.connection(self)

        [self.extension.before_delete(self, connection, obj) for obj in objects]
        deleted_objects = util.Set()
        for table in self.tables.sort(reverse=True):
            if not self._has_pks(table):
                continue
            delete = []
            for obj in objects:
                # ADDED
                # 4/17/07
                # this prevents items from being deleted twice
                if hasattr(obj, '_has_been_deleted_'):
                    continue

                params = {}
                if not hasattr(obj, '_instance_key'):
                    continue
                else:
                    delete.append(params)
                for col in self.pks_by_table[table]:
                    params[col.key] = self.get_attr_by_column(obj, col)
                if self.version_id_col is not None:
                    params[self.version_id_col.key] = \
                        self.get_attr_by_column(obj, self.version_id_col)
                deleted_objects.add(obj)
            if len(delete):
                def comparator(a, b):
                    for col in self.pks_by_table[table]:
                        x = cmp(a[col.key], b[col.key])
                        if x != 0:
                            return x
                    return 0
                delete.sort(comparator)
                clause = sql.and_()
                for col in self.pks_by_table[table]:
                    clause.clauses.append(
                        col == sql.bindparam(col.key, type=col.type))
                if self.version_id_col is not None:
                    clause.clauses.append(
                        self.version_id_col == sql.bindparam(
                            self.version_id_col.key,
                            type=self.version_id_col.type))
                statement = table.delete(clause)
                c = connection.execute(statement, delete)
                if c.supports_sane_rowcount() and c.rowcount != len(delete):
                    raise exceptions.ConcurrentModificationError(
                        "Updated rowcount %d does not match number of "
                        "objects updated %d" % (c.cursor.rowcount, len(delete)))

        # ADDED
        # this prevents items from being deleted twice
        for obj in deleted_objects:
            obj._has_been_deleted_ = True
        [self.extension.after_delete(self, connection, obj)
            for obj in deleted_objects]





[sqlalchemy] Re: Unit Of work seems to be calling save/delete twice

2007-04-17 Thread Michael Bayer

send me a reproducing test case, or at least let me see your
mappings.  in particular, don't mix a mapping on a table that is also
used as the "secondary" table in a many-to-many relationship.
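
for illustration, the kind of setup to avoid looks roughly like this
(a made-up sketch - Parent, Child, Assoc and the table names are
hypothetical, not taken from your post):

    # 'assoc' is mapped directly AND used as the "secondary" table of a
    # many-to-many relation; the unit of work can then arrange to DELETE
    # the same rows twice, giving the rowcount mismatch described above
    mapper(Assoc, assoc)
    mapper(Child, children)
    mapper(Parent, parents, properties={
        'children': relation(Child, secondary=assoc)  # same table again
    })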

On Apr 17, 2007, at 5:47 PM, chris e wrote:

> I'm not sure why, but when I do a delete, SQLAlchemy seems to be
> running the save/delete operation twice.  Could this be related to an
> undetected circular dependency in the UOW?  When deleting, this causes
> the following error because the database DELETE is done twice:
>
> sqlalchemy.exceptions.ConcurrentModificationError: Updated rowcount 0
> does not match number of objects updated


