Re: [sqlalchemy] Lazy load collection from a batch of objects with one query

2014-11-04 Thread Kevin S
Yes, I would say that is an acceptable solution to me. My current attempt, 
which I think might work, is to grab all the primary keys off the original 
list of objects, then use those on an IN clause where I load only the 
primary key (by deferring everything else) of the parent and the desired 
collection. Then I can go through this second list and link the collections 
back to my original list based on primary key. This might not work all the 
time, but in our case all the Parent objects only have single column 
primary keys, so I think it will be ok.

So the basic strategy is this:
1) Do initial query for list of Parents
2) get list of primary keys from initialParents
3) query a second list of parents, using the primary keys on IN clause, 
with only primary key and collection loaded (I think I used add_entity to 
achieve this)
4) set the collections attribute of every initialParent that has a match in 
the secondParents list.

I am also curious about this 3rd party extension, in case it is more 
robust, what is it called?
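The four-step strategy can be sketched as follows. This is a minimal, hypothetical Parent/Child mapping (not the actual models from this thread), using modern `sqlalchemy.orm` import spellings; it uses `set_committed_value()` to attach each collection so that reading it back does not trigger a lazy load:

```python
from sqlalchemy import Column, ForeignKey, Integer, create_engine
from sqlalchemy.orm import Session, declarative_base, relationship
from sqlalchemy.orm.attributes import set_committed_value

Base = declarative_base()

class Parent(Base):
    __tablename__ = 'parent'
    id = Column(Integer, primary_key=True)
    children = relationship('Child')

class Child(Base):
    __tablename__ = 'child'
    id = Column(Integer, primary_key=True)
    parent_id = Column(Integer, ForeignKey('parent.id'))

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)
session = Session(engine)
session.add_all([
    Parent(id=1, children=[Child(), Child()]),
    Parent(id=2, children=[Child()]),
])
session.commit()

# 1) initial query for the list of Parents
parents = session.query(Parent).all()
# 2) collect their primary keys
pks = [p.id for p in parents]
# 3) one second query, pulling all children via an IN clause
children = session.query(Child).filter(Child.parent_id.in_(pks)).all()
grouped = {}
for child in children:
    grouped.setdefault(child.parent_id, []).append(child)
# 4) attach each collection to its parent; set_committed_value() marks the
#    attribute as loaded, so no lazy load fires when it is read back
for p in parents:
    set_committed_value(p, 'children', grouped.get(p.id, []))
```

As noted, this mapping-back step relies on single-column primary keys; composite keys would need a tuple lookup instead.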

On Monday, November 3, 2014 3:26:50 PM UTC-5, Michael Bayer wrote:

 Well to load multiple collections at once you need to give a list of 
 parent instances, were you hoping it would use an IN to achieve that or 
 otherwise what SQL are you looking for?   There is a 3rd party extension 
 which does some of this and I was hoping to someday implement the IN style 
 of loading...which also is a slight bit more feasible to do in a delayed 
 fashion though I don't know that the 3rd party extension does this.

 Sent from my iPhone

 On Nov 3, 2014, at 2:52 PM, Kevin S kevin...@gmail.com wrote:

 Ok, I think I understand some of the reasoning behind not wanting to 
 implement such a feature. I think this is fine, because I don't need the 
 magic that hibernate provides. 

 However, I'm still stuck with the current problem of how to do this kind 
 of collection populating with our antique sybase instance. The kind of sub 
 queries needed to do what subqueryload provides, even if I am able to 
 modify the dialect, are probably not possible in our database version. 
 However, I know the lazy loading works for a single entity. Is there 
 anything I could easily modify to have subqueryload (or some extension of 
 it) trigger a second query, instead of wrapping the original search with a 
 left outer join? I imagine if this were implemented behind the scenes as 
 two separate requests, then you wouldn't have the same kind of ambiguous 
 cases you described above.

 Anyway, I'm not asking that you implement something into SQLAlchemy, more 
 looking for guidance on a way to support this kind of query (possibly via 
 some manual extension of the API). Sure, I could just write a specific 
 query to do this, but I want an approach that utilizes the details of each 
 collection's relationship, since this paradigm will be common in our app. 
 Even if it's a bit ugly to do, I think we'd be fine with it, since we only 
 need to support sybase for another year. Then we can migrate our app to 
 Postgres.


 On Saturday, November 1, 2014 1:33:26 PM UTC-4, Michael Bayer wrote:


  On Nov 1, 2014, at 11:53 AM, Kevin S kevin...@gmail.com wrote: 
  
  I think I've seen this sort of functionality in Hibernate, where it 
 tries to optimize if you are going to access some lazy loaded property on a 
 collection of objects. It can load them in batches, as opposed to one query 
 for every object. 


 we offer the “subqueryload” feature.  However this load is not 
 “on-demand” when an attribute is accessed, it’s established up front as the 
 total collection of objects are loaded. 

 Hibernate’s feature of doing this on demand is not something I can get my 
 head around, really.It links the loading of a collection to how the 
 object was loaded in the first place, and to the query that loaded it.I 
 can hardly count the number of ambiguous edge cases that come to mind with 
 that.   What if query A loads objects X, Y, Z, query B loads objects P, Q, 
 Y, then I access P.collection, does Y.collection get loaded, and if so, 
 what if I had accessed X.collection instead?   If we say that query A 
 should stay with object Y, what about the subtle case where Y falls out of 
 scope in the user’s application, then is loaded with query B, now we have 
 Y.collection that may be loaded from A or may be loaded from B based on a 
 subtlety of memory management which will confuse users. It’s not just 
 that a query of X.collection or P.collection can trigger Y.collection - 
 queries A and B might have different options set up which can change how 
 related objects past Y.collection are loaded too.Overall, how do I 
 manage memory, if 1000 objects were loaded with 1000 queries I have to 
 carry around all 1000 query objects with each object?What if the object 
 is serialized?  What if the object was expired, does that expire its 
 connection to its original query? 


[sqlalchemy] Adjacency list + Abstract Base Class Inheritance used in relationship

2014-11-04 Thread delijati
Hello,

I posted my question on Stack Overflow, so as not to repeat myself:

https://stackoverflow.com/questions/26724897/adjacency-list-abstract-base-class-inheritance-used-in-relationship

Josip

-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at http://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.


Re: [sqlalchemy] Lazy load collection from a batch of objects with one query

2014-11-04 Thread Michael Bayer
I was thinking of sqlalchemy-utils: 
https://github.com/kvesteri/sqlalchemy-utils , as I recall Konsta showing me 
the loader he wrote, but now I’m not seeing it here.   I’ll keep poking 
around.

You might want to take a look at the example at 
https://bitbucket.org/zzzeek/sqlalchemy/wiki/UsageRecipes/DisjointEagerLoading, 
which illustrates how to use set_committed_value(), which you’ll want to use if 
you’re implementing loading.


 On Nov 4, 2014, at 7:19 AM, Kevin S kevinrst...@gmail.com wrote:
 

Re: [sqlalchemy] Adjacency list + Abstract Base Class Inheritance used in relationship

2014-11-04 Thread Michael Bayer
there is a lot lot lot going on here. The example isn’t working in 1.0 for 
different reasons, for example.

However let’s start with just the error you have, and to do that, let’s please 
just show the minimal amount of code to reproduce:

from sqlalchemy import *
from sqlalchemy.orm import *
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.ext.declarative import declared_attr, AbstractConcreteBase


Base = declarative_base()


class Mammut(Base):
    __tablename__ = 'mammut'

    id = Column(Integer, primary_key=True)
    nodes = relationship(
        'TreeNode',
        backref='mammut',
    )


class TreeNode(AbstractConcreteBase, Base):
    id = Column(Integer, primary_key=True)
    name = Column(String(50), nullable=False)
    depth = Column(Integer, default=0)
    data_type = Column(String(50))

    @declared_attr
    def mammut_id(cls):
        return Column(Integer, ForeignKey('mammut.id'))

    def __init__(self, name, value=None, parent=None):
        self.name = name
        self.parent = parent
        self.depth = 0
        self.value = value
        if self.parent:
            self.depth = self.parent.depth + 1


class IntTreeNode(TreeNode):
    value = Column(Integer)

    __tablename__ = 'int'
    __mapper_args__ = {'concrete': True, 'polymorphic_identity': 'int'}


class FloatTreeNode(TreeNode):
    value = Column(Float)
    miau = Column(String(50), default='zuff')

    __tablename__ = 'float'
    __mapper_args__ = {'concrete': True, 'polymorphic_identity': 'float'}


node = IntTreeNode('rootnode', value=2)

mut = Mammut()
mut.nodes.append(node)

The issue is that you can’t just do a single “backref” to concrete classes.  
You have to use the instructions at 
http://docs.sqlalchemy.org/en/rel_0_9/orm/inheritance.html#relationships-with-concrete-inheritance
to create each reference from IntTreeNode / FloatTreeNode etc. back.

This has been improved in 1.0.   We can say:

class Mammut(Base):
    __tablename__ = 'mammut'

    id = Column(Integer, primary_key=True)
    nodes = relationship(
        'TreeNode',
        back_populates='mammut',
    )


class TreeNode(AbstractConcreteBase, Base):
    id = Column(Integer, primary_key=True)
    name = Column(String(50), nullable=False)
    depth = Column(Integer, default=0)
    data_type = Column(String(50))

    @declared_attr
    def mammut_id(cls):
        return Column(Integer, ForeignKey('mammut.id'))

    @declared_attr
    def mammut(cls):
        return relationship(Mammut, back_populates='nodes')

    def __init__(self, name, value=None, parent=None):
        self.name = name
        self.parent = parent
        self.depth = 0
        self.value = value
        if self.parent:
            self.depth = self.parent.depth + 1

however in 0.9, this won’t work, and after some experimentation I don’t think 
it’s possible in 0.9 to have a backref pointing to an AbstractConcreteBase, 
sorry.

Also, the attempt to make a self-referential relationship from TreeNode to 
itself is also not possible in the way you are attempting.  There is no 
TreeNode table, so this would require distinct relationships and foreign keys 
on each concrete table, however I’m not getting that to work either.

I think you might not intend to use AbstractConcreteBase here in any case as it 
seems like you want there to be a base table (this creates one?).


I’ll try to look more later but overall there’s kind of too much going on here 
and concrete inheritance is not very easy to use, sorry.

 On Nov 4, 2014, at 7:28 AM, delijati delij...@gmail.com wrote:
 



Re: [sqlalchemy] Adjacency list + Abstract Base Class Inheritance used in relationship

2014-11-04 Thread Michael Bayer
OK, coffee has been applied.    This mapping can be done in 0.9 but only if you 
use classical mappings; AbstractConcreteBase and declarative aren’t ready yet.  
In 1.0, I made a lot of improvements (see 
http://docs.sqlalchemy.org/en/latest/changelog/migration_10.html#improvements-to-declarative-mixins-declared-attr-and-related-features)
that manage to get this to work using a fully declarative approach.  A minimal 
example is below.  In particular, you *don’t* want to have a “polymorphic_on” 
column added to any specific table when using concrete mappings, there’s no 
base table that’s shared.  The AbstractConcreteBase produces this 
“polymorphic_on” for you, and it’s only part of the “polymorphic union” query 
you see in the SQL output of the script:


from sqlalchemy import Column, String, Integer, create_engine, ForeignKey, Float
from sqlalchemy.orm import Session, relationship
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm.collections import attribute_mapped_collection
from sqlalchemy.ext.declarative import declared_attr, AbstractConcreteBase


Base = declarative_base()


class Mammut(Base):
    __tablename__ = 'mammut'

    id = Column(Integer, primary_key=True)
    nodes = relationship(
        'TreeNode',
        lazy='dynamic',
        back_populates='mammut',
    )


class TreeNode(AbstractConcreteBase, Base):
    id = Column(Integer, primary_key=True)
    name = Column(String)

    @declared_attr
    def __tablename__(cls):
        if cls.__name__ == 'TreeNode':
            return None
        else:
            return cls.__name__.lower()

    @declared_attr
    def __mapper_args__(cls):
        return {'polymorphic_identity': cls.__name__, 'concrete': True}

    @declared_attr
    def parent_id(cls):
        return Column(Integer, ForeignKey(cls.id))

    @declared_attr
    def mammut_id(cls):
        return Column(Integer, ForeignKey('mammut.id'))

    @declared_attr
    def mammut(cls):
        return relationship(Mammut, back_populates='nodes')

    @declared_attr
    def children(cls):
        return relationship(
            cls,
            back_populates='parent',
            collection_class=attribute_mapped_collection('name'),
        )

    @declared_attr
    def parent(cls):
        return relationship(
            cls, remote_side='%s.id' % cls.__name__,
            back_populates='children')


class IntTreeNode(TreeNode):
    value = Column(Integer)


class FloatTreeNode(TreeNode):
    value = Column(Float)
    miau = Column(String(50), default='zuff')


e = create_engine('sqlite://', echo=True)
Base.metadata.create_all(e)

session = Session(e)

root = IntTreeNode(name='root')
IntTreeNode(name='n1', parent=root)
n2 = IntTreeNode(name='n2', parent=root)
IntTreeNode(name='n2n1', parent=n2)

m1 = Mammut()
m1.nodes.append(n2)
m1.nodes.append(root)

session.add(root)
session.commit()

session.close()

root = session.query(TreeNode).filter_by(name='root').one()
print root.children


that polymorphic query at the bottom looks like the following, note that the 
“type”, which is our polymorphic_on, is a virtual column:

SELECT pjoin.id AS pjoin_id, pjoin.name AS pjoin_name, pjoin.value AS 
pjoin_value, pjoin.miau AS pjoin_miau, pjoin.mammut_id AS pjoin_mammut_id, 
pjoin.parent_id AS pjoin_parent_id, pjoin.type AS pjoin_type 
FROM (SELECT floattreenode.id AS id, floattreenode.name AS name, 
floattreenode.value AS value, floattreenode.miau AS miau, 
floattreenode.mammut_id AS mammut_id, floattreenode.parent_id AS parent_id, 
'FloatTreeNode' AS type 
FROM floattreenode UNION ALL SELECT inttreenode.id AS id, inttreenode.name AS 
name, inttreenode.value AS value, CAST(NULL AS VARCHAR(50)) AS miau, 
inttreenode.mammut_id AS mammut_id, inttreenode.parent_id AS parent_id, 
'IntTreeNode' AS type 
FROM inttreenode) AS pjoin 
WHERE pjoin.name = ?

 On Nov 4, 2014, at 10:33 AM, Michael Bayer mike...@zzzcomputing.com wrote:

Re: [sqlalchemy] Lazy load collection from a batch of objects with one query

2014-11-04 Thread Kevin S
Ah yes, perfect!

The set_committed_value was the last piece I needed, to prevent lazy 
loading while setting the new values from the IN clause query. I like that 
DisjointEagerLoading recipe, as it pretty much captures the same idea. 

Oh, and perhaps I should say it again in case anyone in the future reads 
this. This was to support batch attribute loading on an old Sybase 
database. Given the option, subqueryload is much preferred, and we will use 
that method once we can migrate to Postgres.


Thanks!

On Tuesday, November 4, 2014 9:54:59 AM UTC-5, Michael Bayer wrote:


[sqlalchemy] objects not necessarily pulled from session's identity map when they should (?)

2014-11-04 Thread Jonathan Vanasco
I've been going batty on this all morning.

I have a permissions check routine that repeatedly queries for a certain 
Foo2Bar table

class Foo2Bar(Base):
    __tablename__ = 'foo_2_bar'
    id_foo = Column(Integer, ForeignKey('foo.id'), primary_key=True)
    id_bar = Column(Integer, ForeignKey('bar.id'), primary_key=True)

print "Get Foo2Bar() 4x"
print id(s.query(Foo2Bar).get((1,2)))
print id(s.query(Foo2Bar).get((1,2)))
print id(s.query(Foo2Bar).get((1,2)))
print id(s.query(Foo2Bar).get((1,2)))

When I do this on a test harness, it works exactly like it should.  It only 
hits the database once.

When I do this in my app, it hits the database (postgres) unreliably.

I set up extensive logging and used a bunch of breakpoints with pdb.  
The session is the same at every point, and there is an object already in the 
identity map for the composite key -- but it appears to get overridden each 
time.

If I put the `get()` in a loop 40x, only the first one hits the database, but 
every prior request was pulled from the database.

does anyone have a clue what could cause this behavior?




Re: [sqlalchemy] objects not necessarily pulled from session's identity map when they should (?)

2014-11-04 Thread Claudio Freire
On Tue, Nov 4, 2014 at 3:15 PM, Jonathan Vanasco jvana...@gmail.com wrote:


The code as shown would likely exhibit that behavior with a weak
identity map. It would indeed be hard to predict when it happened,
since weakrefs are cleared on garbage collection cycles, and those
happen at hard-to-predict times.

You could rule that out, by storing the result of get() in some
variable somewhere for the duration of the test/routine. That should
keep it in the identity map long enough to serve your purposes.
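A small experiment along those lines (a hypothetical `Thing` model, modern `Session.get()` spelling; the statement counter is just instrumentation for the demo):

```python
import gc

from sqlalchemy import Column, Integer, create_engine, event
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class Thing(Base):
    __tablename__ = 'thing'
    id = Column(Integer, primary_key=True)

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)

# record every statement sent to the database
statements = []
@event.listens_for(engine, 'before_cursor_execute')
def _count(conn, cursor, statement, parameters, context, executemany):
    statements.append(statement)

session = Session(engine)
session.add(Thing(id=1))
session.commit()
session.expunge_all()  # start with an empty identity map
gc.collect()

# result discarded: with no strong reference, the instance can drop out
# of the weak-referencing identity map once it is garbage collected
session.get(Thing, 1)
gc.collect()
before = len(statements)
session.get(Thing, 1)                  # on CPython this SELECTs again
hit_db_again = len(statements) > before

# holding the result in a variable pins it in the identity map
obj = session.get(Thing, 1)
n = len(statements)
assert session.get(Thing, 1) is obj    # served from the map, no SQL
assert len(statements) == n
```

Whether the unreferenced instance is reclaimed immediately depends on the interpreter's garbage collector, which is exactly why the behavior in the original report looked unreliable.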



Re: [sqlalchemy] objects not necessarily pulled from session's identity map when they should (?)

2014-11-04 Thread Jonathan Vanasco
Thanks!

I didn't realize that objects can fall out of the session with scope, like 
normal Python objects.  I thought they were kept in the session for the 
lifetime of the session.  This was driving me crazy.

This is just a web request, so I'm now appending the result to 
`request.persistanceArray`.  Instantly fixed my problem.




[sqlalchemy] simple join to parent with group_by

2014-11-04 Thread nathan
I’m trying to get a simple join working to a parent table with group_by. I’ve 
tried some variations working from the ORM tutorial but keep running into 
exceptions or I get multiple queries.

I’m trying to get a single query that results in a collection of Client 
instances that’s joined to and grouped by the parent Affiliate. So an Affiliate 
has one or more Clients. And I’d like to have a report like:

Affiliate 1:
Client 1
Client 2
Affiliate 2:
Client 3
Client 4

Here’s the model:

class Affiliate(Base):
    __tablename__ = 'affiliate'
    id   = Column(Integer, primary_key=True)
    name = Column(Unicode(50), unique=True, nullable=False, index=True)

class Client(Base):
    __tablename__ = 'client'
    id          = Column(Integer, primary_key=True)
    affiliateid = Column(Integer, ForeignKey('affiliate.id'), nullable=False,
                         index=True)
    name        = Column(Unicode(50), unique=False, nullable=False, index=True)
    affiliate   = relationship('Affiliate', lazy='select')

I’ve tried variations on the code below, but here’s what I have now:

DBSession.\
    query(Client).\
    options(joinedload('affiliate')).\
    group_by(Affiliate).\
    order_by(Client.name).\
    all()

and the exception:

ProgrammingError: (ProgrammingError) invalid reference to FROM-clause entry for 
table "affiliate"
LINE 2: ...1 ON affiliate_1.id = client.affiliateid GROUP BY affiliate
 ^
HINT:  Perhaps you meant to reference the table alias "affiliate_1".
 'SELECT client.id AS client_id, client.affiliateid AS client_affiliateid, 
client.name AS client_name, affiliate_1.id AS affiliate_1_id, affiliate_1.name 
AS affiliate_1_name \nFROM client LEFT OUTER JOIN affiliate AS affiliate_1 ON 
affiliate_1.id = client.affiliateid GROUP BY affiliate.id, affiliate.name 
ORDER BY client.name' {}

The “HINT” is awesome, but unfortunately I’m not clueful enough to use it. :(

Thanks

-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at http://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.


Re: [sqlalchemy] simple join to parent with group_by

2014-11-04 Thread Michael Bayer
query.options(joinedload()) is not used for creating joins; you need to use 
query.join().

See 
http://docs.sqlalchemy.org/en/rel_0_9/faq.html#i-m-using-joinedload-or-lazy-false-to-create-a-join-outer-join-and-sqlalchemy-is-not-constructing-the-correct-query-when-i-try-to-add-a-where-order-by-limit-etc-which-relies-upon-the-outer-join
to start, and then 
http://docs.sqlalchemy.org/en/rel_0_9/orm/loading.html#the-zen-of-eager-loading
which attempts to explain the difference.
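For concreteness, here is an illustrative sketch of that advice applied to the models from the original post: an explicit query.join() supplies the affiliate table so it can be used in ORDER BY, contains_eager() tells the ORM to populate Client.affiliate from that same join, and the "report" grouping is done in Python with itertools.groupby rather than SQL GROUP BY. (Import locations follow modern SQLAlchemy; adjust for 0.9 as needed.)

```python
from itertools import groupby

from sqlalchemy import create_engine, Column, ForeignKey, Integer, Unicode
from sqlalchemy.orm import contains_eager, declarative_base, relationship, sessionmaker

Base = declarative_base()

class Affiliate(Base):
    __tablename__ = 'affiliate'
    id   = Column(Integer, primary_key=True)
    name = Column(Unicode(50), unique=True, nullable=False, index=True)

class Client(Base):
    __tablename__ = 'client'
    id          = Column(Integer, primary_key=True)
    affiliateid = Column(Integer, ForeignKey('affiliate.id'), nullable=False, index=True)
    name        = Column(Unicode(50), nullable=False, index=True)
    affiliate   = relationship('Affiliate', lazy='select')

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()

session.add_all([
    Affiliate(id=1, name='Affiliate 1'),
    Affiliate(id=2, name='Affiliate 2'),
    Client(affiliateid=1, name='Client 1'),
    Client(affiliateid=1, name='Client 2'),
    Client(affiliateid=2, name='Client 3'),
    Client(affiliateid=2, name='Client 4'),
])
session.commit()

# Explicit JOIN makes the affiliate table available to ORDER BY;
# contains_eager() populates Client.affiliate from that same JOIN
# instead of emitting a second (lazy) query per client.
clients = (
    session.query(Client)
    .join(Client.affiliate)
    .options(contains_eager(Client.affiliate))
    .order_by(Affiliate.name, Client.name)
    .all()
)

# groupby() requires the input already sorted by the grouping key,
# which the ORDER BY above guarantees.
report = {}
for affiliate, group in groupby(clients, key=lambda c: c.affiliate):
    report[affiliate.name] = [c.name for c in group]

print(report)
```

Because contains_eager() reuses the join already present in the query, every Client comes back with .affiliate populated from the single SELECT, and rendering the report triggers no further SQL.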




 On Nov 2, 2014, at 3:28 PM, nathan nathanma...@gmail.com wrote:



[sqlalchemy] behavior of multiple correlated subqueries different from 0.7.10 to 0.9.8

2014-11-04 Thread Meelap Shah
I am trying to perform a query that selects from multiple correlated 
subqueries. My code works with 0.7.10, but not with 0.9.8.


import sqlalchemy
from datetime import datetime
from sqlalchemy import (Column, DateTime, ForeignKey, Integer, String,
                        create_engine, func)
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker

Base = declarative_base()

class User(Base):
  __tablename__ = 'user'
  id = Column(Integer, primary_key=True)
  name = Column(String)

class Action(Base):
  __tablename__ = 'action'
  id = Column(Integer, primary_key=True)
  user_id = Column(Integer, ForeignKey(User.id))
  action = Column(String)
  date = Column(DateTime)

engine = create_engine('sqlite:///:memory:')
Base.metadata.create_all(bind=engine)
sess = sessionmaker(bind=engine)()

sess.add(User(id=1, name='Meelap'))
sess.flush()
sess.add(Action(id=1, user_id=1, action='login', date=datetime.now()))
sess.flush()

num_actions = sess.query(func.count()).filter(Action.user_id == User.id).\
    correlate(User).as_scalar()
first_action = sess.query(func.min(Action.date)).filter(Action.user_id ==
    User.id).correlate(User).as_scalar()
q1 = sess.query(User.name, num_actions, first_action)

print sqlalchemy.__version__
print
print str(q1)
print
print q1.all()


With 0.7.10, it works as I expect: the User table is dropped from 
the FROM clause of each subquery.

0.7.10

SELECT user.name AS user_name, (SELECT count(*) AS count_1 
FROM action 
WHERE action.user_id = user.id) AS anon_1, (SELECT min(action.date) AS 
min_1 
FROM action 
WHERE action.user_id = user.id) AS anon_2 
FROM user

[(u'Meelap', 1, datetime.datetime(2014, 11, 4, 19, 37, 15, 260873))]


With 0.9.8, str(query) is the same, with User dropped from the 
subqueries, but the traceback shows a different subquery being executed, 
one that selects from both Action and User.

0.9.8

SELECT user.name AS user_name, (SELECT count(*) AS count_1 
FROM action 
WHERE action.user_id = user.id) AS anon_1, (SELECT min(action.date) AS 
min_1 
FROM action 
WHERE action.user_id = user.id) AS anon_2 
FROM user

Traceback (most recent call last):
  File "sql.py", line 39, in <module>
    print q1.all()
  File "/home/ubuntu/.virtualenvs/env/lib/python2.7/site-packages/sqlalchemy/orm/query.py", line 2320, in all
    return list(self)
  File "/home/ubuntu/.virtualenvs/env/lib/python2.7/site-packages/sqlalchemy/orm/loading.py", line 76, in instances
    labels) for row in fetch]
  File "/home/ubuntu/.virtualenvs/env/lib/python2.7/site-packages/sqlalchemy/orm/query.py", line 3523, in proc
    return row[column]
  File "/home/ubuntu/.virtualenvs/env/lib/python2.7/site-packages/sqlalchemy/engine/result.py", line 331, in _key_fallback
    expression._string_or_unprintable(key))
sqlalchemy.exc.NoSuchColumnError: 'Could not locate column in row for 
column \'(SELECT count(*) AS count_1 \nFROM action, user \nWHERE 
action.user_id = user.id)\''



I searched the changelogs but haven't found anything that could explain 
this and I'd appreciate any help. Thanks!
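One workaround sometimes suggested for this kind of NoSuchColumnError is to give each correlated scalar subquery an explicit .label(), so the ORM locates the result column by name rather than by matching the rendered SQL text of the anonymous column. Here is an untested sketch of that idea against the models above (it uses .scalar_subquery(), the modern spelling of .as_scalar(), and modern import locations; whether labeling works around the 0.9.8 behavior specifically is an assumption):

```python
from datetime import datetime

from sqlalchemy import (Column, DateTime, ForeignKey, Integer, String,
                        create_engine, func)
from sqlalchemy.orm import declarative_base, sessionmaker

Base = declarative_base()

class User(Base):
    __tablename__ = 'user'
    id = Column(Integer, primary_key=True)
    name = Column(String)

class Action(Base):
    __tablename__ = 'action'
    id = Column(Integer, primary_key=True)
    user_id = Column(Integer, ForeignKey(User.id))
    action = Column(String)
    date = Column(DateTime)

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)
sess = sessionmaker(bind=engine)()

sess.add(User(id=1, name='Meelap'))
sess.add(Action(id=1, user_id=1, action='login', date=datetime(2014, 11, 4)))
sess.flush()

# Label each correlated scalar subquery: the result column then has a
# stable name ("num_actions", "first_action") that row lookup can use.
num_actions = (
    sess.query(func.count())
    .filter(Action.user_id == User.id)
    .correlate(User)
    .scalar_subquery()
    .label('num_actions')
)
first_action = (
    sess.query(func.min(Action.date))
    .filter(Action.user_id == User.id)
    .correlate(User)
    .scalar_subquery()
    .label('first_action')
)

rows = sess.query(User.name, num_actions, first_action).all()
print(rows)
```

With the labels in place the columns can also be addressed as row.num_actions and row.first_action, which is more robust than relying on the anonymous anon_1/anon_2 names.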
